Updates from: 02/01/2024 02:16:20
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
The default name of the **Change email** button in *selfAsserted.html* is **chan
[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
+- B2C users need to have an authentication method specified for self-service password reset. Select the B2C user, and in the left menu under **Manage**, select **Authentication methods**, then ensure **Authentication contact info** is set. B2C users created through a sign-up flow have this set by default. Users created through the Azure portal or the Graph API need to have it set for SSPR to work.
+
+## Self-service password reset (recommended)
+
+The new password reset experience is now part of the sign-up or sign-in policy. When the user selects the **Forgot your password?** link, they are immediately sent to the Forgot Password experience. Your application no longer needs to handle the [AADB2C90118 error code](#password-reset-policy-legacy), and you don't need a separate policy for password reset.
ai-services Blob Storage Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/blob-storage-search.md
Title: Configure your blob storage container for image retrieval and video search
+ Title: Configure your blob storage container for image retrieval
description: Configure your Azure storage account to get started with the **Search photos image retrieval** experience in Vision Studio. #
-# Configure your blob storage for image retrieval and video search in Vision Studio
+# Configure your blob storage for image retrieval in Vision Studio
-To get started with the **Search photos image retrieval** scenario in Vision Studio, you need to select or create a new Azure storage account. Your storage account can be in any region, but creating it in the same region as your Vision resource is more efficient and reduces cost.
+To get started with the **Search photos with image retrieval** scenario in Vision Studio, you need to select or create a new Azure storage account. Your storage account can be in any region, but creating it in the same region as your Vision resource is more efficient and reduces cost.
> [!IMPORTANT]
-> You need to create your storage account on the same Azure subscription as the Vision resource you're using in the **Search photos image retrieval** scenario as shown below.
-
+> You need to create your storage account on the same Azure subscription as the Vision resource you're using in the **Search photos with image retrieval** scenario.
+>
+> :::image type="content" source="../media/storage-instructions/subscription.png" alt-text="Screenshot of resource selection.":::
## Create a new storage account
In the Allowed Methods field, select the `GET` checkbox to allow an authenticate
:::image type="content" source="../media/storage-instructions/cors-rule.png" alt-text="Screenshot of completed CORS screen.":::
-This allows Vision Studio to access images and videos in your blob storage container to extract insights on your data.
-
-## Upload images and videos in Vision Studio
-
-In the **Try with your own video** or **Try with your own image** section in Vision Studio, select the storage account that you configured with the CORS rule. Select the container in which your images or videos are stored. If you don't have a container, you can create one and upload the images or videos from your local device. If you have updated the CORS rules on the storage account, refresh the Blob container or Video files on container sections.
-
+This allows Vision Studio to access images in your blob storage container to extract insights on your data.
+## Upload images in Vision Studio
+In the **Search photos with image retrieval** section in Vision Studio, select the storage account that you configured with the CORS rule. Select the container in which your images are stored. If you don't have a container, you can create one and upload the images from your local device. If you updated the CORS rules on the storage account, refresh the **Blob container** section.
ai-services Call Analyze Image 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-analyze-image-40.md
Last updated 08/01/2023-+ zone_pivot_groups: programming-languages-computer-vision-40
ai-services Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/image-retrieval.md
Previously updated : 01/19/2024 Last updated : 01/30/2024
The API call returns a **vector** JSON object, which defines the text string's c
Cosine similarity is a method for measuring the similarity of two vectors. In an image retrieval scenario, you'll compare the search query vector with each image's vector. Images that are above a certain threshold of similarity can then be returned as search results.
-The following example C# code calculates the cosine similarity between two vectors. It's up to you to decide what similarity threshold to use for returning images as search results.
+The following example code calculates the cosine similarity between two vectors. It's up to you to decide what similarity threshold to use for returning images as search results.
+
+#### [C#](#tab/csharp)
```csharp
public static float GetCosineSimilarity(float[] vector1, float[] vector2)
{
    float dot = 0, mag1 = 0, mag2 = 0;
    for (int i = 0; i < vector1.Length; i++)
    {
        dot += vector1[i] * vector2[i];
        mag1 += vector1[i] * vector1[i];
        mag2 += vector2[i] * vector2[i];
    }
    return dot / MathF.Sqrt(mag1 * mag2);
}
```
+#### [Python](#tab/python)
+
+```python
+import numpy as np
+
+def cosine_similarity(vector1, vector2):
+ return np.dot(vector1, vector2) / (np.linalg.norm(vector1) * np.linalg.norm(vector2))
+```
+
+## Next steps
+
+[Image retrieval concepts](../concept-image-retrieval.md)
ai-services Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/model-customization.md
Last updated 02/06/2023 -+ # Create a custom Image Analysis model (preview)
ai-services Image Analysis Client Library 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40.md
Last updated 01/24/2023 -+ zone_pivot_groups: programming-languages-computer-vision-40 keywords: Azure AI Vision, Azure AI Vision service
ai-services Install Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/install-sdk.md
Last updated 08/01/2023 -+ zone_pivot_groups: programming-languages-vision-40-sdk
ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/native-document-support/managed-identities.md
+
+ Title: Managed identities for storage blobs
+description: Create managed identities for containers and blobs with Azure portal.
+Last updated : 01/31/2024
+# Managed identities for Language resources
+
+Managed identities for Azure resources are service principals that create a Microsoft Entra identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to storage data and replace the requirement for you to include shared access signature tokens (SAS) with your [source and target container URLs](use-native-documents.md#create-azure-blob-storage-containers).
+
+ :::image type="content" source="media/managed-identity-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
+
+* You can use managed identities to grant access to any resource that supports Microsoft Entra authentication, including your own applications.
+
+* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (`Azure RBAC`)](/azure/role-based-access-control/overview).
+
+* There's no added cost to use managed identities in Azure.
+
+> [!IMPORTANT]
+>
+> * When using managed identities, don't include a SAS token URL with your HTTP requests; your requests will fail. Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your [source and target container URLs](use-native-documents.md#create-azure-blob-storage-containers).
+>
+> * To use managed identities for Language operations, you must [create your Language resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in a specific geographic Azure region such as **East US**. If your Language resource region is set to **Global**, then you can't use managed identity authentication. You can, however, still use [Shared Access Signature tokens (SAS)](shared-access-signatures.md).
+>
+
+## Prerequisites
+
+To get started, you need the following resources:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+
+* A [**single-service Azure AI Language**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) resource created in a regional location.
+
+* A brief understanding of [**Azure role-based access control (`Azure RBAC`)**](/azure/role-based-access-control/role-assignments-portal) using the Azure portal.
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Language resource. You also need to create containers to store and organize your blob data within your storage account.
+
+* **If your storage account is behind a firewall, you must enable the following configuration**:
+ 1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+ 1. Select your Storage account.
+ 1. In the **Security + networking** group in the left pane, select **Networking**.
+ 1. In the **Firewalls and virtual networks** tab, select **Enabled from selected virtual networks and IP addresses**.
+
+ :::image type="content" source="media/firewalls-and-virtual-networks.png" alt-text="Screenshot that shows the elected networks radio button selected.":::
+
+ 1. Deselect all check boxes.
+ 1. Make sure **Microsoft network routing** is selected.
+ 1. Under the **Resource instances** section, select **Microsoft.CognitiveServices/accounts** as the resource type and select your Language resource as the instance name.
+ 1. Make certain that the **Allow Azure services on the trusted services list to access this storage account** box is checked. For more information about managing exceptions, _see_ [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security?tabs=azure-portal#manage-exceptions).
+
+ :::image type="content" source="media/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot that shows the allow trusted services checkbox in the Azure portal.":::
+
+ 1. Select **Save**.
+
+ > [!NOTE]
+ > It may take up to 5 minutes for the network changes to propagate.
+
+ Although network access is now permitted, your Language resource still can't access the data in your storage account. You need to [create a managed identity](#managed-identity-assignments) for your Language resource and [assign it a specific access role](#enable-a-system-assigned-managed-identity).
+
+## Managed identity assignments
+
+There are two types of managed identities: **system-assigned** and **user-assigned**. Currently, native document support uses a **system-assigned managed identity**:
+
+* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
+
+* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity is deleted as well.
+
+In the following steps, we enable a system-assigned managed identity and grant your Language resource limited access to your Azure Blob Storage account.
+
+## Enable a system-assigned managed identity
+
+You must grant the Language resource access to your storage account before it can create, read, or delete blobs. Once you've enabled a system-assigned managed identity on the Language resource, you can use Azure role-based access control (`Azure RBAC`) to give Language features access to your Azure storage containers.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select your Language resource.
+1. In the **Resource Management** group in the left pane, select **Identity**. If your resource was created in the global region, the **Identity** tab isn't visible. You can still use [Shared Access Signature tokens (SAS)](shared-access-signatures.md) for authentication.
+1. Within the **System assigned** tab, turn on the **Status** toggle.
+
+ :::image type="content" source="media/resource-management-identity-tab.png" alt-text="Screenshot that shows the resource management identity tab in the Azure portal.":::
+
+ > [!IMPORTANT]
+ > User-assigned managed identities don't meet the requirements for the batch processing storage account scenario. Be sure to enable a system-assigned managed identity.
+
+1. Select **Save**.
+
+## Grant storage account access for your Language resource
+
+> [!IMPORTANT]
+> To assign a system-assigned managed identity role, you need **Microsoft.Authorization/roleAssignments/write** permissions, such as [**Owner**](/azure/role-based-access-control/built-in-roles#owner) or [**User Access Administrator**](/azure/role-based-access-control/built-in-roles#user-access-administrator) at the storage scope for the storage resource.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select your Language resource.
+1. In the **Resource Management** group in the left pane, select **Identity**.
+1. Under **Permissions** select **Azure role assignments**:
+
+ :::image type="content" source="media/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot that shows the enable system-assigned managed identity in Azure portal.":::
+
+1. On the Azure role assignments page that opened, choose your subscription from the drop-down menu then select **+ Add role assignment**.
+
+ :::image type="content" source="media/azure-role-assignments-page-portal.png" alt-text="Screenshot that shows the Azure role assignments page in the Azure portal.":::
+
+1. Next, assign a **Storage Blob Data Contributor** role to your Language service resource. The **Storage Blob Data Contributor** role gives Language (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
+
+ | Field | Value|
+ ||--|
+ |**Scope**| **_Storage_**.|
+ |**Subscription**| **_The subscription associated with your storage resource_**.|
+ |**Resource**| **_The name of your storage resource_**.|
+ |**Role** | **_Storage Blob Data Contributor_**.|
+
+ :::image type="content" source="media/add-role-assignment-window.png" alt-text="Screenshot that shows the role assignments page in the Azure portal.":::
+
+1. After the _Added Role assignment_ confirmation message appears, refresh the page to see the added role assignment.
+
+ :::image type="content" source="media/add-role-assignment-confirmation.png" alt-text="Screenshot that shows the added role assignment confirmation pop-up message.":::
+
+1. If you don't see the new role assignment right away, wait and try refreshing the page again. When you assign or remove role assignments, it can take up to 30 minutes for changes to take effect.
+
+## HTTP requests
+
+* A native document Language service operation request is submitted to your Language service endpoint via a POST request.
+
+* With managed identity and `Azure RBAC`, you no longer need to include SAS URLs.
+
+* If successful, the POST method returns a `202 Accepted` response code and the service creates a request.
+
+* The processed documents appear in your target container.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with native document support](use-native-documents.md#include-native-documents-with-an-http-request)
ai-services Shared Access Signatures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/native-document-support/shared-access-signatures.md
+
+ Title: Shared access signature (SAS) tokens for storage blobs
+description: Create shared access signature tokens (SAS) for containers and blobs with Azure portal.
+Last updated : 01/31/2024
+# SAS tokens for your storage containers
+
+Learn to create user delegation shared access signature (SAS) tokens by using the Azure portal. User delegation SAS tokens are secured with Microsoft Entra credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
++
+>[!TIP]
+>
+> [Role-based access control (managed identities)](../concepts/role-based-access-control.md) provides an alternate method for granting access to your storage data without the need to include SAS tokens with your HTTP requests.
+>
+> * You can use managed identities to grant access to any resource that supports Microsoft Entra authentication, including your own applications.
+> * Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your source and target URLs.
+> * There's no added cost to use managed identities in Azure.
+
+At a high level, here's how SAS tokens work:
+
+* Your application submits the SAS token to Azure Storage as part of a REST API request.
+
+* If the storage service verifies that the SAS is valid, the request is authorized.
+
+* If the SAS token is deemed invalid, the request is declined, and the error code 403 (Forbidden) is returned.
+
+Azure Blob Storage offers three resource types:
+
+* **Storage** accounts provide a unique namespace in Azure for your data.
+* **Data storage containers** are located in storage accounts and organize sets of blobs (files, text, or images).
+* **Blobs** are located in containers and store text and binary data such as files, text, and images.
+
+> [!IMPORTANT]
+>
+> * SAS tokens are used to grant permissions to storage resources, and should be protected in the same manner as an account key.
+>
+> * Operations that use SAS tokens should be performed only over an HTTPS connection, and SAS URIs should only be distributed on a secure connection such as HTTPS.
+
+## Prerequisites
+
+To get started, you need the following resources:
+
+* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+
+* An [Azure AI Language](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) resource.
+
+* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to create containers to store and organize your files within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
+
+ * [Create a storage account](../../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
+ * [Create a container](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and files) in the **New Container** window.
+
+## Create SAS tokens in the Azure portal
+
+<!-- markdownlint-disable MD024 -->
+
+Go to the [Azure portal](https://portal.azure.com/#home) and navigate to your container or a specific file as follows and continue with these steps:
+
+Workflow: **Your storage account** → **containers** → **your container** → **your file**
+
+1. Right-click the container or file and select **Generate SAS** from the drop-down menu.
+
+1. Select **Signing method** → **User delegation key**.
+
+1. Define **Permissions** by checking and/or clearing the appropriate check box:
+
+ * Your **source** file must designate **read** and **list** access.
+
+ * Your **target** file must designate **write** and **list** access.
+
+1. Specify the signed key **Start** and **Expiry** times.
+
+ * When you create a shared access signature (SAS), the default duration is 48 hours. After 48 hours, you'll need to create a new token.
+ * Consider setting a longer duration period for the time you're using your storage account for Language Service operations.
+ * The value of the expiry time is determined by whether you're using an **Account key** or **User delegation key** **Signing method**:
+ * **Account key**: No maximum time limit is imposed; however, best practice recommends that you configure an expiration policy to limit the interval and minimize compromise. *See* [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
+ * **User delegation key**: The expiry time can be a maximum of seven days from the creation of the SAS token. The SAS is invalid after the user delegation key expires, so a SAS with an expiry time of more than seven days is still only valid for seven days. For more information, *see* [Use Microsoft Entra credentials to secure a SAS](/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli#use-azure-ad-credentials-to-secure-a-sas).
+
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or range must be public, not private. For more information, *see* [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
+
+1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS. The default value is HTTPS.
+
+1. Review then select **Generate SAS token and URL**.
+
+1. The **Blob SAS token** query string and **Blob SAS URL** appear in the lower area of the window.
+
+1. **Copy and paste the Blob SAS token and URL values into a secure location. They're displayed only once and can't be retrieved after the window is closed.**
+
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
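The seven-day user delegation cap described above can be sketched as a small helper. This is illustrative only; the function name and structure are hypothetical, not part of any Azure SDK:

```python
from datetime import datetime, timedelta

# A user delegation SAS is valid for at most seven days from creation.
MAX_USER_DELEGATION = timedelta(days=7)

def effective_expiry(start: datetime, requested_expiry: datetime) -> datetime:
    """Clamp a requested SAS expiry to the user delegation key's seven-day maximum."""
    return min(requested_expiry, start + MAX_USER_DELEGATION)
```

For example, requesting a ten-day expiry still yields a token that stops working seven days after the start time.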
+
+### Use your SAS URL to grant access
+
+The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the client accesses the resources.
+
+You can include your SAS URL with REST API requests in two ways:
+
+* Use the **SAS URL** as your sourceURL and targetURL values.
+
+* Append the **SAS query string** to your existing sourceURL and targetURL values.
+
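As a sketch of the second option, this hypothetical Python helper appends a SAS query string to an existing source or target URL (the function is illustrative, not part of any SDK):

```python
def append_sas(url: str, sas_token: str) -> str:
    """Append a SAS token query string to a container or blob URL."""
    token = sas_token.lstrip("?")
    # Use '&' if the URL already carries query parameters, otherwise '?'
    separator = "&" if "?" in url else "?"
    return f"{url}{separator}{token}"
```

The portal's **Blob SAS token** value can be passed as `sas_token` with or without its leading `?`.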
+Here's a sample REST API request:
+
+```json
+{
+ "analysisInput": {
+ "documents": [
+ {
+ "id": "doc_0",
+ "language": "en",
+ "source": {
+ "location": "myaccount.blob.core.windows.net/sample-input/input.pdf?{SAS-Token}"
+ },
+ "target": {
+ "location": "https://myaccount.blob.core.windows.net/sample-output?{SAS-Token}"
+ }
+ }
+ ]
+ }
+}
+```
+
+That's it! You learned how to create SAS tokens to authorize how clients access your data.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about native document support](use-native-documents.md "Learn how to process and analyze native documents.")
+>
+> [Learn more about granting access with SAS](/azure/storage/common/storage-sas-overview "Grant limited access to Azure Storage resources using shared access SAS.")
+>
ai-services Use Native Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/native-document-support/use-native-documents.md
+
+ Title: Native document support for Azure AI Language (preview)
+
+description: How to use native documents with Azure AI Language's Personally Identifiable Information and Summarization capabilities.
+Last updated : 01/31/2024
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD051 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD049 -->
+<!-- markdownlint-disable MD001 -->
+
+# Native document support for Azure AI Language (preview)
+
+> [!IMPORTANT]
+>
+> * Native document support is a gated preview. To request access to the native document support feature, complete and submit the [**Apply for access to Language Service previews**](https://aka.ms/gating-native-document) form.
+>
+> * Azure AI Language public preview releases provide early access to features that are in active development.
+> * Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
+
+Azure AI Language is a cloud-based service that applies Natural Language Processing (NLP) features to text-based data. The native document support capability enables you to send API requests asynchronously, using an HTTP POST request body to send your data and HTTP GET request query string to retrieve the processed data.
+
+A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for the following capabilities:
+
+* [Personally Identifiable Information (PII)](../personally-identifiable-information/overview.md). The PII detection feature can identify, categorize, and redact sensitive information in unstructured text. The `PiiEntityRecognition` API supports native document processing.
+
+* [Document summarization](../summarization/overview.md). Document summarization uses natural language processing to generate extractive (salient sentence extraction) or abstractive (contextual word extraction) summaries for documents. Both `AbstractiveSummarization` and `ExtractiveSummarization` APIs support native document processing.
+
+## Development options
+
+Native document support can be integrated into your applications using the [Azure AI Language REST API](/rest/api/language/). The REST API is a language agnostic interface that enables you to create HTTP requests for text-based data analysis.
+
+|Service|Description|API Reference (Latest GA version)|API Reference (Latest Preview version)|
+|--|--|--|--|
| Text analysis - runtime | &bullet; Runtime prediction calls to extract **Personally Identifiable Information (PII)**.</br>&bullet; Custom redaction for native documents is supported in the latest **2023-04-15-preview**.|[`2023-04-01`](/rest/api/language/2023-04-01/text-analysis-runtime)|[`2023-04-15-preview`](/rest/api/language/2023-04-15-preview/text-analysis-runtime)|
+| Summarization for documents - runtime|Runtime prediction calls to **query summarization for documents models**.|[`2023-04-01`](/rest/api/language/2023-04-01/text-analysis-runtime/submit-job)|[`2023-04-15-preview`](/rest/api/language/2023-04-15-preview/text-analysis-runtime)|
+
+## Supported document formats
+
+Applications use native file formats to create, save, or open native documents. Currently, the **PII** and **Document summarization** capabilities support the following native document formats:
+
+|File type|File extension|Description|
+||--|--|
+|Text| `.txt`|An unformatted text document.|
+|Adobe PDF| `.pdf`|A portable document file formatted document.|
+|Microsoft Word| `.docx`|A Microsoft Word document file.|
+
+## Input guidelines
+
+***Supported file formats***
+
+|Type|Support and limitations|
+|||
+|**PDFs**| Fully scanned PDFs aren't supported.|
+|**Text within images**| Digital images with embedded text aren't supported.|
+|**Digital tables**| Tables in scanned documents aren't supported.|
+
+***Document Size***
+
+|Attribute|Input limit|
+|||
+|**Total number of documents per request** |**≤ 20**|
+|**Total content size per request**| **≤ 1 MB**|
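As an illustrative pre-check, you could validate a batch against these limits before submitting a request. The constants mirror the table above; the helper itself is hypothetical and not part of any SDK (the 1 MB limit is assumed to be binary megabytes):

```python
MAX_DOCUMENTS = 20
MAX_TOTAL_BYTES = 1 * 1024 * 1024  # 1 MB, assuming binary megabytes

def validate_batch(document_sizes: list[int]) -> None:
    """Raise ValueError if a batch exceeds the documented request limits."""
    if len(document_sizes) > MAX_DOCUMENTS:
        raise ValueError(f"Too many documents: {len(document_sizes)} > {MAX_DOCUMENTS}")
    total = sum(document_sizes)
    if total > MAX_TOTAL_BYTES:
        raise ValueError(f"Total content too large: {total} bytes > {MAX_TOTAL_BYTES}")
```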
+
+## Include native documents with an HTTP request
+
+***Let's get started:***
+
+* For this project, we use the cURL command line tool to make REST API calls.
+
+ > [!NOTE]
+ > The cURL package is preinstalled on most Windows 10 and Windows 11 builds and on most macOS and Linux distributions. You can check the package version with the following commands:
+ > Windows: `curl.exe -V`
+ > macOS: `curl -V`
+ > Linux: `curl --version`
+
+* If cURL isn't installed, here are installation links for your platform:
+
+ * [Windows](https://curl.haxx.se/windows/).
+ * [Mac or Linux](https://learn2torials.com/thread/how-to-install-curl-on-mac-or-linux-(ubuntu)-or-windows).
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
+
+ * **Source container**. This container is where you upload your native files for analysis (required).
+ * **Target container**. This container is where your analyzed files are stored (required).
+
+* A [**single-service Language resource**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) (**not** a multi-service Azure AI services resource):
+
+ **Complete the Language resource project and instance details fields as follows:**
+
+ 1. **Subscription**. Select one of your available Azure subscriptions.
+
+ 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
+
+ 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity (RBAC)](../concepts/role-based-access-control.md) for authentication, choose a **geographic** region like **West US**.
+
+ 1. **Name**. Enter the name you chose for your resource. The name you choose must be unique within Azure.
+
+ 1. **Pricing tier**. You can use the free pricing tier (`Free F0`) to try the service, and upgrade later to a paid tier for production.
+
+ 1. Select **Review + Create**.
+
+ 1. Review the service terms and select **Create** to deploy your resource.
+
+ 1. After your resource successfully deploys, select **Go to resource**.
+
+### Retrieve your key and language service endpoint
+
+Requests to the Language service require a read-only key and custom endpoint to authenticate access.
+
+1. If you created a new resource, after it deploys, select **Go to resource**. If you have an existing language service resource, navigate directly to your resource page.
+
+1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
+
+1. You can copy and paste your **`key`** and your **`language service instance endpoint`** into the code samples to authenticate your request to the Language service. Only one key is necessary to make an API call.
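As a sketch, code samples authenticate by sending the key in the `Ocp-Apim-Subscription-Key` request header. This hypothetical helper (not part of any SDK) builds the headers for a JSON request:

```python
def build_auth_headers(key: str) -> dict:
    """Headers for an authenticated JSON request to a Language resource endpoint."""
    return {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": key,
    }
```

The returned dictionary can then be attached to any HTTP request sent to your language service instance endpoint.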
+
+## Create Azure Blob Storage containers
+
+[**Create containers**](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
+
+* **Source container**. This container is where you upload your native files for analysis (required).
+* **Target container**. This container is where your analyzed files are stored (required).
+
+### **Authentication**
+
+Your Language resource must be granted access to your storage account before it can create, read, or delete blobs. There are two primary methods you can use to grant access to your storage data:
+
+* [**Shared access signature (SAS) tokens**](shared-access-signatures.md). User delegation SAS tokens are secured with Microsoft Entra credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
+
+* [**Managed identity role-based access control (RBAC)**](managed-identities.md). Managed identities for Azure resources are service principals that create a Microsoft Entra identity and specific permissions for Azure managed resources.
+
+For this project, we authenticate access to the `source location` and `target location` URLs with Shared Access Signature (SAS) tokens appended as query strings. Each token is assigned to a specific blob (file).
++
+* Your **source** container or blob must designate **read** and **list** access.
+* Your **target** container or blob must designate **write** and **list** access.
+
+> [!TIP]
+>
+> Since we're processing a single file (blob), we recommend that you **delegate SAS access at the blob level**.
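As a rough sketch of what the assembled `source location` and `target location` values look like (the account, container, blob, and token strings below are placeholders, not real credentials), a SAS URL is simply the blob URL with the token appended as a query string:

```python
# Sketch: build the SAS-appended URLs used as "location" values in the request body.
# All names and tokens here are placeholders.

def build_sas_url(account: str, container: str, blob: str, sas_token: str) -> str:
    """Append a SAS token (query string) to a blob URL."""
    base = f"https://{account}.blob.core.windows.net/{container}/{blob}"
    return f"{base}?{sas_token.lstrip('?')}"

source_url = build_sas_url("myaccount", "source", "input.docx", "sp=rl&sig=abc123")
target_url = build_sas_url("myaccount", "target", "output", "?sp=wl&sig=def456")

print(source_url)  # https://myaccount.blob.core.windows.net/source/input.docx?sp=rl&sig=abc123
```

In practice you generate the token itself in the Azure portal or with a storage SDK/CLI; this sketch only shows how the token and blob URL combine.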
+
+## Request headers and parameters
+
+|Parameter |Description |
+|--|--|
+|`-X POST <endpoint>` | Specifies your Language resource endpoint for accessing the API. |
+|`--header Content-Type: application/json` | The content type for sending JSON data. |
+|`--header "Ocp-Apim-Subscription-Key: <key>"` | Specifies the Language resource key for accessing the API. |
+|`--data` | The JSON file containing the data you want to pass with your request. |
+
+The following cURL commands are executed from a BASH shell. Edit these commands with your own resource name, resource key, and JSON values. Try analyzing native documents by selecting the `Personally Identifiable Information (PII)` or `Document Summarization` code sample project:
+
+### [Personally Identifiable Information (PII)](#tab/pii)
+
+### PII Sample document
+
+For this quickstart, you need a **source document** uploaded to your **source container**. You can download our [Microsoft Word sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Language/native-document-pii.docx) or [Adobe PDF](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl//Language/native-document-pii.pdf) for this project. The source language is English.
+
+### Build the POST request
+
+1. Using your preferred editor or IDE, create a new directory for your app named `native-document`.
+
+1. Create a new JSON file called **pii-detection.json** in your **native-document** directory.
+
+1. Copy and paste the following Personally Identifiable Information (PII) **request sample** into your `pii-detection.json` file. Replace **`{your-source-container-with-SAS-URL}`** and **`{your-target-container-with-SAS-URL}`** with values from your Azure portal storage account containers:
+
+ ***Request sample***
+
+```json
+{
+ "displayName": "Extracting Location & US Region",
+ "analysisInput": {
+ "documents": [
+ {
+ "language": "en-US",
+ "id": "Output-excel-file",
+ "source": {
+ "location": "{your-source-container-with-SAS-URL}"
+ },
+ "target": {
+ "location": "{your-target-container-with-SAS-URL}"
+ }
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "kind": "PiiEntityRecognition",
+ "parameters":{
+        "excludePiiCategories" : ["PersonType", "Category2", "Category3"],
+ "redactionPolicy": "UseEntityTypeName"
+ }
+ }
+ ]
+}
+```
+
+### Run the POST request
+
+1. Here's the preliminary structure of the POST request:
+
+ ```bash
+    POST {your-language-resource-endpoint}/language/analyze-documents/jobs?api-version=2023-11-15-preview
+ ```
+
+1. Before you run the **POST** request, replace `{your-language-resource-endpoint}` and `{your-key}` with the values from your Azure portal Language service instance.
+
+ > [!IMPORTANT]
+ > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](/azure/key-vault/general/overview). For more information, *see* Azure AI services [security](/azure/ai-services/security-features).
+
+ ***PowerShell***
+
+ ```powershell
+ cmd /c curl "{your-language-resource-endpoint}/language/analyze-documents/jobs?api-version=2023-11-15-preview" -i -X POST --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}" --data "@pii-detection.json"
+ ```
+
+ ***command prompt / terminal***
+
+ ```bash
+ curl -v -X POST "{your-language-resource-endpoint}/language/analyze-documents/jobs?api-version=2023-11-15-preview" --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}" --data "@pii-detection.json"
+ ```
+
+1. Here's a sample response:
+
+ ```http
+ HTTP/1.1 202 Accepted
+ Content-Length: 0
+ operation-location: https://{your-language-resource-endpoint}/language/analyze-documents/jobs/f1cc29ff-9738-42ea-afa5-98d2d3cabf94?api-version=2023-11-15-preview
+ apim-request-id: e7d6fa0c-0efd-416a-8b1e-1cd9287f5f81
+ x-ms-region: West US 2
+ Date: Thu, 25 Jan 2024 15:12:32 GMT
+ ```
+
+### POST response (jobId)
+
+You receive a 202 (Success) response that includes a read-only Operation-Location header. The value of this header contains a **jobId** that can be queried to get the status of the asynchronous operation and retrieve the results using a **GET** request:
+
+ :::image type="content" source="media/operation-location-result-id.png" alt-text="Screenshot showing the operation-location value in the POST response.":::
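The **jobId** is the last path segment of the `operation-location` URL, so it can be pulled out with plain string parsing. A minimal sketch, using the sample value from the response above:

```python
from urllib.parse import urlparse

# Sample operation-location value from the POST response shown earlier.
operation_location = (
    "https://my-resource.cognitiveservices.azure.com/language/analyze-documents/"
    "jobs/f1cc29ff-9738-42ea-afa5-98d2d3cabf94?api-version=2023-11-15-preview"
)

# The jobId is the last path segment of the URL, before the query string.
job_id = urlparse(operation_location).path.rstrip("/").split("/")[-1]
print(job_id)  # f1cc29ff-9738-42ea-afa5-98d2d3cabf94
```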
+
+### Get analyze results (GET request)
+
+1. After your successful **POST** request, poll the operation-location header returned in the POST request to view the processed data.
+
+1. Here's the preliminary structure of the **GET** request:
+
+ ```bash
+    GET {your-language-resource-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview
+ ```
+
+1. Before you run the command, make these changes:
+
+ * Replace {**jobId**} with the Operation-Location header from the POST response.
+
+ * Replace {**your-language-resource-endpoint**} and {**your-key**} with the values from your Language service instance in the Azure portal.
+
+### Get request
+
+```powershell
+ cmd /c curl "{your-language-resource-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview" -i -X GET --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}"
+```
+
+```bash
+ curl -v -X GET "{your-language-resource-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview" --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}"
+```
+
+#### Examine the response
+
+You receive a 200 (Success) response with JSON output. The **status** field indicates the result of the operation. If the operation isn't complete, the value of **status** is "running" or "notStarted", and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
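The polling guidance above can be sketched as a small loop. The status fetcher here is a stand-in for the actual **GET** request; it is not a service SDK call:

```python
import time

def poll_job(get_status, interval_seconds=1.0, max_attempts=60):
    """Poll until the job status leaves 'notStarted'/'running', waiting between calls."""
    for _ in range(max_attempts):
        status = get_status()
        if status not in ("notStarted", "running"):
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("job did not finish in time")

# Stand-in for the real GET request: yields canned statuses for illustration.
responses = iter(["notStarted", "running", "succeeded"])
final = poll_job(lambda: next(responses), interval_seconds=0.01)
print(final)  # succeeded
```

In a real script, `get_status` would issue the GET request above and return the `status` field from the JSON response.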
+
+#### Sample response
+
+```json
+{
+ "jobId": "f1cc29ff-9738-42ea-afa5-98d2d3cabf94",
+ "lastUpdatedDateTime": "2024-01-24T13:17:58Z",
+ "createdDateTime": "2024-01-24T13:17:47Z",
+ "expirationDateTime": "2024-01-25T13:17:47Z",
+ "status": "succeeded",
+ "errors": [],
+ "tasks": {
+ "completed": 1,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 1,
+ "items": [
+ {
+ "kind": "PiiEntityRecognitionLROResults",
+ "lastUpdateDateTime": "2024-01-24T13:17:58.33934Z",
+ "status": "succeeded",
+ "results": {
+ "documents": [
+ {
+ "id": "doc_0",
+ "source": {
+ "kind": "AzureBlob",
+ "location": "https://myaccount.blob.core.windows.net/sample-input/input.pdf"
+ },
+ "targets": [
+ {
+ "kind": "AzureBlob",
+ "location": "https://myaccount.blob.core.windows.net/sample-output/df6611a3-fe74-44f8-b8d4-58ac7491cb13/PiiEntityRecognition-0001/input.result.json"
+ },
+ {
+ "kind": "AzureBlob",
+ "location": "https://myaccount.blob.core.windows.net/sample-output/df6611a3-fe74-44f8-b8d4-58ac7491cb13/PiiEntityRecognition-0001/input.docx"
+ }
+ ],
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2023-09-01"
+ }
+ }
+ ]
+ }
+}
+```
+
+### [Document Summarization](#tab/summarization)
+
+### Summarization sample document
+
+For this project, you need a **source document** uploaded to your **source container**. You can download our [Microsoft Word sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Language/native-document-summarization.docx) or [Adobe PDF](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Language/native-document-summarization.pdf) for this quickstart. The source language is English.
+
+### Build the POST request
+
+1. Using your preferred editor or IDE, create a new directory for your app named `native-document`.
+1. Create a new json file called **document-summarization.json** in your **native-document** directory.
+
+1. Copy and paste the Document Summarization **request sample** into your `document-summarization.json` file. Replace **`{your-source-container-SAS-URL}`** and **`{your-target-container-SAS-URL}`** with values from your Azure portal Storage account containers instance:
+
+    ***Request sample***
+
+ ```json
+    {
+      "analysisInput": {
+        "documents": [
+          {
+            "source": {
+              "location": "{your-source-container-SAS-URL}"
+            },
+            "target": {
+              "location": "{your-target-container-SAS-URL}"
+            }
+          }
+        ]
+      },
+      "tasks": [
+        {
+          "kind": "ExtractiveSummarization",
+          "parameters": {
+            "sentenceCount": 6
+          }
+        }
+      ]
+    }
+ ```
+
+### Run the POST request
+
+Before you run the **POST** request, replace `{your-language-resource-endpoint}` and `{your-key}` with the values from your Azure portal Language resource instance.
+
+ > [!IMPORTANT]
+ > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](/azure/key-vault/general/overview). For more information, *see* Azure AI services [security](/azure/ai-services/security-features).
+
+ ***PowerShell***
+
+ ```powershell
+    cmd /c curl "{your-language-resource-endpoint}/language/analyze-documents/jobs?api-version=2023-11-15-preview" -i -X POST --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}" --data "@document-summarization.json"
+ ```
+
+ ***command prompt / terminal***
+
+ ```bash
+    curl -v -X POST "{your-language-resource-endpoint}/language/analyze-documents/jobs?api-version=2023-11-15-preview" --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}" --data "@document-summarization.json"
+ ```
+
+Here's a sample response:
+
+ ```http
+ HTTP/1.1 202 Accepted
+ Content-Length: 0
+ operation-location: https://{your-language-resource-endpoint}/language/analyze-documents/jobs/f1cc29ff-9738-42ea-afa5-98d2d3cabf94?api-version=2023-11-15-preview
+ apim-request-id: e7d6fa0c-0efd-416a-8b1e-1cd9287f5f81
+ x-ms-region: West US 2
+ Date: Thu, 25 Jan 2024 15:12:32 GMT
+ ```
+
+### POST response (jobId)
+
+You receive a 202 (Success) response that includes a read-only Operation-Location header. The value of this header contains a jobId that can be queried to get the status of the asynchronous operation and retrieve the results using a GET request:
+
+ :::image type="content" source="media/operation-location-result-id.png" alt-text="Screenshot showing the operation-location value in the POST response.":::
+
+### Get analyze results (GET request)
+
+1. After your successful **POST** request, poll the operation-location header returned in the POST request to view the processed data.
+
+1. Here's the structure of the **GET** request:
+
+ ```http
+    GET {your-language-resource-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview
+ ```
+
+1. Before you run the command, make these changes:
+
+ * Replace {**jobId**} with the Operation-Location header from the POST response.
+
+ * Replace {**your-language-resource-endpoint**} and {**your-key**} with the values from your Language service instance in the Azure portal.
+
+### Get request
+
+```powershell
+ cmd /c curl "{your-language-resource-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview" -i -X GET --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}"
+```
+
+```bash
+ curl -v -X GET "{your-language-resource-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview" --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}"
+```
+
+#### Examine the response
+
+You receive a 200 (Success) response with JSON output. The **status** field indicates the result of the operation. If the operation isn't complete, the value of **status** is "running" or "notStarted", and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
+
+#### Sample response
+
+```json
+{
+ "jobId": "f1cc29ff-9738-42ea-afa5-98d2d3cabf94",
+ "lastUpdatedDateTime": "2024-01-24T13:17:58Z",
+ "createdDateTime": "2024-01-24T13:17:47Z",
+ "expirationDateTime": "2024-01-25T13:17:47Z",
+ "status": "succeeded",
+ "errors": [],
+ "tasks": {
+ "completed": 1,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 1,
+ "items": [
+ {
+ "kind": "ExtractiveSummarizationLROResults",
+ "lastUpdateDateTime": "2024-01-24T13:17:58.33934Z",
+ "status": "succeeded",
+ "results": {
+ "documents": [
+ {
+ "id": "doc_0",
+ "source": {
+ "kind": "AzureBlob",
+ "location": "https://myaccount.blob.core.windows.net/sample-input/input.pdf"
+ },
+ "targets": [
+ {
+ "kind": "AzureBlob",
+ "location": "https://myaccount.blob.core.windows.net/sample-output/df6611a3-fe74-44f8-b8d4-58ac7491cb13/ExtractiveSummarization-0001/input.result.json"
+ }
+ ],
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2023-02-01-preview"
+ }
+ }
+ ]
+ }
+}
+```
+++
+***Upon successful completion***:
+
+* The analyzed documents can be found in your target container.
+* The successful POST method returns a `202 Accepted` response code indicating that the service created the batch request.
+* The POST request also returned response headers including `Operation-Location` that provides a value used in subsequent GET requests.
+
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../../multi-service-resource.md?pivots=azcli#clean-up-resources)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [PII detection overview](../personally-identifiable-information/overview.md "Learn more about Personally Identifiable Information detection.") [Document Summarization overview](../summarization/overview.md "Learn more about automatic document summarization.")
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/overview.md
Previously updated : 12/19/2023
Last updated : 01/31/2024

# What is Personally Identifiable Information (PII) detection in Azure AI Language?
-PII detection is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The PII detection feature can **identify, categorize, and redact** sensitive information in unstructured text. For example: phone numbers, email addresses, and forms of identification. The method for utilizing PII in conversations is different than other use cases, and articles for this use have been separated.
+PII detection is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The PII detection feature can **identify, categorize, and redact** sensitive information in unstructured text. For example: phone numbers, email addresses, and forms of identification. The method for utilizing PII in conversations is different from other use cases, and articles for this use are separate.
* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
* [**How-to guides**](how-to-call.md) contain instructions for using the service in more specific or customized ways.
* The [**conceptual articles**](concepts/entity-categories.md) provide in-depth explanations of the service's functionality and features.

PII comes in two shapes:

* [PII](how-to-call.md) - works on unstructured text.
* [Conversation PII (preview)](how-to-call-for-conversations.md) - a tailored model that works on conversation transcription.

[!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)]
-## Get started with PII detection
+## Native document support
+A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for the [**PiiEntityRecognition**](../personally-identifiable-information/concepts/entity-categories.md) capability.
+
+Currently **PII** supports the following native document formats:
+
+|File type|File extension|Description|
+|--|--|--|
+|Text| `.txt`|An unformatted text document.|
+|Adobe PDF| `.pdf` |A portable document file formatted document.|
+|Microsoft Word|`.docx`|A Microsoft Word document file.|
+
+For more information, *see* [**Use native documents for language processing**](../native-document-support/use-native-documents.md).
+
+## Get started with PII detection
+
-## Responsible AI
+## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for PII](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who use it, the people affected by it, and the deployment environment. Read the [transparency note for PII](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. For more information, see the following articles:
[!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]

## Example scenarios

* **Apply sensitivity labels** - For example, based on the results from the PII service, a public sensitivity label might be applied to documents where no PII entities are detected. For documents where US addresses and phone numbers are recognized, a confidential label might be applied. A highly confidential label might be used for documents where bank routing numbers are recognized.
-* **Redact some categories of personal information from documents that get wider circulation** - For example, if customer contact records are accessible to first line support representatives, the company may want to redact the customer's personal information besides their name from the version of the customer history to preserve the customer's privacy.
-* **Redact personal information in order to reduce unconscious bias** - For example, during a company's resume review process, they may want to block name, address and phone number to help reduce unconscious gender or other biases.
+* **Redact some categories of personal information from documents that get wider circulation** - For example, if customer contact records are accessible to frontline support representatives, the company can redact the customer's personal information besides their name from the version of the customer history to preserve the customer's privacy.
+* **Redact personal information in order to reduce unconscious bias** - For example, during a company's resume review process, they can block name, address and phone number to help reduce unconscious gender or other biases.
* **Replace personal information in source data for machine learning to reduce unfairness** - For example, if you want to remove names that might reveal gender when training a machine learning model, you could use the service to identify them and replace them with generic placeholders for model training.
* **Remove personal information from call center transcription** - For example, if you want to remove names or other PII data that occur between the agent and the customer in a call center scenario, you could use the service to identify and remove them.
* **Data cleaning for data science** - PII detection can be used to prepare data so data scientists and engineers can use it to train their machine learning models, redacting the data to make sure that customer data isn't exposed.
An AI system includes not only the technology, but also the people who will use
There are two ways to get started using the entity linking feature: * [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Language service features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
+* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/overview.md
Summarization is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
-Note that though the services are labeled document and conversation summarization, document summarization only accepts plain text blocks, and conversation summarization will accept various speech artifacts in order for the model to learn more. If you want to process a conversation but only care about text, you can use document summarization for that scenario.
+Though the services are labeled document and conversation summarization, document summarization only accepts plain text blocks, and conversation summarization accepts various speech artifacts in order for the model to learn more. If you want to process a conversation but only care about text, you can use document summarization for that scenario.
Custom Summarization enables users to build custom AI models to summarize unstructured text, such as contracts or novels. By creating a Custom Summarization project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](custom/quickstart.md).
This documentation contains the following article types:
* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=document-summarization)** are getting-started instructions to guide you through making requests to the service.
* **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways.
-Document summarization uses natural language processing techniques to generate a summary for documents. There are two general approaches to automatic summarization, both of which are supported by the API: extractive and abstractive.
+Document summarization uses natural language processing techniques to generate a summary for documents. There are two supported API approaches to automatic summarization: extractive and abstractive.
-Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words which are not simply extract sentences from the original document. These features are designed to shorten content that could be considered too long to read.
-Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words which are not simply extract sentences from the original document. These features are designed to shorten content that could be considered too long to read.
+Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words that aren't verbatim extracts from the original document. These features are designed to shorten content that could be considered too long to read.
+
+## Native document support
+
+A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for both [**AbstractiveSummarization**](../summarization/how-to/document-summarization.md#try-document-abstractive-summarization) and [**ExtractiveSummarization**](../summarization/how-to/document-summarization.md#try-document-extractive-summarization) capabilities.
+
+ Currently **Document Summarization** supports the following native document formats:
+
+|File type|File extension|Description|
+|--|--|--|
+|Text| `.txt`|An unformatted text document.|
+|Adobe PDF| `.pdf` |A portable document file formatted document.|
+|Microsoft Word|`.docx`|A Microsoft Word document file.|
+
+For more information, *see* [**Use native documents for language processing**](../native-document-support/use-native-documents.md).
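A minimal pre-flight check mirroring the table above can be sketched as follows (the helper is illustrative only, not part of the service or its SDKs):

```python
from pathlib import Path

# Extensions from the supported-formats table above.
SUPPORTED_EXTENSIONS = {".txt", ".pdf", ".docx"}

def is_supported_native_document(filename: str) -> bool:
    """Return True if the file extension is one the service accepts."""
    return Path(filename).suffix.lower() in SUPPORTED_EXTENSIONS

print(is_supported_native_document("report.DOCX"))  # True
print(is_supported_native_document("scan.tiff"))    # False
```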
## Key features

The API provides two types of document summarization:

* **Extractive summarization**: Produces a summary by extracting salient sentences within the document.
- * Multiple extracted sentences: These sentences collectively convey the main idea of the document. TheyΓÇÖre original sentences extracted from the input documentΓÇÖs content.
- * Rank score: The rank score indicates how relevant a sentence is to a document's main topic. Document summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
- * Multiple returned sentences: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary extractive summarization will return the three highest scored sentences.
- * Positional information: The start position and length of extracted sentences.
-* **Abstractive summarization**: Generates a summary that may not use the same words as those in the document, but captures the main idea.
- * Summary texts: Abstractive summarization returns a summary for each contextual input range within the document. A long document may be segmented so multiple groups of summary texts may be returned with their contextual input range.
- * Contextual input range: The range within the input document that was used to generate the summary text.
+
+ * Multiple extracted sentences: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content.
+ * Rank score: The rank score indicates how relevant a sentence is to a document's main topic. Document summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
+ * Multiple returned sentences: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary extractive summarization returns the three highest scored sentences.
+ * Positional information: The start position and length of extracted sentences.
+
+* **Abstractive summarization**: Generates a summary that doesn't use the same words as in the document, but captures the main idea.
+ * Summary texts: Abstractive summarization returns a summary for each contextual input range within the document. A long document can be segmented so multiple groups of summary texts can be returned with their contextual input range.
+ * Contextual input range: The range within the input document that was used to generate the summary text.
As an example, consider the following paragraph of text:
-*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
+*"At Microsoft, we are on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
-The document summarization API request is processed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../concepts/multilingual-emoji-support.md) for more information.
+The document summarization API request is processed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API is returned. The output is available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response can contain text offsets. For more information, see [how to process offsets](../concepts/multilingual-emoji-support.md).
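Because offsets can shift when emoji or multilingual text are present, a small self-contained sketch may help. It only illustrates how counting conventions differ in Python; it assumes nothing about the service's default `stringIndexType`, which is what controls the convention offsets actually use:

```python
# Illustrative only: why offsets returned with summarization output need care.
# An emoji outside the Basic Multilingual Plane is one Python character (code
# point) but two UTF-16 code units, so offsets differ between conventions.

def utf16_units(text: str) -> int:
    """Length of `text` in UTF-16 code units (a non-BMP emoji takes two)."""
    return len(text.encode("utf-16-le")) // 2

sample = "Great coffee 👍"
code_points = len(sample)        # Python counts Unicode code points: 14
units = utf16_units(sample)      # UTF-16 code units: 15
```

When slicing the original document with offsets from the API, make sure your slicing uses the same convention the service returned.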
-Using the above example, the API might return the following summarized sentences:
+If we use the above example, the API might return these summarized sentences:
**Extractive summarization**:
-- "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
-- "We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages."
-- "The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today."
+- "At Microsoft, we are on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
+- "We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages."
+- "The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today."
**Abstractive summarization**:
-- "Microsoft is taking a more holistic, human-centric approach to learning and understanding. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. Over the past five years, we have achieved human performance on benchmarks in."
+- "Microsoft is taking a more holistic, human-centric approach to learning and understanding. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. Over the past five years, we achieved human performance on benchmarks in conversational speech recognition."
# [Conversation summarization](#tab/conversation-summarization)
Conversation summarization supports the following features:
## When to use issue and resolution summarization
-* When there are aspects of an “issue” and “resolution”, such as:
+* When there are aspects of an "issue" and "resolution" such as:
    * The reason for a service chat/call (the issue).
    * The resolution for the issue.
* You only want a summary that focuses on related information about issues and resolutions.
Conversation summarization supports the following features:
As an example, consider the following example conversation:
-**Agent**: "*Hello, you’re chatting with Rene. How may I help you?*"
+**Agent**: "*Hello, you're chatting with Rene. How may I help you?*"
-**Customer**: "*Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn’t work.*"
+**Customer**: "*Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn't work.*"
-**Agent**: "*I’m sorry to hear that. Let’s see what we can do to fix this issue. Could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking?*"
+**Agent**: "*I'm sorry to hear that. Let's see what we can do to fix this issue. Could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking?*"
**Customer**: "*Yes, I pushed the wifi connection button, and now the power light is slowly blinking.*"
As an example, consider the following example conversation:
**Customer**: "*No. Nothing happened.*"
-**Agent**: "*I see. Thanks. Let’s try if a factory reset can solve the issue. Could you please press and hold the center button for 5 seconds to start the factory reset.*"
+**Agent**: "*I see. Thanks. Let's try if a factory reset can solve the issue. Could you please press and hold the center button for 5 seconds to start the factory reset.*"
-**Customer**: *"I’ve tried the factory reset and followed the above steps again, but it still didn’t work."*
+**Customer**: *"I've tried the factory reset and followed the above steps again, but it still didn't work."*
-**Agent**: "*I’m very sorry to hear that. Let me see if there’s another way to fix the issue. Please hold on for a minute.*"
+**Agent**: "*I'm very sorry to hear that. Let me see if there's another way to fix the issue. Please hold on for a minute.*"
-Conversation summarization feature would simplify the text into the following:
+The conversation summarization feature simplifies the text as follows:
|Example summary | Format | Conversation aspect |
|-|-|-|
Conversation summarization feature would simplify the text into the following:
# [Document summarization](#tab/document-summarization)
-* Summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) in the how-to guide for more information.
-* Summarization works with a variety of written languages. See [language support](language-support.md?tabs=document-summarization) for more information.
+* Summarization takes text for analysis. For more information, see [Data and service limits](../concepts/data-limits.md) in the how-to guide.
+* Summarization works with various written languages. For more information, see [language support](language-support.md?tabs=document-summarization).
# [Conversation summarization](#tab/conversation-summarization)
-* Conversation summarization takes structured text for analysis. See the [data and service limits](../concepts/data-limits.md) for more information.
-* Conversation summarization accepts text in English. See [language support](language-support.md?tabs=conversation-summarization) for more information.
+* Conversation summarization takes structured text for analysis. For more information, see [data and service limits](../concepts/data-limits.md).
+* Conversation summarization accepts text in English. For more information, see [language support](language-support.md?tabs=conversation-summarization).
As you use document summarization in your applications, see the following refere
## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it’s deployed. Read the [transparency note for summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who use it, the people affected by it, and the deployment environment. Read the [transparency note for summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. For more information, see the following articles:
* [Transparency note for Azure AI Language](/legal/cognitive-services/language-service/transparency-note?context=/azure/ai-services/language-service/context/context)
* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-summarization?context=/azure/ai-services/language-service/context/context)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/whats-new.md
Previously updated : 04/14/2023 Last updated : 01/31/2024
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## January 2024
+
+* [Native document support](native-document-support/use-native-documents.md) is now available in `2023-11-15-preview` public preview.
+
## November 2023

* [Named Entity Recognition Container](./named-entity-recognition/how-to/use-containers.md) is now Generally Available (GA).
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
## April 2023

* [Custom Text analytics for health](./custom-text-analytics-for-health/overview.md) is available in public preview, which enables you to build custom AI models to extract healthcare specific entities from unstructured text.
-* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below.
- * Auto-label your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md).
+* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the following links:
+ * Autolabel your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md).
* Generate suggested utterances in [Conversational language understanding](./conversational-language-understanding/how-to/tag-utterances.md#suggest-utterances-with-azure-openai).
-* The latest model version (2022-10-01) for Language Detection now supports 6 more International languages and 12 Romanized Indic languages.
+* The latest model version (`2022-10-01`) for Language Detection now supports 6 more International languages and 12 Romanized Indic languages.
## March 2023
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
## February 2023
-* Conversational language understanding and orchestration workflow is now available in the following regions in the sovereign cloud for China:
+* Conversational language understanding and orchestration workflow are now available in the following regions in the sovereign cloud for China:
    * China East 2 (Authoring and Prediction)
    * China North 2 (Prediction)
* New model evaluation updates for Conversational language understanding and Orchestration workflow.
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* The summarization feature now has the following capabilities:
    * [Document summarization](./summarization/overview.md):
- * Abstractive summarization, which generates a summary of a document that may not use the same words as those in the document, but captures the main idea.
+ * Abstractive summarization, which generates a summary of a document that might not use the same words as those in the document, but captures the main idea.
    * [Conversation summarization](./summarization/overview.md?tabs=conversation-summarization)
        * Chapter title summarization, which returns suggested chapter titles of input conversations.
        * Narrative summarization, which returns call notes, meeting notes or chat summaries of input conversations.
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* [Orchestration workflow](./orchestration-workflow/overview.md)
* [Custom text classification](./custom-text-classification/overview.md)
* [Custom named entity recognition](./custom-named-entity-recognition/overview.md)
-* [Regular expressions](./conversational-language-understanding/concepts/entity-components.md#regex-component) in conversational language understanding and [required components](./conversational-language-understanding/concepts/entity-components.md#required-components), offering an additional ability to influence entity predictions.
+* [Regular expressions](./conversational-language-understanding/concepts/entity-components.md#regex-component) in conversational language understanding and [required components](./conversational-language-understanding/concepts/entity-components.md#required-components), offering an added ability to influence entity predictions.
* [Entity resolution](./named-entity-recognition/concepts/entity-resolutions.md) in named entity recognition
* New region support for:
    * [Conversational language understanding](./conversational-language-understanding/service-limits.md#regional-availability)
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
    * Central India
    * Switzerland North
    * West US 2
-* Text Analytics for Health now [supports additional languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German Italian, Portuguese and Hebrew. These languages are available when using a docker container to deploy the API service.
+* Text Analytics for Health now [supports more languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German, Italian, Portuguese, and Hebrew. These languages are available when using a docker container to deploy the API service.
* The Azure.AI.TextAnalytics client library v5.2.0 is generally available and ready for use in production applications. For more information on Language service client libraries, see the [**Developer overview**](./concepts/developer-guide.md).
    * Java
        * [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0)
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* Conversational PII is now available in all Azure regions supported by the Language service.
-* A new version of the Language API (`2022-07-01-preview`) has been released. It provides:
+* A new version of the Language API (`2022-07-01-preview`) is available. It provides:
    * [Automatic language detection](./concepts/use-asynchronously.md#automatic-language-detection) for asynchronous tasks.
    * Text Analytics for health confidence scores are now returned in relations.
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
    * [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations)
* v1.1.0b1 client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python) is available as a preview for:
    * [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md)
-* There is a new endpoint URL and request format for making REST API calls to prebuilt Language service features. See the following quickstart guides and reference documentation for information on structuring your API calls. All text analytics 3.2-preview.2 API users can begin migrating their workloads to this new endpoint.
+* There's a new endpoint URL and request format for making REST API calls to prebuilt Language service features. See the following quickstart guides and reference documentation for information on structuring your API calls. All text analytics `3.2-preview.2` API users can begin migrating their workloads to this new endpoint.
    * [Entity linking](./entity-linking/quickstart.md?pivots=rest-api)
    * [Language detection](./language-detection/quickstart.md?pivots=rest-api)
    * [Key phrase extraction](./key-phrase-extraction/quickstart.md?pivots=rest-api)
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* Model improvements for latest model-version for [text summarization](summarization/overview.md)
-* Model 2021-10-01 is Generally Available (GA) for [Sentiment Analysis and Opinion Mining](sentiment-opinion-mining/overview.md), featuring enhanced modeling for emojis and better accuracy across all supported languages.
+* Model `2021-10-01` is Generally Available (GA) for [Sentiment Analysis and Opinion Mining](sentiment-opinion-mining/overview.md), featuring enhanced modeling for emojis and better accuracy across all supported languages.
* [Question Answering](question-answering/overview.md): Active learning v2 incorporates a better clustering logic providing improved accuracy of suggestions. It considers user actions when suggestions are accepted or rejected to avoid duplicate suggestions, and improves query suggestions.

## December 2021
-* The version 3.1-preview.x REST endpoints and 5.1.0-beta.x client library have been retired. Please upgrade to the General Available version of the API(v3.1). If you're using the client libraries, use package version 5.1.0 or higher. See the [migration guide](./concepts/migrate-language-service-latest.md) for details.
+* The version 3.1-preview.x REST endpoints and 5.1.0-beta.x client library are retired. Upgrade to the Generally Available version of the API (v3.1). If you're using the client libraries, use package version 5.1.0 or higher. See the [migration guide](./concepts/migrate-language-service-latest.md) for details.
## November 2021
-* Based on ongoing customer feedback, we have increased the character limit per document for Text Analytics for health from 5,120 to 30,720.
+* Based on ongoing customer feedback, we increased the character limit per document for Text Analytics for health from 5,120 to 30,720.
* Azure AI Language release, with support for:
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* Preview model version `2021-10-01-preview` for [Sentiment Analysis and Opinion mining](sentiment-opinion-mining/overview.md), which provides: * Improved prediction quality.
- * [Additional language support](sentiment-opinion-mining/language-support.md?tabs=sentiment-analysis) for the opinion mining feature.
+ * [Added language support](sentiment-opinion-mining/language-support.md?tabs=sentiment-analysis) for the opinion mining feature.
* For more information, see the [project z-code site](https://www.microsoft.com/research/project/project-zcode/). * To use this [model version](sentiment-opinion-mining/how-to/call-api.md#specify-the-sentiment-analysis-model), you must specify it in your API calls, using the model version parameter.
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
One of the key features of Azure OpenAI on your data is its ability to retrieve
To get started, [connect your data source](../use-your-data-quickstart.md) using Azure OpenAI Studio and start asking questions and chatting on your data.

> [!NOTE]
-> To get started, you need to already have been approved for [Azure OpenAI access](../overview.md#how-do-i-get-access-to-azure-openai) and have an [Azure OpenAI Service resource](../how-to/create-resource.md) with either the gpt-35-turbo or the gpt-4 models deployed.
+> To get started, you need to already have been approved for [Azure OpenAI access](../overview.md#how-do-i-get-access-to-azure-openai) and have an [Azure OpenAI Service resource](../how-to/create-resource.md) deployed in a [supported region](#azure-openai-on-your-data-regional-availability) with either the gpt-35-turbo or the gpt-4 models.
## Data formats and file types
class TokenEstimator(object):
token_output = TokenEstimator.estimate_tokens(input_text)
```
+## Azure OpenAI on your data regional availability
+
+You can use Azure OpenAI on your data with an Azure OpenAI resource in the following regions:
+* Australia East
+* Brazil South
+* Canada East
+* East US
+* East US 2
+* France Central
+* Japan East
+* North Central US
+* Norway East
+* South Central US
+* South India
+* Sweden Central
+* Switzerland North
+* UK South
+* West Europe
+* West US
+
+If your Azure OpenAI resource is in another region, you won't be able to use Azure OpenAI on your data.
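An application could fail fast when its resource sits in an unsupported region instead of surfacing a confusing error later. The helper below is a hypothetical sketch, not an Azure SDK call; its region list simply mirrors the list above and will change as support expands:

```python
# Hypothetical pre-flight check for Azure OpenAI on your data.
# The set mirrors the supported regions listed in this article (normalized
# to lowercase with spaces removed); it isn't fetched from Azure.
SUPPORTED_REGIONS = {
    "australiaeast", "brazilsouth", "canadaeast", "eastus", "eastus2",
    "francecentral", "japaneast", "northcentralus", "norwayeast",
    "southcentralus", "southindia", "swedencentral", "switzerlandnorth",
    "uksouth", "westeurope", "westus",
}

def check_region(resource_region: str) -> None:
    """Raise ValueError if the resource's region isn't supported."""
    normalized = resource_region.replace(" ", "").lower()
    if normalized not in SUPPORTED_REGIONS:
        raise ValueError(
            f"Azure OpenAI on your data isn't available in {resource_region}"
        )
```

For example, `check_region("East US")` passes silently, while an unsupported region raises `ValueError`.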
+
## Next steps

* [Get started using your data with Azure OpenAI](../use-your-data-quickstart.md)
ai-services Dynamic Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/dynamic-quota.md
+
+ Title: Azure OpenAI Service dynamic quota
+
+description: Learn how to use Azure OpenAI dynamic quota
+#
++++ Last updated : 01/30/2024++++
+# Azure OpenAI Dynamic quota (Preview)
+
+Dynamic quota is an Azure OpenAI feature that enables a standard (pay-as-you-go) deployment to opportunistically take advantage of more quota when extra capacity is available. When dynamic quota is off, your deployment can process at most the throughput established by your Tokens Per Minute (TPM) setting, and requests beyond that preset TPM return HTTP 429 responses. When dynamic quota is enabled, the deployment can access higher throughput before returning 429 responses, so you can make more calls earlier. The extra requests are still billed at the [regular pricing rates](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/).
+
+Dynamic quota can only temporarily *increase* your available quota: it never decreases below your configured value.
+
+## When to use dynamic quota
+
+Dynamic quota is useful in most scenarios, particularly when your application can use extra capacity opportunistically or the application itself is driving the rate at which the Azure OpenAI API is called.
+
+You might prefer to avoid dynamic quota when your application would provide an adverse experience if quota is volatile or increased.
+
+For dynamic quota, consider scenarios such as:
+
+* Bulk processing
+* Creating summarizations or embeddings for Retrieval Augmented Generation (RAG)
+* Offline analysis of logs for generation of metrics and evaluations
+* Low-priority research
+* Apps that have a small amount of quota allocated
+
+### When does dynamic quota come into effect?
+
+The Azure OpenAI backend decides if, when, and how much extra dynamic quota is added to or removed from different deployments. It isn't forecast or announced in advance, and isn't predictable. Azure OpenAI signals that no extra capacity is available by responding with HTTP 429 and not letting more API calls through. To take advantage of dynamic quota, your application code must be able to issue more requests as HTTP 429 responses become infrequent.
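That client behavior can be sketched as a retry loop with exponential backoff. This is a minimal, hypothetical illustration: `call_api` is any callable you supply, and `RateLimitError` is a stand-in for however your HTTP layer surfaces a 429, not a class from the Azure OpenAI SDK:

```python
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for an HTTP 429 response from the service."""

def call_with_backoff(call_api, max_attempts=5, base_delay=1.0):
    """Call `call_api`, backing off exponentially on each 429.

    As 429 responses become infrequent (for example, when dynamic quota
    adds capacity), this loop naturally issues requests at a higher
    effective rate; when throttled, it waits and retries.
    """
    for attempt in range(max_attempts):
        try:
            return call_api()
        except RateLimitError:
            # Throttled: wait, then retry with a doubled delay.
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError(f"still throttled after {max_attempts} attempts")
```

In a real application you would also honor any `Retry-After` header the service returns rather than relying only on a fixed backoff schedule.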
+
+### How does dynamic quota change costs?
+
+* Calls that are done above your base quota have the same costs as regular calls.
+
+* There's no extra cost to turn on dynamic quota on a deployment, though the increased throughput could ultimately result in increased cost depending on the amount of traffic your deployment receives.
+
+> [!NOTE]
+> With dynamic quota, there's no enforced ceiling on quota or throughput. Azure OpenAI processes as many requests as it can above your baseline quota. If you need to control the rate of spend even when quota is less constrained, your application code needs to hold back requests accordingly.
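One way to hold back requests client-side, as the note suggests, is a small token-bucket limiter. The sketch below is illustrative only; the class name and rates are arbitrary, and nothing here is an Azure API:

```python
import time

class TokenBucket:
    """Minimal client-side limiter: allow at most `rate` requests per
    second on average, with bursts up to `capacity`, regardless of how
    much server-side quota dynamic quota makes available."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate                  # tokens replenished per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        """Spend one token if available; return False to signal the
        caller to hold back this request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

An application would call `try_acquire()` before each API request and skip or queue the request when it returns `False`, capping spend even when extra quota is available.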
+
+## How to use dynamic quota
+
+To use dynamic quota, you must:
+
+* Turn on the dynamic quota property in your Azure OpenAI deployment.
+* Make sure your application can take advantage of dynamic quota.
+
+### Enable dynamic quota
+
+To activate dynamic quota for your deployment, you can go to the advanced properties in the resource configuration, and switch it on:
++
+Alternatively, you can enable it programmatically with Azure CLI's [`az rest`](/cli/azure/reference-index?view=azure-cli-latest#az-rest&preserve-view=true):
+
+Replace the `{subscriptionId}`, `{resourceGroupName}`, `{accountName}`, and `{deploymentName}` placeholders with the relevant values for your resource. In this case, `accountName` is the name of your Azure OpenAI resource.
+
+```azurecli
+az rest --method patch --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/deployments/{deploymentName}?api-version=2023-10-01-preview" --body '{"properties": {"dynamicThrottlingEnabled": true} }'
+```
+
+### How do I know how much throughput dynamic quota is adding to my app?
+
+To monitor whether dynamic quota is engaging, track the throughput of your application in Azure Monitor. During the preview of dynamic quota, there's no specific metric or log to indicate whether quota was dynamically increased or decreased.
+
+Dynamic quota is less likely to be engaged for your deployment if it runs in heavily utilized regions, and during peak hours of use for those regions.
+
+## Next steps
+
+* Learn more about how [quota works](./quota.md).
+* Learn more about [monitoring Azure OpenAI](./monitoring.md).
++
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
| Parameter | Type | Required? | Default | Description |
|--|--|--|--|--|
-| ```file```| file | Yes | N/A | The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.<br/><br/>The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, break it into chunks. Alternatively you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#using-whisper-models) API.<br/><br/>You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). |
+| ```file```| file | Yes | N/A | The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.<br/><br/>The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, break it into chunks. Alternatively you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.<br/><br/>You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). |
| ```language``` | string | No | Null | The language of the input audio such as `fr`. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format improves accuracy and latency.<br/><br/>For the list of supported languages, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). |
| ```prompt``` | string | No | Null | An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.<br/><br/>For more information about prompts including example use cases, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). |
| ```response_format``` | string | No | json | The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.<br/><br/>The default value is *json*. |
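The `file` parameter's 25 MB limit can be enforced client-side before uploading. This is a hypothetical convenience check, and it assumes binary megabytes (25 × 1024 × 1024 bytes); splitting real audio into valid chunks additionally requires a media library, which isn't shown:

```python
# Hypothetical pre-upload check against the Azure OpenAI Whisper file limit
# described above. Assumes the limit is 25 binary megabytes.
MAX_WHISPER_BYTES = 25 * 1024 * 1024

def within_whisper_limit(size_bytes: int) -> bool:
    """Return True if an audio payload fits the Whisper size limit.

    Larger files should be split into chunks, or sent to the Azure AI
    Speech batch transcription API instead.
    """
    return size_bytes <= MAX_WHISPER_BYTES
```

For a file on disk, you could pass `os.path.getsize(path)` and route oversized files to batch transcription.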
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. [See Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue.

-- An Azure OpenAI resource with a chat model deployed (for example, GPT-3 or GPT-4). For more information about model deployment, see the [resource deployment guide](./how-to/create-resource.md).
+- An Azure OpenAI resource in a [supported region](./concepts/use-your-data.md#azure-openai-on-your-data-regional-availability) with a chat model deployed (for example, GPT-3 or GPT-4). For more information about model deployment, see the [resource deployment guide](./how-to/create-resource.md).
- Your chat model can use version `gpt-35-turbo (0301)`, `gpt-35-turbo-16k`, `gpt-4`, and `gpt-4-32k`. You can view or change your model version in [Azure OpenAI Studio](./how-to/working-with-models.md#model-updates).
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Azure OpenAI Service now supports the GPT-3.5 Turbo Instruct model. This model h
Azure OpenAI Service now supports speech to text APIs powered by OpenAI's Whisper model. Get AI-generated text based on the speech audio you provide. To learn more, check out the [quickstart](./whisper-quickstart.md). > [!NOTE]
-> Azure AI Speech also supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](../speech-service/batch-transcription-create.md#using-whisper-models) guide. Check out [What is the Whisper model?](../speech-service/whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service.
+> Azure AI Speech also supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) guide. Check out [What is the Whisper model?](../speech-service/whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service.
### New Regions
ai-services Whisper Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md
zone_pivot_groups: openai-whisper
In this quickstart, you use the Azure OpenAI Whisper model for speech to text.
-The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#using-whisper-models) API.
+The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.
## Prerequisites
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
ai-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md
The batch transcription API supports many different formats and codecs, such as:
## Azure Blob Storage upload
-When audio files are located in an [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md) account, you can request transcription of individual audio files or an entire Azure Blob Storage container. You can also [write transcription results](batch-transcription-create.md#destination-container-url) to a Blob container.
+When audio files are located in an [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md) account, you can request transcription of individual audio files or an entire Azure Blob Storage container. You can also [write transcription results](batch-transcription-create.md#specify-a-destination-container-url) to a Blob container.
> [!NOTE]
> For blob and container limits, see [batch transcription quotas and limits](speech-services-quotas-and-limits.md#batch-transcription).
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Title: Create a batch transcription - Speech service
-description: With batch transcriptions, you submit the audio, and then retrieve transcription results asynchronously.
+description: Learn how to use Azure AI Speech for batch transcriptions, where you submit audio and then retrieve transcription results asynchronously.
Previously updated : 1/18/2024
Last updated : 1/26/2024
zone_pivot_groups: speech-cli-rest
+#customer intent: As a user who implements audio transcription, I want to create transcriptions in bulk so that I don't have to submit audio content repeatedly.
# Create a batch transcription
+With batch transcriptions, you submit [audio data](batch-transcription-audio-data.md) in a batch. The service transcribes the audio data and stores the results in a storage container. You can then [retrieve the results](batch-transcription-get.md) from the storage container.
+ > [!IMPORTANT]
-> New pricing is in effect for batch transcription via [Speech to text REST API v3.2](./migrate-v3-1-to-v3-2.md). For more information, see the [pricing guide](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
+> New pricing is in effect for batch transcription by using [Speech to text REST API v3.2](./migrate-v3-1-to-v3-2.md). For more information, see the [pricing guide](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
-With batch transcriptions, you submit the [audio data](batch-transcription-audio-data.md), and then retrieve transcription results asynchronously. The service transcribes the audio data and stores the results in a storage container. You can then [retrieve the results](batch-transcription-get.md) from the storage container.
+## Prerequisites
-> [!NOTE]
-> To use batch transcription, you need to use a standard (S0) Speech resource. Free resources (F0) aren't supported. For more information, see [pricing and limits](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+- The [Speech SDK](quickstarts/setup-platform.md) installed.
+- A standard (S0) Speech resource. Free resources (F0) aren't supported.
## Create a transcription job
With batch transcriptions, you submit the [audio data](batch-transcription-audio
To create a transcription, use the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation of the [Speech to text REST API](rest-speech-to-text.md#transcriptions). Construct the request body according to the following instructions:

- You must set either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
-- Set the required `locale` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.
+- Set the required `locale` property. This value should match the expected locale of the audio data to transcribe. You can't change the locale later.
- Set the required `displayName` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later.
-- Optionally to use a model other than the base model, set the `model` property to the model ID. For more information, see [Using custom models](#using-custom-models) and [Using Whisper models](#using-whisper-models).
-- Optionally you can set the `wordLevelTimestampsEnabled` property to `true` to enable word-level timestamps in the transcription results. The default value is `false`. For Whisper models set the `displayFormWordLevelTimestampsEnabled` property instead. Whisper is a display-only model, so the lexical field isn't populated in the transcription.
-- Optionally you can set the `languageIdentification` property. Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). If you set the `languageIdentification` property, then you must also set `languageIdentification.candidateLocales` with candidate locales.
+- Optionally, to use a model other than the base model, set the `model` property to the model ID. For more information, see [Use a custom model](#use-a-custom-model) and [Use a Whisper model](#use-a-whisper-model).
+- Optionally, set the `wordLevelTimestampsEnabled` property to `true` to enable word-level timestamps in the transcription results. The default value is `false`. For Whisper models, set the `displayFormWordLevelTimestampsEnabled` property instead. Whisper is a display-only model, so the lexical field isn't populated in the transcription.
+- Optionally, set the `languageIdentification` property. Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). If you set the `languageIdentification` property, then you must also set `languageIdentification.candidateLocales` with candidate locales.
+
+For more information, see [Request configuration options](#request-configuration-options).
-For more information, see [request configuration options](#request-configuration-options).
+Make an HTTP POST request that uses the URI as shown in the following [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) example.
-Make an HTTP POST request using the URI as shown in the following [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourServiceRegion` with your Speech resource region.
+- Set the request body properties as previously described.
```azurecli-interactive
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
regularly from the service, after you retrieve the results. Alternatively, set t
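The curl sample above is truncated at the start of the JSON body. As a rough sketch of how the request body described in the bullets can be assembled, where the content URL and display name are hypothetical placeholders rather than values from the article:

```python
import json

def build_transcription_request(content_urls, locale, display_name,
                                word_level_timestamps=False):
    """Assemble a Transcriptions_Create request body from the required
    properties (contentUrls or contentContainerUrl, locale, displayName)
    and the optional word-level timestamps flag described above."""
    return {
        "contentUrls": content_urls,   # or "contentContainerUrl" for a whole container
        "locale": locale,              # required; can't be changed later
        "displayName": display_name,   # required; doesn't have to be unique
        "properties": {
            "wordLevelTimestampsEnabled": word_level_timestamps,
        },
    }

body = build_transcription_request(
    ["https://crbn.us/hello.wav"], "en-US", "My Transcription")
print(json.dumps(body))
```

You would then POST this JSON to the Transcriptions_Create endpoint with your resource key, as in the curl example.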
To create a transcription, use the `spx batch transcription create` command. Construct the request parameters according to the following instructions:

-- Set the required `content` parameter. You can specify either a semi-colon delimited list of individual files, or the URL for an entire container. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
-- Set the required `language` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `content` parameter. You can specify a semi-colon delimited list of individual files or the URL for an entire container. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
+- Set the required `language` property. This value should match the expected locale of the audio data to transcribe. You can't change the locale later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
- Set the required `name` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.

Here's an example Speech CLI command that creates a transcription job:
-```azurecli-interactive
+```azurecli
spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav
```
The top-level `self` property in the response body is the transcription's URI. U
For Speech CLI help with transcriptions, run the following command:
-```azurecli-interactive
+```azurecli
spx help batch transcription
```
Here are some property options that you can use to configure a transcription whe
| Property | Description |
|-|-|
|`channels`|An array of channel numbers to process. Channels `0` and `1` are transcribed by default. |
-|`contentContainerUrl`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
-|`contentUrls`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
-|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information such as the supported security scenarios, see [Destination container URL](#destination-container-url).|
-|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) contains a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and more or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later. If you set this property with any previous version (such as version 3.0), then it's ignored and only 2 speakers are identified.|
-|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use property `diarization` (only with Speech to text REST API version 3.1 and later).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
+|`contentContainerUrl`| You can submit individual audio files or a whole storage container.<br/><br/>You must specify the audio data location by using either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property isn't returned in the response.|
+|`contentUrls`| You can submit individual audio files or a whole storage container.<br/><br/>You must specify the audio data location by using either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property isn't returned in the response.|
+|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information, such as the supported security scenarios, see [Specify a destination container URL](#specify-a-destination-container-url).|
+|`diarization`|Indicates that the Speech service should attempt diarization analysis on the input, which is expected to be a mono channel that contains multiple voices. The feature isn't available with stereo recordings.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings.<br/><br/>Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) contains a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers, setting the `diarizationEnabled` property to `true` is enough. For an example of the property usage, see [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create).<br/><br/>The maximum number of speakers for diarization must be less than 36 and greater than or equal to the `minSpeakers` property. For an example, see [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later. If you set this property with any previous version, such as version 3.0, it's ignored and only two speakers are identified.|
+|`diarizationEnabled`|Specifies that the Speech service should attempt diarization analysis on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices, you also need to use the `diarization` property. Use it only with Speech to text REST API version 3.1 and later.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
|`displayName`|The name of the batch transcription. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
-|`displayFormWordLevelTimestampsEnabled`|Specifies whether to include word-level timestamps on the display form of the transcription results. The results are returned in the displayWords property of the transcription file. The default value is `false`.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
+|`displayFormWordLevelTimestampsEnabled`|Specifies whether to include word-level timestamps on the display form of the transcription results. The results are returned in the `displayWords` property of the transcription file. The default value is `false`.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
|`languageIdentification`|Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).<br/><br/>If you set the `languageIdentification` property, then you must also set its enclosed `candidateLocales` property.|
-|`languageIdentification.candidateLocales`|The candidate locales for language identification such as `"properties": { "languageIdentification": { "candidateLocales": ["en-US", "de-DE", "es-ES"]}}`. A minimum of 2 and a maximum of 10 candidate locales, including the main locale for the transcription, is supported.|
-|`locale`|The locale of the batch transcription. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.<br/><br/>This property is required.|
-|`model`|You can set the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models) and [Using Whisper models](#using-whisper-models).|
+|`languageIdentification.candidateLocales`|The candidate locales for language identification, such as `"properties": { "languageIdentification": { "candidateLocales": ["en-US", "de-DE", "es-ES"]}}`. A minimum of two and a maximum of ten candidate locales, including the main locale for the transcription, is supported.|
+|`locale`|The locale of the batch transcription. This value should match the expected locale of the audio data to transcribe. The locale can't be changed later.<br/><br/>This property is required.|
+|`model`|You can set the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Use a custom model](#use-a-custom-model) and [Use a Whisper model](#use-a-whisper-model).|
|`profanityFilterMode`|Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`. |
|`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.<br/><br/>This property isn't applicable for Whisper models.|
|`timeToLive`|A duration after the transcription job is created, when the transcription results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. As an alternative, you can call [Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete) regularly after you retrieve the transcription results.|
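As a sketch of how the `diarization` and `diarizationEnabled` properties from the table combine in a request body for three or more expected speakers. The nested `speakers`/`minCount`/`maxCount` shape follows the v3.1 schema as an assumption; verify it against the Transcriptions_Create reference:

```python
def diarization_properties(min_speakers, max_speakers):
    """Build the properties fragment that enables diarization for three or
    more expected speakers, enforcing the documented bounds: maxSpeakers
    must be less than 36 and at least minSpeakers."""
    if not (1 <= min_speakers <= max_speakers < 36):
        raise ValueError("require 1 <= minSpeakers <= maxSpeakers < 36")
    return {
        "diarizationEnabled": True,
        "diarization": {
            "speakers": {"minCount": min_speakers, "maxCount": max_speakers},
        },
    }

print(diarization_properties(1, 5))
```

For exactly two speakers, setting only `diarizationEnabled` to `true` is enough, per the table.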
Here are some property options that you can use to configure a transcription whe
For Speech CLI help with transcription configuration options, run the following command:
-```azurecli-interactive
+```azurecli
spx help batch transcription create advanced
```

::: zone-end
-## Using custom models
+## Use a custom model
-Batch transcription uses the default base model for the locale that you specify. You don't need to set any properties to use the default base model.
+Batch transcription uses the default base model for the locale that you specify. You don't need to set any properties to use the default base model.
-Optionally, you can modify the previous [create transcription example](#create-a-batch-transcription) by setting the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model.
+Optionally, you can modify the previous [create transcription example](#create-a-transcription-job) by setting the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model.
::: zone pivot="rest-api"
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
::: zone pivot="speech-cli"
-```azurecli-interactive
+```azurecli
spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"
```

::: zone-end
-To use a custom speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide.
+To use a custom speech model for batch transcription, you need the model's URI. The top-level `self` property in the response body is the model's URI. You can retrieve the model location when you create or get a model. For more information, see the JSON response example in [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model).
> [!TIP]
-> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription.
+> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if you use the [custom speech model](how-to-custom-speech-train-model.md) only for batch transcription.
-Batch transcription requests for expired models fail with a 4xx error. You want to set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
+Batch transcription requests for expired models fail with a 4xx error. Set the `model` property to a base model or custom model that isn't expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
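The model URI comes from the top-level `self` property of the create/get model response, and is passed as the `model` property of the transcription request. A minimal sketch, assuming a response fragment like the one in the linked guide (the URI below is a hypothetical placeholder):

```python
def model_reference(model_json):
    """Wrap the model URI from a Models create/get response in the fragment
    that a transcription request's `model` property expects."""
    return {"self": model_json["self"]}

# Hypothetical response fragment; the real one comes from creating or getting a model.
model = {"self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d"}
print(model_reference(model))
```

Omitting the `model` property entirely falls back to the latest base model for the locale, which sidesteps the expired-model 4xx failure described above.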
-## Using Whisper models
+## Use a Whisper model
-Azure AI Speech supports OpenAI's Whisper model via the batch transcription API.
+Azure AI Speech supports OpenAI's Whisper model by using the batch transcription API.
> [!NOTE]
-> Azure OpenAI Service also supports OpenAI's Whisper model for speech to text with a synchronous REST API. To learn more, check out the [quickstart](../openai/whisper-quickstart.md). Check out [What is the Whisper model?](./whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service.
+> Azure OpenAI Service also supports OpenAI's Whisper model for speech to text with a synchronous REST API. To learn more, see [Speech to text with the Azure OpenAI Whisper model](../openai/whisper-quickstart.md). For more information about when to use Azure AI Speech vs. Azure OpenAI Service, see [What is the Whisper model?](./whisper-overview.md)
-To use a Whisper model for batch transcription, you also need to set the `model` property. Whisper is a display-only model, so the lexical field isn't populated in the response.
+To use a Whisper model for batch transcription, you need to set the `model` property. Whisper is a display-only model, so the lexical field isn't populated in the response.
> [!IMPORTANT]
-> Whisper models are currently in preview. And you should always use [version 3.2](./migrate-v3-1-to-v3-2.md) of the speech to text API (that's available in a seperate preview) for Whisper models.
+> Whisper models are currently in preview. You should always use [version 3.2](./migrate-v3-1-to-v3-2.md) of the speech to text API, which is available in a separate preview, for Whisper models.
-Whisper models via batch transcription are supported in the East US, Southeast Asia, and West Europe regions.
+Whisper models for batch transcription are supported in the East US, Southeast Asia, and West Europe regions.
::: zone pivot="rest-api"
-You can make a [Models_ListBaseModels](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-2-preview1/operations/Models_ListBaseModels) request to get available base models for all locales.
+You can make a [Models_ListBaseModels](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-2-preview1/operations/Models_ListBaseModels) request to get available base models for all locales.
Make an HTTP GET request as shown in the following example for the `eastus` region. Replace `YourSubscriptionKey` with your Speech resource key. Replace `eastus` if you're using a different region.
Make an HTTP GET request as shown in the following example for the `eastus` regi
curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
```
-By default only the 100 oldest base models are returned, so you can use the `skip` and `top` query parameters to page through the results. For example, the following request returns the next 100 base models after the first 100.
+By default, only the 100 oldest base models are returned. Use the `skip` and `top` query parameters to page through the results. For example, the following request returns the next 100 base models after the first 100.
```azurecli-interactive
curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base?skip=100&top=100" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
-``````
+```
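The skip/top paging described above can be sketched as a simple URL generator; the endpoint and API version below are assumptions matching the curl examples, not values fixed by the article:

```python
def base_model_page_urls(endpoint, pages, page_size=100):
    """Yield Models_ListBaseModels URLs that page through results using the
    skip and top query parameters, 100 models per page by default."""
    for i in range(pages):
        yield f"{endpoint}/models/base?skip={i * page_size}&top={page_size}"

for url in base_model_page_urls(
        "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1", 2):
    print(url)
```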
::: zone-end

::: zone pivot="speech-cli"
-Make sure that you set the [configuration variables](spx-basics.md#create-a-resource-configuration) for a Speech resource in one of the supported regions. You can run the `spx csr list --base` command to get available base models for all locales.
+Make sure that you set the [configuration variables](spx-basics.md#create-a-resource-configuration) for a Speech resource in one of the supported regions. You can run the `spx csr list --base` command to get available base models for all locales.
-```azurecli-interactive
+```azurecli
spx csr list --base --api-version v3.2-preview.1
```
+
::: zone-end

The `displayName` property of a Whisper model contains "Whisper Preview" as shown in this example. Whisper is a display-only model, so the lexical field isn't populated in the transcription.
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
::: zone pivot="speech-cli"
-```azurecli-interactive
+```azurecli
spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base/d9cbeee6-582b-47ad-b5c1-6226583c92b6" --api-version v3.2-preview.1
```

::: zone-end
-
-## Destination container URL
+## Specify a destination container URL
The transcription result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. In that case, when the transcription job is deleted, the transcription result data is also deleted.
-You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note however that this option is only using [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). This option also doesn't support Access policy based SAS. The Storage account resource of the destination container must allow all external traffic.
+You can store the results of a batch transcription in a writable Azure Blob Storage container by using the `destinationContainerUrl` option in the [batch transcription creation request](#create-a-transcription-job). This option uses only an [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). This option also doesn't support access policy based SAS. The Storage account resource of the destination container must allow all external traffic.
-If you would like to store the transcription results in an Azure Blob storage container via the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), then you should consider using [Bring-your-own-storage (BYOS)](bring-your-own-storage-speech-resource.md). See details on how to use BYOS-enabled Speech resource for Batch transcription in [this article](bring-your-own-storage-speech-resource-speech-to-text.md).
+If you want to store the transcription results in an Azure Blob storage container by using the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), consider using [Bring-your-own-storage (BYOS)](bring-your-own-storage-speech-resource.md). For more information, see [Use the Bring your own storage (BYOS) Speech resource for speech to text](bring-your-own-storage-speech-resource-speech-to-text.md).
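One way to picture the choice above: attach `destinationContainerUrl` (with an ad hoc SAS URL) only when you aren't using BYOS. A minimal sketch; the SAS URL in the usage line is a made-up placeholder:

```python
def with_destination(body, container_sas_url):
    """Return a copy of a Transcriptions_Create body with results redirected
    to a writable container via an ad hoc SAS URL. Not compatible with the
    trusted Azure services mechanism or access-policy-based SAS."""
    props = dict(body.get("properties", {}))
    props["destinationContainerUrl"] = container_sas_url
    return {**body, "properties": props}

body = with_destination({"locale": "en-US"},
                        "https://account.blob.core.windows.net/results?sig=placeholder")
print(body["properties"]["destinationContainerUrl"])
```

With BYOS, you would simply omit this property and let the service write results to the BYOS-associated account.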
-## Next steps
+## Related content
- [Batch transcription overview](batch-transcription.md)
- [Locate audio files for batch transcription](batch-transcription-audio-data.md)
ai-services Bring Your Own Storage Speech Resource Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource-speech-to-text.md
Perform these steps to execute Batch transcription with BYOS-enabled Speech reso
> [!IMPORTANT]
> Don't use the `destinationContainerUrl` parameter in your transcription request. If you use BYOS, the transcription results are stored in the BYOS-associated Storage account automatically.
>
- > If you use `destinationContainerUrl` parameter, it will work, but provide significantly less security for your data, because of ad hoc SAS usage. See details [here](batch-transcription-create.md#destination-container-url).
+ > If you use the `destinationContainerUrl` parameter, it works, but it provides significantly less security for your data because of ad hoc SAS usage. For details, see [Specify a destination container URL](batch-transcription-create.md#specify-a-destination-container-url).
1. When transcription is complete, get transcription results according to [this guide](batch-transcription-get.md). Consider using the `sasValidityInSeconds` parameter (see the following section).
ai-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-speech-synthesis-viseme.md
synthesizer.visemeReceived = function (s, e) {
window.console.log("(Viseme), Audio offset: " + e.audioOffset / 10000 + "ms. Viseme ID: " + e.visemeId); // `Animation` is an xml string for SVG or a json string for blend shapes
- var animation = e.Animation;
+ var animation = e.animation;
} // If VisemeID is the only thing you want, you can also use `speakTextAsync()`
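For blend shapes, the `animation` payload delivered with the viseme event is a JSON string containing a `FrameIndex` and a `BlendShapes` matrix (one row of coefficients per frame). A minimal Python sketch of decoding it; the sample payload here is fabricated for illustration:

```python
import json

def parse_blend_shapes(animation: str):
    """Decode a blend-shapes animation payload from a viseme event.

    Returns (frame_index, frames), where each frame is a list of
    blend-shape coefficients for driving a 3D avatar face.
    """
    data = json.loads(animation)
    return data["FrameIndex"], data["BlendShapes"]

# Fabricated sample payload: two frames of three coefficients each.
sample = '{"FrameIndex": 0, "BlendShapes": [[0.1, 0.5, 0.0], [0.2, 0.4, 0.1]]}'
frame_index, frames = parse_blend_shapes(sample)
print(frame_index, len(frames))
```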
ai-services Migrate V3 1 To V3 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v3-1-to-v3-2.md
The `LanguageIdentificationMode` is added to `LanguageIdentificationProperties`
### Whisper models
-Azure AI Speech now supports OpenAI's Whisper model via Speech to text REST API v3.2. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#using-whisper-models) guide.
+Azure AI Speech now supports OpenAI's Whisper model via Speech to text REST API v3.2. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#use-a-whisper-model) guide.
> [!NOTE] > Azure OpenAI Service also supports OpenAI's Whisper model for speech to text with a synchronous REST API. To learn more, check out the [quickstart](../openai/whisper-quickstart.md). Check out [What is the Whisper model?](./whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service.
ai-services Personal Voice How To Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-how-to-use.md
You need to use [speech synthesis markup language (SSML)](./speech-synthesis-mar
- The `speakerProfileId` property in SSML is used to specify the [speaker profile ID](./personal-voice-create-voice.md) for the personal voice. -- The voice name is specified in the `name` property in SSML. For personal voice, the voice name must be set to `PhoenixV2Neural` or another supported base model voice name. To get a list of supported base model voice names, use the `BaseModels_List` operation of the custom voice API.
+- The voice name is specified in the `name` property in SSML. For personal voice, the voice name must be one of the supported base model voice names. To get a list of supported base model voice names, use the `BaseModels_List` operation of the custom voice API.
+
+ > [!NOTE]
+ > Voice names labeled with `Latest`, such as `DragonLatestNeural` or `PhoenixLatestNeural`, are updated from time to time; their performance might vary as ongoing improvements are released. If you want a stable version, select one labeled with a version number, such as `PhoenixV2Neural`.
+- `Dragon` is a base model with superior voice cloning similarity compared to `Phoenix`. `Phoenix` is a base model with more accurate pronunciation and lower latency than `Dragon`.
+
Here's example SSML in a request for text to speech with the voice name and the speaker profile ID. ```xml <speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'>
- <voice name='PhoenixV2Neural'>
+ <voice name='DragonLatestNeural'>
<mstts:ttsembedding speakerProfileId='your speaker profile ID here'> I'm happy to hear that you find me amazing and that I have made your trip planning easier and more fun. 我很高兴听到你觉得我很了不起,我让你的旅行计划更轻松、更有趣。Je suis heureux d'apprendre que vous me trouvez incroyable et que j'ai rendu la planification de votre voyage plus facile et plus amusante. </mstts:ttsembedding>
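The SSML above can also be assembled programmatically. A minimal Python sketch that interpolates the voice name and speaker profile ID into the template; the profile ID shown is a placeholder, and `xml.sax.saxutils.escape` guards the text content:

```python
from xml.sax.saxutils import escape

def build_personal_voice_ssml(voice_name: str, speaker_profile_id: str, text: str) -> str:
    """Build an SSML payload for personal voice text to speech."""
    return (
        "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
        "xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'>"
        f"<voice name='{voice_name}'>"
        f"<mstts:ttsembedding speakerProfileId='{speaker_profile_id}'>"
        f"{escape(text)}"  # escape &, <, > in the spoken text
        "</mstts:ttsembedding></voice></speak>"
    )

ssml = build_personal_voice_ssml(
    "DragonLatestNeural",
    "your-speaker-profile-id",  # placeholder: use your real speaker profile ID
    "I'm happy to hear that you find me amazing.",
)
print(ssml)
```

Pass the resulting string to your text to speech request body with `Content-Type: application/ssml+xml`.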
ai-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/releasenotes.md
Azure AI Speech is updated on an ongoing basis. To stay up-to-date with recent d
## Recent highlights
-* Azure AI Speech now supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#using-whisper-models) guide.
+* Azure AI Speech now supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#use-a-whisper-model) guide.
* [Speech to text REST API version 3.2](./migrate-v3-1-to-v3-2.md) is available in public preview. * [Real-time diarization](./get-started-stt-diarization.md) is in public preview.
ai-services Whisper Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/whisper-overview.md
Either the Whisper model or the Azure AI Speech models are appropriate depending
| Scenario | Whisper model | Azure AI Speech models | |||| | Real-time transcriptions, captions, and subtitles for audio and video. | Not available | Recommended |
-| Transcriptions, captions, and subtitles for prerecorded audio and video. | The Whisper model via [Azure OpenAI](../openai/whisper-quickstart.md) is recommended for fast processing of individual audio files. The Whisper model via [Azure AI Speech](./batch-transcription-create.md#using-whisper-models) is recommended for batch processing of large files. For more information, see [Whisper model via Azure AI Speech or via Azure OpenAI Service?](#whisper-model-via-azure-ai-speech-or-via-azure-openai-service) | Recommended for batch processing of large files, diarization, and word level timestamps. |
+| Transcriptions, captions, and subtitles for prerecorded audio and video. | The Whisper model via [Azure OpenAI](../openai/whisper-quickstart.md) is recommended for fast processing of individual audio files. The Whisper model via [Azure AI Speech](./batch-transcription-create.md#use-a-whisper-model) is recommended for batch processing of large files. For more information, see [Whisper model via Azure AI Speech or via Azure OpenAI Service?](#whisper-model-via-azure-ai-speech-or-via-azure-openai-service) | Recommended for batch processing of large files, diarization, and word level timestamps. |
| Transcript of phone call recordings and analytics such as call summary, sentiment, key topics, and custom insights. | Available | Recommended | | Real-time transcription and analytics to assist call center agents with customer questions. | Not available | Recommended | | Transcript of meeting recordings and analytics such as meeting summary, meeting chapters, and action item extraction. | Available | Recommended |
Either the Whisper model or the Azure AI Speech models are appropriate depending
## Whisper model via Azure AI Speech or via Azure OpenAI Service?
-You can choose whether to use the Whisper Model via [Azure OpenAI](../openai/whisper-quickstart.md) or via [Azure AI Speech](./batch-transcription-create.md#using-whisper-models). In either case, the readability of the transcribed text is the same. You can input mixed language audio and the output is in English.
+You can choose whether to use the Whisper Model via [Azure OpenAI](../openai/whisper-quickstart.md) or via [Azure AI Speech](./batch-transcription-create.md#use-a-whisper-model). In either case, the readability of the transcribed text is the same. You can input mixed language audio and the output is in English.
Whisper Model via Azure OpenAI Service might be best for: - Quickly transcribing audio files one at a time
Regional support is another consideration.
## Next steps -- [Use Whisper models via the Azure AI Speech batch transcription API](./batch-transcription-create.md#using-whisper-models)
+- [Use Whisper models via the Azure AI Speech batch transcription API](./batch-transcription-create.md#use-a-whisper-model)
- [Try the speech to text quickstart for Whisper via Azure OpenAI](../openai/whisper-quickstart.md) - [Try the real-time speech to text quickstart via Azure AI Speech](./get-started-speech-to-text.md)
ai-services Document Translation Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/document-translation-rest-api.md
Previously updated : 07/18/2023 Last updated : 01/17/2024 recommendations: false ms.devlang: csharp
To get started, you need:
1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](../how-to-guides/create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**.
- 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
+ 1. **Name**. Enter the name you chose for your resource. The name you choose must be unique within Azure.
> [!NOTE] > Document Translation requires a custom domain endpoint. The value that you enter in the Name field will be the custom domain name parameter for your endpoint.
To get started, you need:
1. Review the service terms and select **Create** to deploy your resource.
- 1. After your resource has successfully deployed, select **Go to resource**.
+ 1. After your resource successfully deploys, select **Go to resource**.
-<!-- > [!div class="nextstepaction"]
-> [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Prerequisites) -->
### Retrieve your key and document translation endpoint *Requests to the Translator service require a read-only key and custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal.
-1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
+1. If you created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
-1. Copy and paste your **`key`** and **`document translation endpoint`** in a convenient location, such as *Microsoft Notepad*. Only one key is necessary to make an API call.
-
-1. You paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service.
+1. You can copy and paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service. Only one key is necessary to make an API call.
:::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal.":::
-<!-- > [!div class="nextstepaction"]
-> [I ran into an issue retrieving my key and endpoint.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Retrieve-your-keys-and-endpoint) -->
- ## Create Azure Blob Storage containers You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
You need to [**create containers**](../../../../storage/blobs/storage-quickstart
The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs. *See* [**Create SAS tokens for Document Translation process**](../how-to-guides/create-sas-tokens.md).
-* Your **source** container or blob must have designated **read** and **list** access.
-* Your **target** container or blob must have designated **write** and **list** access.
-* Your **glossary** blob must have designated **read** and **list** access.
+* Your **source** container or blob must designate **read** and **list** access.
+* Your **target** container or blob must designate **write** and **list** access.
+* Your **glossary** blob must designate **read** and **list** access.
> [!TIP] >
The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share
> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**. > * As an alternative to SAS tokens, you can use a [**system-assigned managed identity**](../how-to-guides/create-use-managed-identities.md) for authentication.
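The container URLs with their SAS tokens are passed in the body of the batch translation request. A minimal Python sketch of assembling that body, following the Document Translation REST API's `inputs` shape; the account name, SAS query strings, and target language here are placeholders:

```python
import json

def build_translation_request(source_url: str, target_url: str, target_language: str) -> str:
    """Assemble the JSON body for a Document Translation batch request.

    Each URL must already carry its SAS token as a query string:
    read/list access for the source, write/list access for the target.
    """
    body = {
        "inputs": [
            {
                "source": {"sourceUrl": source_url},
                "targets": [
                    {"targetUrl": target_url, "language": target_language}
                ],
            }
        ]
    }
    return json.dumps(body, indent=2)

# Placeholder container URLs with abbreviated SAS query strings.
payload = build_translation_request(
    "https://myaccount.blob.core.windows.net/source?sv=...&sig=...",
    "https://myaccount.blob.core.windows.net/target?sv=...&sig=...",
    "fr",
)
print(payload)
```

POST this payload to your custom endpoint's `batches` route with your resource key to start the translation job.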
-<!-- > [!div class="nextstepaction"]
-> [I ran into an issue creating blob storage containers with authentication.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Create-blob-storage-containers) -->
### Sample document
That's it, congratulations! In this quickstart, you used Document Translation to
## Next steps -
+> [!div class="nextstepaction"]
+> [**Learn more about Document Translation operations**](../reference/rest-api-guide.md)
ai-studio Rbac Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/rbac-ai-studio.md
Title: Role-based access control in Azure AI Studio
-description: This article introduces role-based access control in Azure AI Studio
+description: This article introduces role-based access control in Azure AI Studio.
-# Role-based access control in Azure AI Studio
+# Role-based access control in Azure AI Studio
[!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)]
ai-studio Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-runtime.md
Automatic is the default option for a runtime. You can start an automatic runtim
On a flow page, you can use the following options to manage an automatic runtime: -- **Install packages** triggers `pip install -r requirements.txt` in the flow folder. The process can take a few minutes, depending on the packages that you install.
+- **Install packages** opens `requirements.txt` in the prompt flow UI, where you can add packages.
+- **View installed packages** shows the packages that are installed in the runtime. It includes the packages baked into the base image and the packages specified in the `requirements.txt` file in the flow folder.
- **Reset** deletes the current runtime and creates a new one with the same environment. If you encounter a package conflict, you can try this option. - **Edit** opens the runtime configuration page, where you can define the VM size and the idle time for the runtime. - **Stop** deletes the current runtime. If there's no active runtime on the underlying compute, the compute resource is also deleted.
If you want to use a private feed in Azure DevOps, follow these steps:
:::image type="content" source="../media/prompt-flow/how-to-create-manage-runtime/runtime-advanced-setting-msi.png" alt-text="Screenshot that shows the toggle for using a workspace user-assigned managed identity." lightbox = "../media/prompt-flow/how-to-create-manage-runtime/runtime-advanced-setting-msi.png":::
+#### Change the base image for automatic runtime (preview)
+
+By default, we use the latest prompt flow image as the base image. If you want to use a different base image, you need to build your own. Your Docker image must be built from the prompt flow base image, `mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable:<newest_version>`. If possible, use the [latest version of the base image](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime-stable/tags/list). To use the new base image, reset the runtime via the `reset` command. This process takes several minutes because it pulls the new base image and reinstalls packages.
++
+```yaml
+environment:
+ image: <your-custom-image>
+ python_requirements_txt: requirements.txt
+```
+ ### Update a compute instance runtime on a runtime page Azure AI Studio gets regular updates to the base image (`mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable`) to include the latest features and bug fixes. To get the best experience and performance, periodically update your runtime to the [latest version](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime-stable/tags/list).
ai-studio Create Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-projects.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 1/31/2024
Once a project is created, you can access the **Tools**, **Components**, and **S
In the project details page (select **Build** > **Settings**), you can find information about the project, such as the project name, description, and the Azure AI resource that hosts the project. You can also find the project ID, which is used to identify the project in the Azure AI Studio API. -- Project name: The name of the project corresponds to the selected project in the left panel. The project name is also referenced in the *Welcome to the YOUR-PROJECT-NAME project* message on the main page. You can change the name of the project by selecting the edit icon next to the project name.-- Project description: The project description (if set) is shown directly below the *Welcome to the YOUR-PROJECT-NAME project* message on the main page. You can change the description of the project by selecting the edit icon next to the project description.
+- Project name: The name of the project corresponds to the selected project in the left panel.
- Azure AI resource: The Azure AI resource that hosts the project. -- Location: The location of the Azure AI resource that hosts the project. Azure AI resources are supported in the same regions as Azure OpenAI.
+- Location: The location of the Azure AI resource that hosts the project. For supported locations, see [Azure AI Studio regions](../reference/region-support.md).
- Subscription: The subscription that hosts the Azure AI resource that hosts the project. - Resource group: The resource group that hosts the Azure AI resource that hosts the project.-- Container registry: The container for project files. Container registry allows you to build, store, and manage container images and artifacts in a private registry for all types of container deployments.-- Storage account: The storage account for the project.
+- Permissions: The users that have access to the project. For more information, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
-Select the Azure AI resource, subscription, resource group, container registry, or storage account to navigate to the corresponding resource in the Azure portal.
+Select the Azure AI resource, subscription, or resource group to navigate to the corresponding resource in the Azure portal.
## Next steps -- [Quickstart: Generate product name ideas in the Azure AI Studio playground](../quickstarts/playground-completions.md)
+- [QuickStart: Moderate text and images with content safety in Azure AI Studio](../quickstarts/content-safety.md)
- [Learn more about Azure AI Studio](../what-is-ai-studio.md) - [Learn more about Azure AI resources](../concepts/ai-resources.md)
ai-studio Data Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-add.md
To create and work with data, you need:
* An Azure subscription. If you don't have one, create a free account before you begin.
-* An Azure AI Studio project.
+* An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
## Create data
If you're using SDK or CLI to create data, you must specify a `path` that points
Data of File (`uri_file`) type points to a *single file* on storage (for example, a CSV file). You can create File typed data using: -- # [Studio](#tab/azure-studio) These steps explain how to create File typed data in the Azure AI Studio:
myfile = Data(
client.data.create_or_update(myfile) ``` - ### Create data: Folder type
ai-studio Data Image Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-image-add.md
Use this article to learn how to provide your own image data for GPT-4 Turbo wit
This guide is scoped to the Azure AI Studio playground, but you can also add image data via your project's **Data** page. See [Add data to your project](../how-to/data-add.md) for more information.
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](create-projects.md) in Azure AI Studio.
1. If you aren't already in the playground, select **Build** from the top menu and then select **Playground** from the collapsible left menu. 1. In the playground, make sure that **Chat** is selected from the **Mode** dropdown. Select your deployed GPT-4 Turbo with Vision model from the **Deployment** dropdown.
ai-studio Index Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/index-add.md
You must have:
## Create an index
-1. Sign in to Azure AI Studio and open the Azure AI project in which you want to create the index.
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
1. From the collapsible menu on the left, select **Indexes** under **Components**. :::image type="content" source="../media/index-retrieve/project-left-menu.png" alt-text="Screenshot of Project Left Menu." lightbox="../media/index-retrieve/project-left-menu.png":::
ai-studio Hear Speak Playground https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/hear-speak-playground.md
The speech to text and text to speech features can be used together or separatel
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a chat model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).
+- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
## Configure the playground
The speech to text and text to speech features can be used together or separatel
Before you can start a chat session, you need to configure the playground to use the speech to text and text to speech features. 1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
1. Select **Build** from the top menu and then select **Playground** from the collapsible left menu. 1. Make sure that **Chat** is selected from the **Mode** dropdown. Select your deployed chat model from the **Deployment** dropdown.
ai-studio Multimodal Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/multimodal-vision.md
Extra usage fees might apply for using GPT-4 Turbo with Vision and Azure AI Visi
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the regions that support GPT-4 Turbo with Vision: Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version.
+- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
## Start a chat session to analyze images or video
You need a video up to three minutes in length to complete the video quickstart.
In this chat session, you instruct the assistant to aid in understanding images that you input. 1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
1. Select **Build** from the top menu and then select **Playground** from the collapsible left menu. 1. Make sure that **Chat** is selected from the **Mode** dropdown. Select your deployed GPT-4 Turbo with Vision model from the **Deployment** dropdown. Under the chat session text box, you should now see the option to select a file.
ai-studio Playground Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/playground-completions.md
Use this article to get started making your first calls to Azure OpenAI.
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).-
+- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
### Try text completions To use the Azure OpenAI for text completions in the playground, follow these steps: 1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
1. From the Azure AI Studio Home page, select **Build** > **Playground**. 1. Select your deployment from the **Deployments** dropdown. 1. Select **Completions** from the **Mode** dropdown menu.
ai-studio Deploy Chat Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md
The steps in this tutorial are:
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An Azure OpenAI resource with a model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).
+- An [Azure AI resource](../how-to/create-azure-ai-resource.md) and [project](../how-to/create-projects.md) in Azure AI Studio.
- You need at least one file to upload that contains example data. To complete this tutorial, use the product information samples from the [Azure/aistudio-copilot-sample repository on GitHub](https://github.com/Azure/aistudio-copilot-sample/tree/main/data). Specifically, the [product_info_11.md](https://github.com/Azure/aistudio-copilot-sample/blob/main/dat` on your local computer.
The steps in this tutorial are:
Follow these steps to deploy a chat model and test it without your data.
-1. Sign in to [Azure AI Studio](https://ai.azure.com) with credentials that have access to your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource. You should be on the Azure AI Studio **Home** page.
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
1. Select **Build** from the top menu and then select **Deployments** > **Create**. :::image type="content" source="../media/tutorials/chat-web-app/deploy-create.png" alt-text="Screenshot of the deployments page without deployments." lightbox="../media/tutorials/chat-web-app/deploy-create.png":::
ai-studio Screen Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/screen-reader.md
Within **Explore**, you can explore many capabilities found within the secondary
## Projects
-To work within the Azure AI Studio, you must first create a project:
+To work within the Azure AI Studio, you must first [create a project](../how-to/create-projects.md):
1. Navigate to the Build tab in the primary navigation. 1. Press the Tab key until you hear *New project* and select this button. 1. Enter the information requested in the **Create a new project** dialog.
ai-studio What Is Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/what-is-ai-studio.md
Using Azure AI Studio also incurs cost associated with the underlying services,
Azure AI Studio is currently available in the following regions: Australia East, Brazil South, Canada Central, East US, East US 2, France Central, Germany West Central, India South, Japan East, North Central US, Norway East, Poland Central, South Africa North, South Central US, Sweden Central, Switzerland North, UK South, West Europe, and West US.
-To learn more, see [Azure global infrastructure - Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-services).
+To learn more, see [Azure AI Studio regions](./reference/region-support.md).
## How to get access
aks Active Active Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/active-active-solution.md
+
+ Title: Recommended active-active high availability solution overview for Azure Kubernetes Service (AKS)
+description: Learn about the recommended active-active high availability solution overview for Azure Kubernetes Service (AKS).
++++ Last updated : 01/30/2024++
+# Recommended active-active high availability solution overview for Azure Kubernetes Service (AKS)
+
+When you create an application in Azure Kubernetes Service (AKS) and choose an Azure region during resource creation, it's a single-region app. In the event of a disaster that causes the region to become unavailable, your application also becomes unavailable. If you create an identical deployment in a secondary Azure region, your application becomes less susceptible to a single-region disaster, which guarantees business continuity, and any data replication across the regions lets you recover your last application state.
+
+While there are multiple patterns that can provide recoverability for an AKS solution, this guide outlines the recommended active-active high availability solution for AKS. Within this solution, we deploy two independent and identical AKS clusters into two paired Azure regions with both clusters actively serving traffic.
+
+> [!NOTE]
+> The following use case can be considered standard practice within AKS. It has been reviewed internally and vetted in conjunction with our Microsoft partners.
+
+## Active-active high availability solution overview
+
+This solution relies on two identical AKS clusters configured to actively serve traffic. You place a global traffic manager, such as [Azure Front Door](../frontdoor/front-door-overview.md), in front of the two clusters to distribute traffic across them. You must consistently configure the clusters to host an instance of all applications required for the solution to function.
+
+Availability zones are another way to ensure high availability and fault tolerance for your AKS cluster within the same region. Availability zones allow you to distribute your cluster nodes across multiple isolated locations within an Azure region. This way, if one zone goes down due to a power outage, hardware failure, or network issue, your cluster can continue to run and serve your applications. Availability zones also improve the performance and scalability of your cluster by reducing the latency and contention among nodes. To set up availability zones for your AKS cluster, you need to specify the zone numbers when creating or updating your node pools. For more information, see [What are Azure availability zones?](../reliability/availability-zones-overview.md)
+
+> [!NOTE]
+> Many regions support availability zones. Consider using regions with availability zones to provide more resiliency and availability for your workloads. For more information, see [Recover from a region-wide service disruption](/azure/architecture/resiliency/recovery-loss-azure-region).
+
+## Scenarios and configurations
+
+This solution is best implemented when hosting stateless applications and/or with other technologies also deployed across both regions, such as horizontal scaling. In scenarios where the hosted application is reliant on resources, such as databases, that are actively in only one region, we recommend instead implementing an [active-passive solution](./active-passive-solution.md) for potential cost savings, as active-passive has more downtime than active-active.
+
+## Components
+
+The active-active high availability solution uses many Azure services. This section covers only the components unique to this multi-cluster architecture. For more information on the remaining components, see the [AKS baseline architecture](/azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=%2Fazure%2Faks%2Ftoc.json&bc=%2Fazure%2Faks%2Fbreadcrumb%2Ftoc.json).
+
+**Multiple clusters and regions**: You deploy multiple AKS clusters, each in a separate Azure region. During normal operations, your Azure Front Door configuration routes network traffic between all regions. If one region becomes unavailable, traffic routes to a region with the fastest load time for the user.
+
+**Hub-spoke network per region**: A regional hub-spoke network pair is deployed for each regional AKS instance. [Azure Firewall Manager](../firewall-manager/overview.md) policies manage the firewall policies across all regions.
+
+**Regional key store**: You provision [Azure Key Vault](../key-vault/general/overview.md) in each region to store sensitive values and keys specific to the AKS instance and to support services found in that region.
+
+**Azure Front Door**: [Azure Front Door](../frontdoor/front-door-overview.md) load balances and routes traffic to a regional [Azure Application Gateway](../application-gateway/overview.md) instance, which sits in front of each AKS cluster. Azure Front Door allows for *layer seven* global routing.
+
+**Log Analytics**: Regional [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) instances store regional networking metrics and diagnostic logs. A shared instance stores metrics and diagnostic logs for all AKS instances.
+
+**Container Registry**: The container images for the workload are stored in a managed container registry. With this solution, a single [Azure Container Registry](../container-registry/container-registry-intro.md) instance is used for all Kubernetes instances in the cluster. Geo-replication for Azure Container Registry enables you to replicate images to the selected Azure regions and provides continued access to images even if a region experiences an outage.
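+
+As a hedged sketch (the registry name and regions are placeholders), geo-replication requires the Premium SKU and can be configured with the Azure CLI:
+
+```azurecli-interactive
+# Geo-replication is a Premium SKU feature.
+az acr create --resource-group myResourceGroup --name myContainerRegistry --sku Premium
+
+# Replicate the registry to the secondary region so each cluster pulls images locally.
+az acr replication create --registry myContainerRegistry --location eastus2
+```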
+
+## Failover process
+
+If a service or service component becomes unavailable in one region, traffic should be routed to a region where that service is available. A multi-region architecture includes many different failure points. In this section, we cover the potential failure points.
+
+### Application Pods (Regional)
+
+A Kubernetes deployment object creates multiple replicas of a pod (*ReplicaSet*). If one is unavailable, traffic is routed between the remaining replicas. The Kubernetes *ReplicaSet* attempts to keep the specified number of replicas up and running. If one instance goes down, a new instance is recreated. [Liveness probes](../container-instances/container-instances-liveness-probe.md) can check the state of the application or process running in the pod. If the pod is unresponsive, the liveness probe fails and the container is restarted, so the *ReplicaSet* continues to serve traffic from healthy instances.
+
+For more information, see [Kubernetes ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/).
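+
+The pattern above can be sketched as a Deployment manifest (a minimal illustration only; the app name, image, and probe path are hypothetical):
+
+```yaml
+# Three replicas plus an HTTP liveness probe: if /healthz stops responding,
+# the container is restarted and the ReplicaSet keeps three pods serving traffic.
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: sample-app
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: sample-app
+  template:
+    metadata:
+      labels:
+        app: sample-app
+    spec:
+      containers:
+      - name: sample-app
+        image: myregistry.azurecr.io/sample-app:1.0.0
+        ports:
+        - containerPort: 8080
+        livenessProbe:
+          httpGet:
+            path: /healthz
+            port: 8080
+          initialDelaySeconds: 10
+          periodSeconds: 15
+```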
+
+### Application Pods (Global)
+
+When an entire region becomes unavailable, the pods in the cluster are no longer available to serve requests. In this case, the Azure Front Door instance routes all traffic to the remaining healthy regions. The Kubernetes clusters and pods in these regions continue to serve requests. To compensate for increased traffic and requests to the remaining cluster, keep in mind the following guidance:
+
+- Make sure network and compute resources are right sized to absorb any sudden increase in traffic due to region failover. For example, when using Azure Container Network Interface (CNI), make sure you have a subnet that can support all pod IPs with a spiked traffic load.
+- Use the [Horizontal Pod Autoscaler](./concepts-scale.md#horizontal-pod-autoscaler) to increase the pod replica count to compensate for the increased regional demand.
+- Use the AKS [Cluster Autoscaler](./cluster-autoscaler.md) to increase the Kubernetes instance node counts to compensate for the increased regional demand.
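+
+The Horizontal Pod Autoscaler guidance above can be sketched as follows (a hypothetical example; the target Deployment name and thresholds are placeholders you would tune for your failover headroom):
+
+```yaml
+# Scale the workload between 3 and 30 replicas, targeting 70% average CPU
+# utilization, so the surviving region can absorb failover traffic.
+apiVersion: autoscaling/v2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: sample-app
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: sample-app
+  minReplicas: 3
+  maxReplicas: 30
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 70
+```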
+
+### Kubernetes node pools (Regional)
+
+Occasionally, localized failure can occur to compute resources, such as power becoming unavailable in a single rack of Azure servers. To protect your AKS nodes from becoming a single point of regional failure, use [Azure Availability Zones](./availability-zones.md). Availability zones ensure that AKS nodes in each availability zone are physically separated from those in other availability zones.
+
+### Kubernetes node pools (Global)
+
+In a complete regional failure, Azure Front Door routes traffic to the remaining healthy regions. Again, make sure to compensate for increased traffic and requests to the remaining cluster.
+
+## Failover testing strategy
+
+While there are no mechanisms currently available within AKS to take down an entire region of deployment for testing purposes, [Azure Chaos Studio](../chaos-studio/chaos-studio-overview.md) offers the ability to create a chaos experiment on your cluster.
+
+## Next steps
+
+If you're considering a different solution, see the following articles:
+
+- [Active passive disaster recovery solution overview for Azure Kubernetes Service (AKS)](./active-passive-solution.md)
+- [Passive cold solution overview for Azure Kubernetes Service (AKS)](./passive-cold-solution.md)
aks Active Passive Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/active-passive-solution.md
+
+ Title: Recommended active-passive disaster recovery solution overview for Azure Kubernetes Service (AKS)
+description: Learn about an active-passive disaster recovery solution overview for Azure Kubernetes Service (AKS).
++++ Last updated : 01/30/2024++
+# Active-passive disaster recovery solution overview for Azure Kubernetes Service (AKS)
+
+When you create an application in Azure Kubernetes Service (AKS) and choose an Azure region during resource creation, it's a single-region app. When the region becomes unavailable during a disaster, your application also becomes unavailable. If you create an identical deployment in a secondary Azure region, your application becomes less susceptible to a single-region disaster, which guarantees business continuity, and any data replication across the regions lets you recover your last application state.
+
+This guide outlines an active-passive disaster recovery solution for AKS. Within this solution, we deploy two independent and identical AKS clusters into two paired Azure regions with only one cluster actively serving traffic.
+
+> [!NOTE]
+> The following practice has been reviewed internally and vetted in conjunction with our Microsoft partners.
+
+## Active-passive solution overview
+
+In this disaster recovery approach, we have two independent AKS clusters being deployed in two Azure regions. However, only one of the clusters is actively serving traffic at any one time. The secondary cluster (not actively serving traffic) contains the same configuration and application data as the primary cluster but doesn't accept any traffic unless directed by Azure Front Door traffic manager.
+
+## Scenarios and configurations
+
+This solution is best implemented when hosting applications reliant on resources, such as databases, that actively serve traffic in one region. In scenarios where you need to host stateless applications deployed across both regions, such as horizontal scaling, we recommend considering an [active-active solution](./active-active-solution.md), as active-passive involves added latency.
+
+## Components
+
+The active-passive disaster recovery solution uses many Azure services. This example architecture involves the following components:
+
+**Multiple clusters and regions**: You deploy multiple AKS clusters, each in a separate Azure region. During normal operations, network traffic is routed to the primary AKS cluster set in the Azure Front Door configuration.
+
+**Configured cluster prioritization**: You set a prioritization level between 1-5 for each cluster (with 1 being the highest priority and 5 being the lowest priority). You can set multiple clusters to the same priority level and specify the weight for each cluster. If the primary cluster becomes unavailable, traffic automatically routes to the next region selected in Azure Front Door. All traffic must go through Azure Front Door for this system to work.
+
+**Azure Front Door**: [Azure Front Door](../frontdoor/front-door-overview.md) load balances and routes traffic to the [Azure Application Gateway](../application-gateway/overview.md) instance in the primary region (cluster must be marked with priority 1). In the event of a region failure, the service redirects traffic to the next cluster in the priority list.
+
+For more information, see [Priority-based traffic-routing](../frontdoor/routing-methods.md#priority-based-traffic-routing).
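+
+As a hedged sketch of the priority configuration (all names and host names are hypothetical placeholders), origins can be added to an Azure Front Door origin group with explicit priorities using the Azure CLI:
+
+```azurecli-interactive
+# Priority 1 receives all traffic while healthy; priority 2 only receives
+# traffic if every priority-1 origin is unhealthy.
+az afd origin create \
+    --resource-group myResourceGroup \
+    --profile-name myFrontDoorProfile \
+    --origin-group-name aks-origins \
+    --origin-name primary-region \
+    --host-name appgw-primary.example.com \
+    --priority 1 \
+    --weight 1000 \
+    --enabled-state Enabled
+
+az afd origin create \
+    --resource-group myResourceGroup \
+    --profile-name myFrontDoorProfile \
+    --origin-group-name aks-origins \
+    --origin-name secondary-region \
+    --host-name appgw-secondary.example.com \
+    --priority 2 \
+    --weight 1000 \
+    --enabled-state Enabled
+```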
+
+**Hub-spoke pair**: A hub-spoke pair is deployed for each regional AKS instance. [Azure Firewall Manager](../firewall-manager/overview.md) policies manage the firewall rules across each region.
+
+**Key Vault**: You provision an [Azure Key Vault](../key-vault/general/overview.md) in each region to store secrets and keys.
+
+**Log Analytics**: Regional [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) instances store regional networking metrics and diagnostic logs. A shared instance stores metrics and diagnostic logs for all AKS instances.
+
+**Container Registry**: The container images for the workload are stored in a managed container registry. With this solution, a single [Azure Container Registry](../container-registry/container-registry-intro.md) instance is used for all Kubernetes instances in the cluster. Geo-replication for Azure Container Registry enables you to replicate images to the selected Azure regions and provides continued access to images even if a region experiences an outage.
+
+## Failover process
+
+If a service or service component becomes unavailable in one region, traffic should be routed to a region where that service is available. A multi-region architecture includes many different failure points. In this section, we cover the potential failure points.
+
+### Application Pods (Regional)
+
+A Kubernetes deployment object creates multiple replicas of a pod (*ReplicaSet*). If one is unavailable, traffic is routed between the remaining replicas. The Kubernetes *ReplicaSet* attempts to keep the specified number of replicas up and running. If one instance goes down, a new instance is recreated. [Liveness probes](../container-instances/container-instances-liveness-probe.md) can check the state of the application or process running in the pod. If the pod is unresponsive, the liveness probe fails and the container is restarted, so the *ReplicaSet* continues to serve traffic from healthy instances.
+
+For more information, see [Kubernetes ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/).
+
+### Application Pods (Global)
+
+When an entire region becomes unavailable, the pods in the cluster are no longer available to serve requests. In this case, the Azure Front Door instance routes all traffic to the remaining healthy regions. The Kubernetes clusters and pods in these regions continue to serve requests. To compensate for increased traffic and requests to the remaining cluster, keep in mind the following guidance:
+
+- Make sure network and compute resources are right sized to absorb any sudden increase in traffic due to region failover. For example, when using Azure Container Network Interface (CNI), make sure you have a subnet that can support all pod IPs with a spiked traffic load.
+- Use the [Horizontal Pod Autoscaler](./concepts-scale.md#horizontal-pod-autoscaler) to increase the pod replica count to compensate for the increased regional demand.
+- Use the AKS [Cluster Autoscaler](./cluster-autoscaler.md) to increase the Kubernetes instance node counts to compensate for the increased regional demand.
+
+### Kubernetes node pools (Regional)
+
+Occasionally, localized failure can occur to compute resources, such as power becoming unavailable in a single rack of Azure servers. To protect your AKS nodes from becoming a single point of regional failure, use [Azure Availability Zones](./availability-zones.md). Availability zones ensure that AKS nodes in each availability zone are physically separated from those in other availability zones.
+
+### Kubernetes node pools (Global)
+
+In a complete regional failure, Azure Front Door routes traffic to the remaining healthy regions. Again, make sure to compensate for increased traffic and requests to the remaining cluster.
+
+## Failover testing strategy
+
+While there are no mechanisms currently available within AKS to take down an entire region of deployment for testing purposes, [Azure Chaos Studio](../chaos-studio/chaos-studio-overview.md) offers the ability to create a chaos experiment on your cluster.
+
+## Next steps
+
+If you're considering a different solution, see the following articles:
+
+- [Active active high availability solution overview for Azure Kubernetes Service (AKS)](./active-active-solution.md)
+- [Passive cold solution overview for Azure Kubernetes Service (AKS)](./passive-cold-solution.md)
aks App Routing Dns Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-dns-ssl.md
az keyvault create -g <ResourceGroupName> -l <Location> -n <KeyVaultName> --enab
### Create and export a self-signed SSL certificate
-> [!NOTE]
-> If you already have a certificate, you can skip this step.
->
+For testing, you can use a self-signed public certificate instead of a Certificate Authority (CA)-signed certificate. If you already have a certificate, you can skip this step.
+
+> [!CAUTION]
+> Self-signed certificates are digital certificates that aren't signed by a trusted third-party CA. Self-signed certificates are created, issued, and signed by the company or developer who is responsible for the website or software being signed. This is why self-signed certificates are considered unsafe for public-facing websites and applications. Azure Key Vault has a [trusted partnership with some Certificate Authorities](../key-vault/certificates/how-to-integrate-certificate-authority.md).
+ 1. Create a self-signed SSL certificate to use with the Ingress using the `openssl req` command. Make sure you replace *`<Hostname>`* with the DNS name you're using. ```bash
aks Control Plane Metrics Default List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/control-plane-metrics-default-list.md
+
+ Title: List of control plane metrics in Azure Monitor managed service for Prometheus (preview)
+description: This article describes the minimal ingestion profile metrics for Azure Kubernetes Service (AKS) control plane metrics.
+ Last updated : 01/31/2024+++
+# Minimal ingestion profile for control plane Metrics in Managed Prometheus
+
+Azure Monitor metrics addon collects many Prometheus metrics by default. `Minimal ingestion profile` is a setting that helps reduce ingestion volume of metrics, as only metrics used by default dashboards, default recording rules and default alerts are collected. This article describes how this setting is configured specifically for control plane metrics. This article also lists metrics collected by default when `minimal ingestion profile` is enabled.
+
+> [!NOTE]
+> For addon-based collection, the `Minimal ingestion profile` setting is enabled by default. The discussion here is focused on control plane metrics. The current set of default targets and metrics is listed [here][azure-monitor-prometheus-metrics-scrape-config-minimal].
+
+The following targets are **enabled/ON** by default, meaning you don't have to provide any scrape job configuration for them; the metrics addon scrapes these targets automatically:
+
+- `controlplane-apiserver` (job=`controlplane-apiserver`)
+- `controlplane-etcd` (job=`controlplane-etcd`)
+
+The following targets are available to scrape but are **disabled/OFF** by default. You don't have to provide any scrape job configuration for them, but you do need to turn **ON/enable** scraping for these targets using the [ama-metrics-settings-configmap][ama-metrics-settings-configmap-github] under the `default-scrape-settings-enabled` section:
+
+- `controlplane-cluster-autoscaler`
+- `controlplane-kube-scheduler`
+- `controlplane-kube-controller-manager`
+
+> [!NOTE]
+> The default scrape frequency for all default targets and scrapes is `30 seconds`. You can override it for each target using the [ama-metrics-settings-configmap][ama-metrics-settings-configmap-github] under `default-targets-scrape-interval-settings` section.
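+
+As a hedged sketch of such an override (see the linked configmap for the authoritative format and key names), the scrape interval for a control plane target might be set like this:
+
+```yaml
+# Fragment of ama-metrics-settings-configmap overriding the default 30s
+# scrape interval for the control plane targets shown; values are examples.
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: ama-metrics-settings-configmap
+  namespace: kube-system
+data:
+  default-targets-scrape-interval-settings: |-
+    controlplane-apiserver = "30s"
+    controlplane-etcd = "30s"
+```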
+
+### Minimal ingestion for default ON targets
+
+The following metrics are allow-listed with `minimalingestionprofile=true` for default **ON** targets. These metrics are collected by default because their targets are scraped by default.
+
+**controlplane-apiserver**
+
+- `apiserver_request_total`
+- `apiserver_cache_list_fetched_objects_total`
+- `apiserver_cache_list_returned_objects_total`
+- `apiserver_flowcontrol_demand_seats_average`
+- `apiserver_flowcontrol_current_limit_seats`
+- `apiserver_request_sli_duration_seconds_bucket`
+- `apiserver_request_sli_duration_seconds_sum`
+- `apiserver_request_sli_duration_seconds_count`
+- `process_start_time_seconds`
+- `apiserver_request_duration_seconds_bucket`
+- `apiserver_request_duration_seconds_sum`
+- `apiserver_request_duration_seconds_count`
+- `apiserver_storage_list_fetched_objects_total`
+- `apiserver_storage_list_returned_objects_total`
+- `apiserver_current_inflight_requests`
+
+**controlplane-etcd**
+
+- `etcd_server_has_leader`
+- `rest_client_requests_total`
+- `etcd_mvcc_db_total_size_in_bytes`
+- `etcd_mvcc_db_total_size_in_use_in_bytes`
+- `etcd_server_slow_read_indexes_total`
+- `etcd_server_slow_apply_total`
+- `etcd_network_client_grpc_sent_bytes_total`
+- `etcd_server_heartbeat_send_failures_total`
+
+### Minimal ingestion for default OFF targets
+
+The following metrics are allow-listed with `minimalingestionprofile=true` for default **OFF** targets. These metrics aren't collected by default. You can turn **ON** scraping for these targets by setting `default-scrape-settings-enabled.<target-name>=true` in the [ama-metrics-settings-configmap][ama-metrics-settings-configmap-github] under the `default-scrape-settings-enabled` section.
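+
+As a hedged sketch (see the linked configmap for the authoritative format), enabling the default-OFF control plane targets looks like this:
+
+```yaml
+# Fragment of ama-metrics-settings-configmap turning on scraping for the
+# control plane targets that are disabled by default.
+data:
+  default-scrape-settings-enabled: |-
+    controlplane-cluster-autoscaler = true
+    controlplane-kube-scheduler = true
+    controlplane-kube-controller-manager = true
+```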
+
+**controlplane-kube-controller-manager**
+
+- `workqueue_depth`
+- `rest_client_requests_total`
+- `rest_client_request_duration_seconds`
+
+**controlplane-kube-scheduler**
+
+- `scheduler_pending_pods`
+- `scheduler_unschedulable_pods`
+- `scheduler_queue_incoming_pods_total`
+- `scheduler_schedule_attempts_total`
+- `scheduler_preemption_attempts_total`
+
+**controlplane-cluster-autoscaler**
+
+- `rest_client_requests_total`
+- `cluster_autoscaler_last_activity`
+- `cluster_autoscaler_cluster_safe_to_autoscale`
+- `cluster_autoscaler_failed_scale_ups_total`
+- `cluster_autoscaler_scale_down_in_cooldown`
+- `cluster_autoscaler_scaled_up_nodes_total`
+- `cluster_autoscaler_unneeded_nodes_count`
+- `cluster_autoscaler_unschedulable_pods_count`
+- `cluster_autoscaler_nodes_count`
+- `cloudprovider_azure_api_request_errors`
+- `cloudprovider_azure_api_request_duration_seconds_bucket`
+- `cloudprovider_azure_api_request_duration_seconds_count`
+
+> [!NOTE]
+> The CPU and memory usage metrics for all control-plane targets are not exposed irrespective of the profile.
+
+## References
+
+- [Kubernetes Upstream metrics list][kubernetes-metrics-instrumentation-reference]
+
+- [Cluster autoscaler metrics list][kubernetes-metrics-autoscaler-reference]
+
+## Next steps
+
+- [Learn more about control plane metrics in Managed Prometheus](monitor-control-plane-metrics.md)
+
+<!-- EXTERNAL LINKS -->
+[ama-metrics-settings-configmap-github]: https://github.com/Azure/prometheus-collector/blob/89e865a73601c0798410016e9beb323f1ecba335/otelcollector/configmaps/ama-metrics-settings-configmap.yaml
+[kubernetes-metrics-instrumentation-reference]: https://kubernetes.io/docs/reference/instrumentation/metrics/
+[kubernetes-metrics-autoscaler-reference]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/proposals/metrics.md
+
+<!-- INTERNAL LINKS -->
+[azure-monitor-prometheus-metrics-scrape-config-minimal]: ../azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
As you work with the node resource group, keep in mind that you can't:
You might get unexpected scaling and upgrading errors if you modify or delete Azure-created tags and other resource properties in the node resource group. AKS allows you to create and modify custom tags created by end users, and you can add those tags when [creating a node pool](manage-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool). You might want to create or modify custom tags, for example, to assign a business unit or cost center. Another option is to create Azure Policies with a scope on the managed resource group.
-However, modifying any **Azure-created tags** on resources under the node resource group in the AKS cluster is an unsupported action, which breaks the service-level objective (SLO). For more information, see [Does AKS offer a service-level agreement?](#does-aks-offer-a-service-level-agreement)
+Azure-created tags are created for their respective Azure services and should always be allowed. For AKS, these are the `aks-managed` and `k8s-azure` tags. Modifying any **Azure-created tags** on resources under the node resource group in the AKS cluster is an unsupported action, which breaks the service-level objective (SLO). For more information, see [Does AKS offer a service-level agreement?](#does-aks-offer-a-service-level-agreement)
+
+> [!NOTE]
+> In the past, the tag name "Owner" was reserved for AKS to manage the public IP assigned to the front-end IP of the load balancer. Now, services use the `aks-managed` prefix. For legacy resources, don't use Azure policies to apply the "Owner" tag name. Otherwise, all resources on your AKS cluster deployment and update operations will break. This doesn't apply to newly created resources.
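+
+For illustration (all names are hypothetical placeholders), custom tags such as a cost center can be applied to a node pool with the Azure CLI; Azure-created tags are left untouched:
+
+```azurecli-interactive
+# Apply custom tags to an existing node pool. Never modify the
+# Azure-created aks-managed or k8s-azure tags.
+az aks nodepool update \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name nodepool1 \
+    --tags costcenter=cc1234 dept=engineering
+```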
## What Kubernetes admission controllers does AKS support? Can admission controllers be added or removed?
The following example shows an ip route setup of Transparent mode. Each Pod's in
## How to avoid permission ownership setting slow issues when the volume has numerous files?
-Traditionally if your pod is running as a nonroot user (which you should), you must specify a `fsGroup` inside the podΓÇÖs security context so the volume can be readable and writable by the Pod. This requirement is covered in more detail in [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
+Traditionally, if your pod is running as a nonroot user (which you should), you must specify an `fsGroup` inside the pod's security context so the volume can be readable and writable by the Pod. This requirement is covered in more detail [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
A side effect of setting `fsGroup` is that each time a volume is mounted, Kubernetes must recursively `chown()` and `chmod()` all the files and directories inside the volume (with a few exceptions noted below). This scenario happens even if group ownership of the volume already matches the requested `fsGroup`. It can be expensive for larger volumes with lots of small files, which can cause pod startup to take a long time. This scenario was a known problem before v1.20, and the workaround is setting the Pod to run as root:
Any patch, including a security patch, is automatically applied to the AKS clust
The AKS Linux Extension is an Azure VM extension that installs and configures monitoring tools on Kubernetes worker nodes. The extension is installed on all new and existing Linux nodes. It configures the following monitoring tools: - [Node-exporter](https://github.com/prometheus/node_exporter): Collects hardware telemetry from the virtual machine and makes it available using a metrics endpoint. Then, a monitoring tool, such as Prometheus, is able to scrape these metrics.-- [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the clusterΓÇÖs API server using Events and NodeConditions.
+- [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the cluster's API server using Events and NodeConditions.
- [ig](https://inspektor-gadget.io/docs/latest/ig/): An eBPF-powered open-source framework for debugging and observing Linux and Kubernetes systems. It provides a set of tools (or gadgets) designed to gather relevant information, allowing users to identify the cause of performance issues, crashes, or other anomalies. Notably, its independence from Kubernetes enables users to employ it also for debugging control plane issues. These tools help provide observability around many node health related problems, such as:
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
Graphical processing units (GPUs) are often used for compute-intensive workloads
This article helps you provision nodes with schedulable GPUs on new and existing AKS clusters.

## Supported GPU-enabled VMs
+
To view supported GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6s_v3*. The NVv4 series (based on AMD GPUs) aren't supported on AKS.

> [!NOTE]
> GPU-enabled VMs contain specialized hardware subject to higher pricing and region availability. For more information, see the [pricing][azure-pricing] tool and [region availability][azure-availability].

## Limitations
-* AKS does not support Windows GPU-enabled node pools.
+ * If you're using an Azure Linux GPU-enabled node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md).
-* [NVadsA10](../virtual-machines/nva10v5-series.md) v5-series are not a recommended SKU for GPU VHD.
+* [NVadsA10](../virtual-machines/nva10v5-series.md) v5-series are *not* a recommended SKU for GPU VHD.
+* AKS doesn't support Windows GPU-enabled node pools.
+* Updating an existing node pool to add GPU isn't supported.
## Before you begin * This article assumes you have an existing AKS cluster. If you don't have a cluster, create one using the [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal].
-* You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* You need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Get the credentials for your cluster
To view supported GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-sku
## Options for using NVIDIA GPUs
-There are three ways to add the NVIDIA device plugin:
-
-1. [Using the AKS GPU image](#update-your-cluster-to-use-the-aks-gpu-image-preview)
-2. [Manually installing the NVIDIA device plugin](#manually-install-the-nvidia-device-plugin)
-3. Using the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html)
-
-### Use NVIDIA GPU Operator with AKS
-You can use the NVIDIA GPU Operator by skipping the gpu driver installation on AKS. For more information about using the NVIDIA GPU Operator with AKS, see [NVIDIA Documentation](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html).
-
-Adding the node pool tag `SkipGPUDriverInstall=true` will skip installing the GPU driver automatically on newly created nodes in the node pool. Any existing nodes will not be changed - the pool can be scaled to 0 and back up to make the change take effect. You can specify the tag using the `--nodepool-tags` argument to [`az aks create`][az-aks-create] command (for a new cluster) or `--tags` with [`az aks nodepool add`][az-aks-nodepool-add] or [`az aks nodepool update`][az-aks-nodepool-update].
-
-> [!WARNING]
-> We don't recommend manually installing the NVIDIA device plugin daemon set with clusters using the AKS GPU image.
+Using NVIDIA GPUs involves the installation of various NVIDIA software components such as the [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin?tab=readme-ov-file), GPU driver installation, and more.
-### Update your cluster to use the AKS GPU image (preview)
+### Skip GPU driver installation (preview)
-AKS provides a fully configured AKS image containing the [NVIDIA device plugin for Kubernetes][nvidia-github].
+AKS has automatic GPU driver installation enabled by default. In some cases, such as installing your own drivers or using the NVIDIA GPU Operator, you may want to skip GPU driver installation.
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-1. Install the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] command.
+1. Register or update the aks-preview extension using the [`az extension add`][az-extension-add] or [`az extension update`][az-extension-update] command.
```azurecli-interactive
+ # Register the aks-preview extension
az extension add --name aks-preview
- ```
-
-2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command.
- ```azurecli-interactive
+ # Update the aks-preview extension
az extension update --name aks-preview ```
-3. Register the `GPUDedicatedVHDPreview` feature flag using the [`az feature register`][az-feature-register] command.
+2. Create a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--skip-gpu-driver-install` flag to skip automatic GPU driver installation.
```azurecli-interactive
- az feature register --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name gpunp \
+ --node-count 1 \
+ --skip-gpu-driver-install \
+ --node-vm-size Standard_NC6s_v3 \
+ --node-taints sku=gpu:NoSchedule \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3
```
- It takes a few minutes for the status to show *Registered*.
+ Adding the `--skip-gpu-driver-install` flag during node pool creation skips the automatic GPU driver installation. Any existing nodes aren't changed. You can scale the node pool to zero and then back up to make the change take effect.
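For example, applying the change to an existing pool might look like the following hedged sketch. The resource names come from the examples above; because the example pool enables the cluster autoscaler, the sketch disables it first, since manual scaling isn't allowed while the autoscaler manages the pool.

    ```azurecli-interactive
    # Disable the autoscaler on the pool so manual scaling is allowed
    az aks nodepool update \
        --resource-group myResourceGroup \
        --cluster-name myAKSCluster \
        --name gpunp \
        --disable-cluster-autoscaler

    # Scale to zero and back up so new nodes pick up the setting
    az aks nodepool scale --resource-group myResourceGroup --cluster-name myAKSCluster --name gpunp --node-count 0
    az aks nodepool scale --resource-group myResourceGroup --cluster-name myAKSCluster --name gpunp --node-count 1
    ```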
-4. Verify the registration status using the [`az feature show`][az-feature-show] command.
+### NVIDIA device plugin installation
- ```azurecli-interactive
- az feature show --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
- ```
+NVIDIA device plugin installation is required when using GPUs on AKS. In some cases, the installation is handled automatically, such as when using the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html) or the [AKS GPU image (preview)](#use-the-aks-gpu-image-preview). Alternatively, you can manually install the NVIDIA device plugin.
-5. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
+#### Manually install the NVIDIA device plugin
- ```azurecli-interactive
- az provider register --namespace Microsoft.ContainerService
- ```
+You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on each node to provide the required drivers for the GPUs. This is the recommended approach when using GPU-enabled node pools for Azure Linux.
-#### Add a node pool for GPU nodes
+##### [Ubuntu Linux node pool (default SKU)](#tab/add-ubuntu-gpu-node-pool)
-Now that you updated your cluster to use the AKS GPU image, you can add a node pool for GPU nodes to your cluster.
+To use the default OS SKU, you create the node pool without specifying an OS SKU. The node pool is configured for the default operating system based on the Kubernetes version of the cluster.
-* Add a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command.
+1. Add a node pool to your cluster using the [`az aks nodepool add`][az-aks-nodepool-add] command.
    ```azurecli-interactive
    az aks nodepool add \
        --resource-group myResourceGroup \
        --cluster-name myAKSCluster \
        --name gpunp \
        --node-count 1 \
        --node-vm-size Standard_NC6s_v3 \
        --node-taints sku=gpu:NoSchedule \
-        --aks-custom-headers UseGPUDedicatedVHD=true \
        --enable-cluster-autoscaler \
        --min-count 1 \
        --max-count 3
    ```
- The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
+ This command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
- * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*.
- * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool.
- * `--aks-custom-headers`: Specifies a specialized AKS GPU image, *UseGPUDedicatedVHD=true*. If your GPU sku requires generation 2 VMs, use *--aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true* instead.
- * `--enable-cluster-autoscaler`: Enables the cluster autoscaler.
- * `--min-count`: Configures the cluster autoscaler to maintain a minimum of one node in the node pool.
- * `--max-count`: Configures the cluster autoscaler to maintain a maximum of three nodes in the node pool.
+ * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*.
+ * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool.
+ * `--enable-cluster-autoscaler`: Enables the cluster autoscaler.
+ * `--min-count`: Configures the cluster autoscaler to maintain a minimum of one node in the node pool.
+ * `--max-count`: Configures the cluster autoscaler to maintain a maximum of three nodes in the node pool.
> [!NOTE]
- > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time.
+ > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time.
-### Manually install the NVIDIA device plugin
+##### [Azure Linux node pool](#tab/add-azure-linux-gpu-node-pool)
-You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on each node to provide the required drivers for the GPUs.
+To use Azure Linux, you specify the OS SKU by setting `os-sku` to `AzureLinux` during node pool creation. The `os-type` is set to `Linux` by default.
-1. Add a node pool to your cluster using the [`az aks nodepool add`][az-aks-nodepool-add] command.
+1. Add a node pool to your cluster using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--os-sku` flag set to `AzureLinux`.
    ```azurecli-interactive
    az aks nodepool add \
        --resource-group myResourceGroup \
        --cluster-name myAKSCluster \
        --name gpunp \
        --node-count 1 \
+        --os-sku AzureLinux \
        --node-vm-size Standard_NC6s_v3 \
        --node-taints sku=gpu:NoSchedule \
        --enable-cluster-autoscaler \
        --min-count 1 \
        --max-count 3
    ```
- The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
+ This command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
    * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*.
    * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool.
    * `--enable-cluster-autoscaler`: Enables the cluster autoscaler.
    * `--min-count`: Configures the cluster autoscaler to maintain a minimum of one node in the node pool.
    * `--max-count`: Configures the cluster autoscaler to maintain a maximum of three nodes in the node pool.

    > [!NOTE]
- > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time.
+ > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time. Certain SKUs, including A100 and H100 VM SKUs, aren't available for Azure Linux. For more information, see [GPU-optimized VM sizes in Azure][gpu-skus].
-2. Create a namespace using the [`kubectl create namespace`][kubectl-create] command.
+
- ```console
+1. Create a namespace using the [`kubectl create namespace`][kubectl-create] command.
+
+ ```bash
    kubectl create namespace gpu-resources
    ```
-3. Create a file named *nvidia-device-plugin-ds.yaml* and paste the following YAML manifest provided as part of the [NVIDIA device plugin for Kubernetes project][nvidia-github]:
+2. Create a file named *nvidia-device-plugin-ds.yaml* and paste the following YAML manifest provided as part of the [NVIDIA device plugin for Kubernetes project][nvidia-github]:
    ```yaml
    apiVersion: apps/v1
              path: /var/lib/kubelet/device-plugins
    ```
-4. Create the DaemonSet and confirm the NVIDIA device plugin is created successfully using the [`kubectl apply`][kubectl-apply] command.
+3. Create the DaemonSet and confirm the NVIDIA device plugin is created successfully using the [`kubectl apply`][kubectl-apply] command.
- ```console
+ ```bash
    kubectl apply -f nvidia-device-plugin-ds.yaml
    ```
+4. Now that you successfully installed the NVIDIA device plugin, you can check that your [GPUs are schedulable](#confirm-that-gpus-are-schedulable) and [run a GPU workload](#run-a-gpu-enabled-workload).
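Before moving on, a quick sanity check of the rollout can help. This is a hedged sketch: the DaemonSet name `nvidia-device-plugin-daemonset` is the name commonly used in the NVIDIA project's manifest and may differ in the version you pasted.

```bash
# List the device plugin pods in the namespace created earlier
kubectl get pods -n gpu-resources

# Wait for the DaemonSet rollout to finish (name may differ per manifest version)
kubectl rollout status ds/nvidia-device-plugin-daemonset -n gpu-resources
```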
+
+### Use NVIDIA GPU Operator with AKS
+
+The NVIDIA GPU Operator automates the management of all NVIDIA software components needed to provision GPUs, including driver installation, the [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin?tab=readme-ov-file), the NVIDIA container runtime, and more. Because the GPU Operator handles these components, you don't need to manually install the NVIDIA device plugin, and the automatic GPU driver installation on AKS is no longer required.
+
+1. Create a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--skip-gpu-driver-install` flag to skip automatic GPU driver installation. Existing nodes aren't changed; you can scale the node pool to zero and then back up to make the change take effect.
+
+2. Follow the NVIDIA documentation to [Install the GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/install-gpu-ocp.html#install-nvidiagpu:~:text=NVIDIA%20GPU%20Operator-,Installing%20the%20NVIDIA%20GPU%20Operator,-%EF%83%81).
+
+3. Now that you successfully installed the GPU Operator, you can check that your [GPUs are schedulable](#confirm-that-gpus-are-schedulable) and [run a GPU workload](#run-a-gpu-enabled-workload).
+
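As one illustration of step 2, a Helm-based install along the lines of NVIDIA's documentation might look like the following sketch. The repository URL, chart name, and namespace are taken from NVIDIA's published instructions; confirm them, and any chart values you need, against the linked NVIDIA docs before relying on this.

```bash
# Add the NVIDIA Helm repository and refresh the local chart index
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install the GPU Operator into its own namespace; the operator then
# manages the driver, device plugin, and container runtime components
helm install gpu-operator nvidia/gpu-operator \
    --namespace gpu-operator \
    --create-namespace
```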
+> [!WARNING]
+> We don't recommend manually installing the NVIDIA device plugin daemon set with clusters using the AKS GPU image.
+
+### Use the AKS GPU image (preview)
+
+AKS provides a fully configured AKS image containing the [NVIDIA device plugin for Kubernetes][nvidia-github]. The AKS GPU image is currently only supported for Ubuntu 18.04.
++
+1. Install the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] command.
+
+ ```azurecli-interactive
+ az extension add --name aks-preview
+ ```
+
+2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command.
+
+ ```azurecli-interactive
+ az extension update --name aks-preview
+ ```
+
+3. Register the `GPUDedicatedVHDPreview` feature flag using the [`az feature register`][az-feature-register] command.
+
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
+ ```
+
+ It takes a few minutes for the status to show *Registered*.
+
+4. Verify the registration status using the [`az feature show`][az-feature-show] command.
+
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
+ ```
+
+5. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
+
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
+
+ Now that you updated your cluster to use the AKS GPU image, you can add a node pool for GPU nodes to your cluster.
+
+6. Add a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name gpunp \
+ --node-count 1 \
+ --node-vm-size Standard_NC6s_v3 \
+ --node-taints sku=gpu:NoSchedule \
+ --aks-custom-headers UseGPUDedicatedVHD=true \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3
+ ```
+
+ The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
+
+ * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*.
+ * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool.
+    * `--aks-custom-headers`: Specifies a specialized AKS GPU image, *UseGPUDedicatedVHD=true*. If your GPU SKU requires generation 2 VMs, use *--aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true* instead.
+ * `--enable-cluster-autoscaler`: Enables the cluster autoscaler.
+ * `--min-count`: Configures the cluster autoscaler to maintain a minimum of one node in the node pool.
+ * `--max-count`: Configures the cluster autoscaler to maintain a maximum of three nodes in the node pool.
+
+ > [!NOTE]
+ > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time.
+
+7. Now that you successfully created a node pool using the GPU image, you can check that your [GPUs are schedulable](#confirm-that-gpus-are-schedulable) and [run a GPU workload](#run-a-gpu-enabled-workload).
+ ## Confirm that GPUs are schedulable After creating your cluster, confirm that GPUs are schedulable in Kubernetes.
aks Ha Dr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ha-dr-overview.md
+
+ Title: High availability and disaster recovery overview for Azure Kubernetes Service (AKS)
+description: Learn about the high availability and disaster recovery options for Azure Kubernetes Service (AKS) clusters.
++++ Last updated : 01/30/2024++
+# High availability and disaster recovery overview for Azure Kubernetes Service (AKS)
+
+When creating and managing applications in the cloud, there's always a risk of disruption from outages and disasters. To ensure business continuity (BC), you need to plan for high availability (HA) and disaster recovery (DR).
+
+HA refers to the design and implementation of a system or service that's highly reliable and experiences minimal downtime. HA is a combination of tools, technologies, and processes that ensure a system or service is available to perform its intended function. HA is a critical component of DR planning. DR is the process of recovering from a disaster and restoring business operations to a normal state. DR is a subset of BC, which is the process of maintaining business functions or quickly resuming them in the event of a major disruption.
+
+This article covers some recommended practices for applications deployed to AKS, but isn't an exhaustive list of possible solutions.
+
+## Technology overview
+
+A Kubernetes cluster is divided into two components:
+
+- The **control plane**, which provides the core Kubernetes services and orchestration of application workloads, and
+- The **nodes**, which run your application workloads.
+
+![Diagram of Kubernetes control plane and node components.](media/concepts-clusters-workloads/control-plane-and-nodes.png)
+
+When you create an AKS cluster, the Azure platform automatically creates and configures a control plane. AKS offers two pricing tiers for cluster management: the **Free tier** and the **Standard tier**. For more information, see [Free and Standard pricing tiers for AKS cluster management](./free-standard-pricing-tiers.md).
+
+The control plane and its resources reside only in the region where you created the cluster. AKS provides a single-tenant control plane with a dedicated API server, scheduler, and so on. You define the number and size of the nodes, and the Azure platform configures the secure communication between the control plane and nodes. Interaction with the control plane occurs through the Kubernetes API, using tools such as `kubectl` or the Kubernetes dashboard.
+
+To run your applications and supporting services, you need a Kubernetes *node*. An AKS cluster has at least one node, an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime. The Azure VM size for your nodes defines CPUs, memory, size, and the storage type available (such as high-performance SSD or regular HDD). Plan the VM and storage size around whether your applications may require large amounts of CPU and memory or high-performance storage. In AKS, the VM image for your cluster's nodes is based on Ubuntu Linux, [Azure Linux](./use-azure-linux.md), or Windows Server 2022. When you create an AKS cluster or scale out the number of nodes, the Azure platform automatically creates and configures the requested number of VMs.
+
+For more information on cluster and workload components in AKS, see [Kubernetes core concepts for AKS](./concepts-clusters-workloads.md).
+
+## Important considerations
+
+### Regional and global resources
+
+**Regional resources** are provisioned as part of a *deployment stamp* to a single Azure region. These resources share nothing with resources in other regions, and they can be independently removed or replicated to other regions. For more information, see [Regional resources](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-intro#regional-resources).
+
+**Global resources** share the lifetime of the system, and they can be globally available within the context of a multi-region deployment. For more information, see [Global resources](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-intro#global-resources).
+
+### Recovery objectives
+
+A complete disaster recovery plan must specify business requirements for each process the application implements:
+
+- **Recovery Point Objective (RPO)** is the maximum duration of acceptable data loss. RPO is measured in units of time, such as minutes, hours, or days.
+- **Recovery Time Objective (RTO)** is the maximum duration of acceptable downtime, with *downtime* defined by your specification. For example, if the acceptable downtime duration in a disaster is *eight hours*, then the RTO is eight hours.
+
+### Availability zones
+
+You can use availability zones to spread your data across multiple zones in the same region. Within a region, availability zones are close enough to have low-latency connections to other availability zones, but they're far enough apart to reduce the likelihood that more than one will be affected by local outages or weather. For more information, see [Recommendations for using availability zones and regions](/azure/well-architected/reliability/regions-availability-zones).
+
+### Zonal resilience
+
+AKS clusters are resilient to zonal failures. If a zone fails, the cluster continues to run in the remaining zones. The cluster's control plane and nodes are spread across the zones, and the Azure platform automatically handles the distribution of the nodes. For more information, see [AKS zonal resilience](./availability-zones.md).
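To benefit from zonal resilience, the cluster's node pools must actually span zones. A hedged sketch of creating a zone-spanning cluster follows; the resource names are placeholders, and the chosen zones must be available for your region and VM size.

```azurecli-interactive
# Create an AKS cluster whose default node pool spans three availability zones
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3 \
    --zones 1 2 3
```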
+
+### Load balancing
+
+#### Global load balancing
+
+Global load balancing services distribute traffic across regional backends, clouds, or hybrid on-premises services. These services route end-user traffic to the closest available backend. They also react to changes in service reliability or performance to maximize availability and performance. The following Azure services provide global load balancing:
+
+- [Azure Front Door](../frontdoor/front-door-overview.md)
+- [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md)
+- [Cross-region Azure Load Balancer](../load-balancer/cross-region-overview.md)
+- [Azure Kubernetes Fleet Manager](../kubernetes-fleet/overview.md)
+
+#### Regional load balancing
+
+Regional load balancing services distribute traffic within virtual networks across VMs or zonal and zone-redundant service endpoints within a region. The following Azure services provide regional load balancing:
+
+- [Azure Load Balancer](../load-balancer/load-balancer-overview.md)
+- [Azure Application Gateway](../application-gateway/overview.md)
+- [Azure Application Gateway for Containers](../application-gateway/for-containers/overview.md)
+
+### Observability
+
+You need to collect data from applications and infrastructure to allow for effective operations and maximized reliability. Azure provides tools to help you monitor and manage your AKS workloads. For more information, see [Observability resources](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-intro#observability-resources).
+
+## Scope definition
+
+Application uptime becomes important as you manage AKS clusters. By default, AKS provides high availability by using multiple nodes in a [Virtual Machine Scale Set](../virtual-machine-scale-sets/overview.md), but these nodes don't protect your system from a region failure. To maximize your uptime, plan ahead to maintain business continuity and prepare for disaster recovery using the following best practices:
+
+- Plan for AKS clusters in multiple regions.
+- Route traffic across multiple clusters using Azure Traffic Manager.
+- Use geo-replication for your container image registries.
+- Plan for application state across multiple clusters.
+- Replicate storage across multiple regions.
+
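As a sketch of the traffic-routing bullet above, a priority-routed Azure Traffic Manager profile could be created as follows. All names, the DNS label, and the endpoint FQDN are placeholders for illustration.

```azurecli-interactive
# Create a Traffic Manager profile that routes by endpoint priority
az network traffic-manager profile create \
    --resource-group myResourceGroup \
    --name myTrafficManager \
    --routing-method Priority \
    --unique-dns-name myaksapp-example

# Add an external endpoint for a regional cluster's ingress (FQDN is a placeholder)
az network traffic-manager endpoint create \
    --resource-group myResourceGroup \
    --profile-name myTrafficManager \
    --name primaryCluster \
    --type externalEndpoints \
    --target primary.example.com \
    --priority 1
```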
+### Deployment model implementations
+
+|Deployment model|Pros|Cons|
+|-|-|-|
+|[Active-active](#active-active-high-availability-deployment-model)|• No data loss or inconsistency during failover <br> • High resiliency <br> • Better utilization of resources with higher performance|• Complex implementation and management <br> • Higher cost <br> • Requires a load balancer and form of traffic routing|
+|[Active-passive](#active-passive-disaster-recovery-deployment-model)|• Simpler implementation and management <br> • Lower cost <br> • Doesn't require a load balancer or traffic manager|• Potential for data loss or inconsistency during failover <br> • Longer recovery time and downtime <br> • Underutilization of resources|
+|[Passive-cold](#passive-cold-failover-deployment-model)|• Lowest cost <br> • Doesn't require synchronization, replication, load balancer, or traffic manager <br> • Suitable for low-priority, non-critical workloads|• High risk of data loss or inconsistency during failover <br> • Longest recovery time and downtime <br> • Requires manual intervention to activate cluster and trigger backup|
+
+#### Active-active high availability deployment model
+
+In the active-active high availability (HA) deployment model, you have two independent AKS clusters deployed in two different Azure regions (typically paired regions, such as Canada Central and Canada East or US East 2 and US Central) that actively serve traffic.
+
+With this example architecture:
+
+- You deploy two AKS clusters in separate Azure regions.
+- During normal operations, network traffic routes between both regions. If one region becomes unavailable, traffic automatically routes to a region closest to the user who issued the request.
+- There's a deployed hub-spoke pair for each regional AKS instance. Azure Firewall Manager policies manage the firewall rules across the regions.
+- Azure Key Vault is provisioned in each region to store secrets and keys.
+- Azure Front Door load balances and routes traffic to a regional Azure Application Gateway instance, which sits in front of each AKS cluster.
+- Regional Log Analytics instances store regional networking metrics and diagnostic logs.
+- The container images for the workload are stored in a managed container registry. A single Azure Container Registry is used for all Kubernetes instances in the cluster. Geo-replication for Azure Container Registry enables replicating images to the selected Azure regions and provides continued access to images, even if a region experiences an outage.
+
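The geo-replication described in the last bullet can be enabled with a single command. This is a sketch: the registry name and target region are placeholders, and geo-replication requires the Premium tier of Azure Container Registry.

```azurecli-interactive
# Replicate a Premium-tier registry's images to a second region
az acr replication create \
    --registry myContainerRegistry \
    --location eastus2
```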
+To create an active-active deployment model in AKS, you perform the following steps:
+
+1. Create two identical deployments in two different Azure regions.
+2. Create two instances of your web app.
+3. Create an Azure Front Door profile with the following resources:
+
+ - An endpoint.
+ - Two origin groups, each with a priority of *one*.
+ - A route.
+
+4. Limit network traffic to the web apps only from the Azure Front Door instance.
+5. Configure all other backend Azure services, such as databases, storage accounts, and authentication providers.
+6. Deploy code to both web apps with continuous deployment.
+
+For more information, see the [**Recommended active-active high availability solution overview for AKS**](./active-active-solution.md).
+
+#### Active-passive disaster recovery deployment model
+
+In the active-passive disaster recovery (DR) deployment model, you have two independent AKS clusters deployed in two different Azure regions (typically paired regions, such as Canada Central and Canada East or US East 2 and US Central). Only one of the clusters actively serves traffic at any given time. The other cluster contains the same configuration and application data as the active cluster, but doesn't accept traffic unless directed by a traffic manager.
+
+With this example architecture:
+
+- You deploy two AKS clusters in separate Azure regions.
+- During normal operations, network traffic routes to the primary AKS cluster, which you set in the Azure Front Door configuration.
+  - Priority needs to be set between *1* and *5*, with *1* being the highest and *5* the lowest.
+ - You can set multiple clusters to the same priority level and can specify the weight of each.
+- If the primary cluster becomes unavailable (disaster occurs), traffic automatically routes to the next region selected in the Azure Front Door.
+ - All traffic must go through the Azure Front Door traffic manager for this system to work.
+- Azure Front Door routes traffic to the Azure Application Gateway instance in the primary region (the cluster must be marked with priority 1). If this region fails, the service redirects traffic to the next cluster in the priority list.
+ - Rules come from Azure Front Door.
+- A hub-spoke pair is deployed for each regional AKS instance. Azure Firewall Manager policies manage the firewall rules across the regions.
+- Azure Key Vault is provisioned in each region to store secrets and keys.
+- Regional Log Analytics instances store regional networking metrics and diagnostic logs.
+- The container images for the workload are stored in a managed container registry. A single Azure Container Registry is used for all Kubernetes instances in the cluster. Geo-replication for Azure Container Registry enables replicating images to the selected Azure regions and provides continued access to images, even if a region experiences an outage.
+
+To create an active-passive deployment model in AKS, you perform the following steps:
+
+1. Create two identical deployments in two different Azure regions.
+2. Configure autoscaling rules for the secondary application so it scales to the same instance count as the primary when the primary region becomes inactive. While inactive, it doesn't need to be scaled up. This helps reduce costs.
+3. Create two instances of your web application, with one on each cluster.
+4. Create an Azure Front Door profile with the following resources:
+
+ - An endpoint.
+ - An origin group with a priority of *one* for the primary region.
+ - A second origin group with a priority of *two* for the secondary region.
+ - A route.
+
+5. Limit network traffic to the web applications from only the Azure Front Door instance.
+6. Configure all other backend Azure services, such as databases, storage accounts, and authentication providers.
+7. Deploy code to both the web applications with continuous deployment.
+
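A hedged sketch of the Front Door priorities from step 4 follows. All resource names and host names are placeholders, and you should verify the exact parameter names against the `az afd` command reference before use.

```azurecli-interactive
# Origin group for the primary region's ingress
az afd origin-group create \
    --resource-group myResourceGroup \
    --profile-name myFrontDoor \
    --origin-group-name primary \
    --probe-request-type GET \
    --probe-protocol Https \
    --probe-path / \
    --probe-interval-in-seconds 60 \
    --sample-size 4 \
    --successful-samples-required 3 \
    --additional-latency-in-milliseconds 50

# Primary origin gets priority 1; a matching secondary origin would use priority 2
az afd origin create \
    --resource-group myResourceGroup \
    --profile-name myFrontDoor \
    --origin-group-name primary \
    --origin-name primaryAppGateway \
    --host-name primary.example.com \
    --priority 1 \
    --weight 1000
```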
+For more information, see the [**Recommended active-passive disaster recovery solution overview for AKS**](./active-passive-solution.md).
+
+#### Passive-cold failover deployment model
+
+The passive-cold failover deployment model is configured in the same way as the [active-passive disaster recovery deployment model](#active-passive-disaster-recovery-deployment-model), except the clusters remain inactive until a user activates them in the event of a disaster. We consider this approach *out-of-scope* because it involves a similar configuration to active-passive, but with the added complexity of manual intervention to activate the cluster and trigger a backup.
+
+With this example architecture:
+
+- You create two AKS clusters, preferably in different regions or zones for better resiliency.
+- When you need to fail over, you manually activate the cold cluster to take over the traffic flow.
+- You can trigger activation either by manual input each time or by an event that you specify.
+- Azure Key Vault is provisioned in each region to store secrets and keys.
+- Regional Log Analytics instances store regional networking metrics and diagnostic logs for each cluster.
+
+To create a passive-cold failover deployment model in AKS, you perform the following steps:
+
+1. Create two identical deployments in different zones/regions.
+2. Configure autoscaling rules for the secondary application so it scales to the same instance count as the primary when the primary region becomes inactive. While inactive, it doesn't need to be scaled up, which helps reduce costs.
+3. Create two instances of your web application, with one on each cluster.
+4. Configure all other backend Azure services, such as databases, storage accounts, and authentication providers.
+5. Set a condition when the cold cluster should be triggered. You can use a load balancer if you need.
+
+For more information, see the [**Recommended passive-cold failover solution overview for AKS**](./passive-cold-solution.md).
+
+## Service quotas and limits
+
+AKS sets default limits and quotas for resources and features, including usage restrictions for certain VM SKUs.
++
+For more information, see [AKS service quotas and limits](./quotas-skus-regions.md#service-quotas-and-limits).
+
+## Backup
+
+Azure Backup supports backing up AKS cluster resources and persistent volumes attached to the cluster using a backup extension. The Backup vault communicates with the AKS cluster through the extension to perform backup and restore operations.
+
+For more information, see the following articles:
+
+- [About AKS backup using Azure Backup (preview)](../backup/azure-kubernetes-service-backup-overview.md)
+- [Back up AKS using Azure Backup (preview)](../backup/azure-kubernetes-service-cluster-backup.md)
aks Istio Plugin Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-plugin-ca.md
Title: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview) description: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview) + Last updated 12/04/2023- # Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview)
You may need to periodically rotate the certificate authorities for security or
[az-provider-register]: /cli/azure/provider#az-provider-register [az-aks-mesh-disable]: /cli/azure/aks/mesh#az-aks-mesh-disable [istio-generate-certs]: https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/#plug-in-certificates-and-key-into-the-cluster
-[istio-mtls-reference]: https://istio.io/latest/docs/concepts/security/#mutual-tls-authentication
+[istio-mtls-reference]: https://istio.io/latest/docs/concepts/security/#mutual-tls-authentication
aks Kubelogin Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubelogin-authentication.md
Title: Use kubelogin to authenticate in Azure Kubernetes Service description: Learn how to use the kubelogin plugin for all Microsoft Entra authentication methods in Azure Kubernetes Service (AKS). -+ Last updated 11/28/2023
aks Monitor Control Plane Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-control-plane-metrics.md
+
+ Title: Monitor Azure Kubernetes Service control plane metrics (preview)
+description: Learn how to collect metrics from the Azure Kubernetes Service (AKS) control plane and view the telemetry in Azure Monitor.
++++ Last updated : 01/31/2024++
+#CustomerIntent: As a platform engineer, I want to collect metrics from the control plane and monitor them for any potential issues
++
+# Monitor Azure Kubernetes Service (AKS) control plane metrics (preview)
+
+The Azure Kubernetes Service (AKS) [control plane](concepts-clusters-workloads.md#control-plane) health is critical for the performance and reliability of the cluster. Control plane metrics (preview) provide more visibility into its availability and performance, allowing you to maximize overall observability and maintain operational excellence. These metrics are fully compatible with Prometheus and Grafana and can be customized to store only what you consider necessary. With these new metrics, you can collect all metrics from the API server, etcd, the scheduler, the autoscaler, and the controller manager.
+
+This article helps you understand this new feature, how to implement it, and how to observe the telemetry collected.
+
+## Prerequisites and limitations
+
+- Only supports [Azure Monitor managed service for Prometheus][managed-prometheus-overview].
+- [Private link](../azure-monitor/logs/private-link-security.md) isn't supported.
+- Only the default [ama-metrics-settings-config-map](../azure-monitor/containers/prometheus-metrics-scrape-configuration.md#configmaps) can be customized. All other customizations are not supported.
+- The cluster must use [managed identity authentication](use-managed-identity.md).
+- This feature is currently available in the following regions: West US 2, East Asia, UK South, East US, Australia Central, Australia East, Brazil South, Canada Central, Central India, East US 2, France Central, and Germany West Central.
+
+### Install or update the `aks-preview` Azure CLI extension
++
+Install the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] command.
+
+```azurecli-interactive
+az extension add --name aks-preview
+```
+
+If you need to update the extension version, you can do this using the [`az extension update`][az-extension-update] command.
+
+```azurecli-interactive
+az extension update --name aks-preview
+```
+
+### Register the 'AzureMonitorMetricsControlPlanePreview' feature flag
+
+Register the `AzureMonitorMetricsControlPlanePreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace "Microsoft.ContainerService"
+```
+
+## Enable control plane metrics on your AKS cluster
+
+You can enable control plane metrics with the Azure Monitor managed service for Prometheus add-on during cluster creation or for an existing cluster. To collect Prometheus metrics from your Kubernetes cluster, see [Enable Prometheus and Grafana for Kubernetes clusters][enable-monitoring-kubernetes-cluster] and follow the steps on the **CLI** tab for an AKS cluster. On the command-line, be sure to include the parameters `--generate-ssh-keys` and `--enable-managed-identity`.
+
+>[!NOTE]
+> Unlike the metrics collected from cluster nodes, control plane metrics are collected by a component which isn't part of the **ama-metrics** add-on. Enabling the `AzureMonitorMetricsControlPlanePreview` feature flag and the managed prometheus add-on ensures control plane metrics are collected. After enabling metric collection, it can take several minutes for the data to appear in the workspace.
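+
+As an illustrative sketch, enabling the metrics add-on on an existing cluster can look like the following; the cluster and resource group names are placeholders, and the cluster must already use managed identity:
+
+```azurecli-interactive
+az aks update --enable-azure-monitor-metrics \
+    --name <cluster-name> \
+    --resource-group <cluster-resource-group>
+```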
+
+## Querying control plane metrics
+
+Control plane metrics are stored in an Azure monitor workspace in the cluster's region. They can be queried directly from the workspace or through the Azure Managed Grafana instance connected to the workspace. To find the Azure Monitor workspace associated with the cluster, from the left-hand pane of your selected AKS cluster, navigate to the **Monitoring** section and select **Insights**. On the Container Insights page for the cluster, select **Monitor Settings**.
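+
+For example, once you know the workspace's query endpoint (shown on the Azure Monitor workspace overview page), you can run a PromQL query against its Prometheus-compatible API. The endpoint placeholder and the sample query below are illustrative, not prescriptive:
+
+```azurecli-interactive
+# Acquire a token for the Azure Monitor workspace query API
+TOKEN=$(az account get-access-token --resource https://prometheus.monitor.azure.com --query accessToken --output tsv)
+
+# Query the API server request rate over the last 5 minutes
+curl --silent "https://<workspace-query-endpoint>/api/v1/query" \
+    --header "Authorization: Bearer $TOKEN" \
+    --data-urlencode 'query=sum(rate(apiserver_request_total[5m]))'
+```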
++
+If you're using Azure Managed Grafana to visualize the data, you can import the following dashboards. AKS provides dashboard templates to help you view and analyze your control plane telemetry data in real-time.
+
+* [API server][grafana-dashboard-template-api-server]
+* [ETCD][grafana-dashboard-template-etcd]
+
+## Customize control plane metrics
+
+By default, AKS includes a pre-configured set of metrics to collect and store for each component. `API server` and `etcd` are enabled by default. This list can be customized through the [ama-metrics-settings-configmap][ama-metrics-settings-configmap]. The list of `minimal-ingestion` profile metrics is available [here][list-of-default-metrics-aks-control-plane].
+
+The following lists the default targets:
+
+```yaml
+controlplane-apiserver = true
+controlplane-cluster-autoscaler = false
+controlplane-kube-scheduler = false
+controlplane-kube-controller-manager = false
+controlplane-etcd = true
+```
+
+The available options are similar to those for Azure Managed Prometheus, listed [here][prometheus-metrics-scrape-configuration-minimal].
+
+All ConfigMaps should be applied to `kube-system` namespace for any cluster.
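+
+For orientation, an abridged sketch of the relevant sections of the settings ConfigMap might look like the following; the linked [ama-metrics-settings-configmap][ama-metrics-settings-configmap] file is the authoritative source for the exact keys and defaults:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: ama-metrics-settings-configmap
+  namespace: kube-system
+data:
+  default-scrape-settings-enabled: |-
+    controlplane-apiserver = true
+    controlplane-etcd = true
+  default-targets-metrics-keep-list: |-
+    minimalingestionprofile = "true"
+```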
+
+### Ingest only minimal metrics for the default targets
+
+This is the default behavior with the setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"`. Only metrics listed later in this article are ingested for each of the default targets, which in this case are `controlplane-apiserver` and `controlplane-etcd`.
+
+### Ingest all metrics from all targets
+
+Perform the following steps to collect all metrics from all targets on the cluster.
+
+1. Download the ConfigMap file [ama-metrics-settings-configmap.yaml][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.
+
+1. Set `minimalingestionprofile = false` and verify that the targets you want to scrape under `default-scrape-settings-enabled` are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`.
+
+1. Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f configmap-controlplane.yaml
+ ```
+
+ After the configuration is applied, it takes several minutes before the metrics from the specified targets scraped from the control plane appear in the Azure Monitor workspace.
+
+### Ingest a few other metrics in addition to minimal metrics
+
+`Minimal ingestion profile` is a setting that helps reduce the ingestion volume of metrics, as only metrics used by default dashboards, default recording rules, and default alerts are collected. Perform the following steps to customize this behavior.
+
+1. Download the ConfigMap file [ama-metrics-settings-configmap][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.
+
+1. Set `minimalingestionprofile = true` and verify the targets under `default-scrape-settings-enabled` that you want to scrape are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`.
+
+1. Under the `default-target-metrics-list`, specify the list of metrics for the `true` targets. For example,
+
+ ```yaml
+ controlplane-apiserver= "apiserver_admission_webhook_admission_duration_seconds| apiserver_longrunning_requests"
+ ```
+
+1. Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f configmap-controlplane.yaml
+ ```
+
+ After the configuration is applied, it takes several minutes before the metrics from the specified targets scraped from the control plane appear in the Azure Monitor workspace.
+
+### Ingest only specific metrics from some targets
+
+1. Download the ConfigMap file [ama-metrics-settings-configmap][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.
+
+1. Set `minimalingestionprofile = false` and verify that the targets you want to scrape under `default-scrape-settings-enabled` are set to `true`. The only targets you can specify here are `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`.
+
+1. Under the `default-target-metrics-list`, specify the list of metrics for the `true` targets. For example,
+
+ ```yaml
+ controlplane-apiserver= "apiserver_admission_webhook_admission_duration_seconds| apiserver_longrunning_requests"
+ ```
+
+1. Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f configmap-controlplane.yaml
+ ```
+
+ After the configuration is applied, it takes several minutes before the metrics from the specified targets scraped from the control plane appear in the Azure Monitor workspace.
+
+## Troubleshoot control plane metrics issues
+
+Make sure to check that the feature flag `AzureMonitorMetricsControlPlanePreview` is enabled and the `ama-metrics` pods are running.
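+
+You can confirm both conditions from the command line; this is only a quick starting point for diagnosis:
+
+```azurecli-interactive
+# Check the feature flag registration state (should be "Registered")
+az feature show --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview" --query properties.state
+
+# Check that the ama-metrics pods are running
+kubectl get pods --namespace kube-system | grep ama-metrics
+```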
+
+> [!NOTE]
+> The [troubleshooting methods][prometheus-troubleshooting] for Azure managed service Prometheus won't translate directly here as the components scraping the control plane aren't present in the managed prometheus add-on.
+
+### ConfigMap formatting or errors
+
+Double-check the formatting of the ConfigMap and verify that the fields are correctly populated with the intended values, specifically `default-targets-metrics-keep-list`, `minimal-ingestion-profile`, and `default-scrape-settings-enabled`.
+
+### Isolate control plane from data plane issue
+
+Start by setting some of the [node-related metrics][node-metrics] to `true` and verify that those metrics are being forwarded to the workspace. This helps determine whether the issue is specific to scraping control plane metrics.
+
+### Events ingested
+
+After you apply the changes, you can open the metrics explorer from the **Azure Monitor overview** page or from the **Monitoring** section of the selected cluster. In the Azure portal, select **Metrics**. Check for an increase or decrease in the number of events ingested per minute. This should help you determine whether a specific metric is missing or all metrics are missing.
+
+### Specific metric is not exposed
+
+In some cases, metrics are documented but not exposed by the target, so they aren't forwarded to the Azure Monitor workspace. In this case, verify that other metrics are being forwarded to the workspace.
+
+### No access to the Azure Monitor workspace
+
+When you enable the add-on, you might have specified an existing workspace that you don't have access to. In that case, it might look like the metrics aren't being collected and forwarded. To avoid this, create a new workspace, or choose a workspace you have access to, while enabling the add-on or creating the cluster.
+
+## Disable control plane metrics on your AKS cluster
+
+You can disable control plane metrics at any time by disabling the feature flag, disabling managed Prometheus, or deleting the AKS cluster.
+
+> [!NOTE]
+> This action doesn't remove any existing data stored in your Azure Monitor workspace.
+
+Run the following command to remove the metrics add-on that scrapes Prometheus metrics.
+
+```azurecli-interactive
+az aks update --disable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group>
+```
+
+Run the following command to disable scraping of control plane metrics on the AKS cluster by unregistering the `AzureMonitorMetricsControlPlanePreview` feature flag using the [az feature unregister][az-feature-unregister] command.
+
+```azurecli-interactive
+az feature unregister --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
+```
+
+## Next steps
+
+After evaluating this preview feature, [share your feedback][share-feedback]. We're interested in hearing what you think.
+
+- Learn more about the [list of default metrics for AKS control plane][list-of-default-metrics-aks-control-plane].
+
+<!-- EXTERNAL LINKS -->
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[ama-metrics-settings-configmap]: https://github.com/Azure/prometheus-collector/blob/89e865a73601c0798410016e9beb323f1ecba335/otelcollector/configmaps/ama-metrics-settings-configmap.yaml
+[share-feedback]: https://forms.office.com/r/Mq4hdZ1W7W
+[grafana-dashboard-template-api-server]: https://grafana.com/grafana/dashboards/20331-kubernetes-api-server/
+[grafana-dashboard-template-etcd]: https://grafana.com/grafana/dashboards/20330-kubernetes-etcd/
+
+<!-- INTERNAL LINKS -->
+[managed-prometheus-overview]: ../azure-monitor/essentials/prometheus-metrics-overview.md
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
+[enable-monitoring-kubernetes-cluster]: ../azure-monitor/containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana
+[prometheus-metrics-scrape-configuration-minimal]: ../azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md#scenarios
+[prometheus-troubleshooting]: ../azure-monitor/containers/prometheus-metrics-troubleshoot.md
+[node-metrics]: ../azure-monitor/containers/prometheus-metrics-scrape-default.md
+[list-of-default-metrics-aks-control-plane]: control-plane-metrics-default-list.md
+[az-feature-unregister]: /cli/azure/feature#az-feature-unregister
+[release-tracker]: https://releases.aks.azure.com/#tabversion
aks Operator Best Practices Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-multi-region.md
- Title: Best practices for business continuity and disaster recovery in Azure Kubernetes Service (AKS)
-description: Best practices for a cluster operator to achieve maximum uptime for your applications and to provide high availability and prepare for disaster recovery in Azure Kubernetes Service (AKS).
- Previously updated : 03/08/2023-
-#Customer intent: As an AKS cluster operator, I want to plan for business continuity or disaster recovery to help protect my cluster from region problems.
--
-# Best practices for business continuity and disaster recovery in Azure Kubernetes Service (AKS)
-
-As you manage clusters in Azure Kubernetes Service (AKS), application uptime becomes important. By default, AKS provides high availability by using multiple nodes in a [Virtual Machine Scale Set (VMSS)](../virtual-machine-scale-sets/overview.md). But these multiple nodes don't protect your system from a region failure. To maximize your uptime, plan ahead to maintain business continuity and prepare for disaster recovery.
-
-This article focuses on how to plan for business continuity and disaster recovery in AKS. You learn how to:
-
-> [!div class="checklist"]
-
-> * Plan for AKS clusters in multiple regions.
-> * Route traffic across multiple clusters using Azure Traffic Manager.
-> * Use geo-replication for your container image registries.
-> * Plan for application state across multiple clusters.
-> * Replicate storage across multiple regions.
-
-## Plan for multiregion deployment
-
-> **Best practice**
->
-> When you deploy multiple AKS clusters, choose regions where AKS is available. Use paired regions.
-
-An AKS cluster is deployed into a single region. To protect your system from region failure, deploy your application into multiple AKS clusters across different regions. When planning where to deploy your AKS cluster, consider:
-
-* [**AKS region availability**](./quotas-skus-regions.md#region-availability)
- * Choose regions close to your users.
- * AKS continually expands into new regions.
-
-* [**Azure paired regions**](../availability-zones/cross-region-replication-azure.md)
- * For your geographic area, choose two regions paired together.
- * AKS platform updates (planned maintenance) are serialized with a delay of at least 24 hours between paired regions.
- * Recovery efforts for paired regions are prioritized where needed.
-
-* **Service availability**
- * Decide whether your paired regions should be hot/hot, hot/warm, or hot/cold.
- * Do you want to run both regions at the same time, with one region *ready* to start serving traffic? *or*
- * Do you want to give one region time to get ready to serve traffic?
-
-AKS region availability and paired regions are a joint consideration. Deploy your AKS clusters into paired regions designed to manage region disaster recovery together. For example, AKS is available in East US and West US. These regions are paired. Choose these two regions when you're creating an AKS BC/DR strategy.
-
-When you deploy your application, add another step to your CI/CD pipeline to deploy to these multiple AKS clusters. Updating your deployment pipelines prevents applications from deploying into only one of your regions and AKS clusters. In that scenario, customer traffic directed to a secondary region won't receive the latest code updates.
-
-## Use Azure Traffic Manager to route traffic
-
-> **Best practice**
->
-> For the best performance and redundancy, direct all application traffic through Traffic Manager before it goes to your AKS cluster.
-
-If you have multiple AKS clusters in different regions, use Traffic Manager to control traffic flow to the applications running in each cluster. [Azure Traffic Manager](../traffic-manager/index.yml) is a DNS-based traffic load balancer that can distribute network traffic across regions. Use Traffic Manager to route users based on cluster response time or based on priority.
-
-![AKS with Traffic Manager](media/operator-best-practices-bc-dr/aks-azure-traffic-manager.png)
-
-If you have a single AKS cluster, you typically connect to the service IP or DNS name of a given application. In a multi-cluster deployment, you should connect to a Traffic Manager DNS name that points to the services on each AKS cluster. Define these services by using Traffic Manager endpoints. Each endpoint is the *service load balancer IP*. Use this configuration to direct network traffic from the Traffic Manager endpoint in one region to the endpoint in a different region.
-
-Traffic Manager performs DNS lookups and returns your most appropriate endpoint. With priority routing you can enable a primary service endpoint and multiple backup endpoints in case the primary or one of the backup endpoints is unavailable.
-
-![Priority routing through Traffic Manager](media/operator-best-practices-bc-dr/traffic-manager-priority-routing.png)
-
-For information on how to set up endpoints and routing, see [Configure priority traffic routing method in Traffic Manager](../traffic-manager/traffic-manager-configure-priority-routing-method.md).
-
-### Application routing with Azure Front Door Service
-
-Using split TCP-based anycast protocol, [Azure Front Door Service](../frontdoor/front-door-overview.md) promptly connects your end users to the nearest Front Door POP (Point of Presence). More features of Azure Front Door Service:
-
-* TLS termination
-* Custom domain
-* Web application firewall
-* URL Rewrite
-* Session affinity
-
-Review the needs of your application traffic to understand which solution is the most suitable.
-
-### Interconnect regions with global virtual network peering
-
-Connect both virtual networks to each other through [virtual network peering](../virtual-network/virtual-network-peering-overview.md) to enable communication between clusters. Virtual network peering interconnects virtual networks, providing high bandwidth across Microsoft's backbone network - even across different geographic regions.
-
-Before peering virtual networks with running AKS clusters, use the standard Load Balancer in your AKS cluster. This prerequisite makes Kubernetes services reachable across the virtual network peering.
-
-## Enable geo-replication for container images
-
-> **Best practice**
->
-> Store your container images in Azure Container Registry and geo-replicate the registry to each AKS region.
-
-To deploy and run your applications in AKS, you need a way to store and pull the container images. Container Registry integrates with AKS, so it can securely store your container images or Helm charts. Container Registry supports multimaster geo-replication to automatically replicate your images to Azure regions around the world.
-
-To improve performance and availability, use Container Registry geo-replication to create a registry in each region where you have an AKS cluster. Each AKS cluster will then pull container images from the local container registry in the same region.
-
-![Container Registry geo-replication for container images](media/operator-best-practices-bc-dr/acr-geo-replication.png)
-
-Using Container Registry geo-replication to pull images from the same region has the following benefits:
-
-* **Faster**: Pull images from high-speed, low-latency network connections within the same Azure region.
-* **More reliable**: If a region is unavailable, your AKS cluster pulls the images from an available container registry.
-* **Cheaper**: No network egress charge between datacenters.
-
-Geo-replication is a *Premium* SKU container registry feature. For information on how to configure geo-replication, see [Container Registry geo-replication](../container-registry/container-registry-geo-replication.md).
-
-## Remove service state from inside containers
-
-> **Best practice**
->
-> Avoid storing service state inside the container. Instead, use an Azure platform as a service (PaaS) that supports multi-region replication.
-
-*Service state* refers to the in-memory or on-disk data required by a service to function. State includes the data structures and member variables that the service reads and writes. Depending on how the service is architected, the state might also include files or other resources stored on the disk. For example, the state might include the files a database uses to store data and transaction logs.
-
-State can be either externalized or co-located with the code that manipulates the state. Typically, you externalize state by using a database or other data store that runs on different machines over the network or that runs out of process on the same machine.
-
-Containers and microservices are most resilient when the processes that run inside them don't retain state. Since applications almost always contain some state, use a PaaS solution, such as:
-
-* Azure Cosmos DB
-* Azure Database for PostgreSQL
-* Azure Database for MySQL
-* Azure SQL Database
-
-To build portable applications, see the following guidelines:
-
-* [The 12-factor app methodology](https://12factor.net/)
-* [Run a web application in multiple Azure regions](/azure/architecture/reference-architectures/app-service-web-app/multi-region)
-
-## Create a storage migration plan
-
-> **Best practice**
->
-> If you use Azure Storage, prepare and test how to migrate your storage from the primary region to the backup region.
-
-Your applications might use Azure Storage for their data. If so, your applications are spread across multiple AKS clusters in different regions. You need to keep the storage synchronized. Here are two common ways to replicate storage:
-
-* Infrastructure-based asynchronous replication
-* Application-based asynchronous replication
-
-### Infrastructure-based asynchronous replication
-
-Your applications might require persistent storage even after a pod is deleted. In Kubernetes, you can use persistent volumes to persist data storage. Persistent volumes are mounted to a node VM and then exposed to the pods. Persistent volumes follow pods even if the pods are moved to a different node inside the same cluster.
-
-The replication strategy you use depends on your storage solution. The following common storage solutions provide their own guidance about disaster recovery and replication:
-
-* [Gluster](https://docs.gluster.org/en/latest/Administrator-Guide/Geo-Replication/)
-* [Ceph](https://docs.ceph.com/docs/master/cephfs/disaster-recovery/)
-* [Rook](https://rook.io/docs/rook/v1.2/ceph-disaster-recovery.html)
-* [Portworx](https://docs.portworx.com/portworx-enterprise/operations/operate-kubernetes/storage-operations/kubernetes-storage-101/volumes.html)
-
-Typically, you provide a common storage point where applications write their data. This data is then replicated across regions and accessed locally.
-
-![Infrastructure-based asynchronous replication](media/operator-best-practices-bc-dr/aks-infra-based-async-repl.png)
-
-If you use Azure Managed Disks, you can use [Velero on Azure][velero] and [Kasten][kasten] to handle replication and disaster recovery. These options are backup solutions native to, but unsupported by, Kubernetes.
-
-### Application-based asynchronous replication
-
-Kubernetes currently provides no native implementation for application-based asynchronous replication. Since containers and Kubernetes are loosely coupled, any traditional application or language approach should work. Typically, the applications themselves replicate the storage requests, which are then written to each cluster's underlying data storage.
-
-![Application-based asynchronous replication](media/operator-best-practices-bc-dr/aks-app-based-async-repl.png)
-
-## Next steps
-
-This article focuses on business continuity and disaster recovery considerations for AKS clusters. For more information about cluster operations in AKS, see these articles about best practices:
-
-* [Multitenancy and cluster isolation][aks-best-practices-cluster-isolation]
-* [Basic Kubernetes scheduler features][aks-best-practices-scheduler]
-
-<!-- INTERNAL LINKS -->
-[aks-best-practices-scheduler]: operator-best-practices-scheduler.md
-[aks-best-practices-cluster-isolation]: operator-best-practices-cluster-isolation.md
-
-[velero]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/README.md
-[kasten]: https://www.kasten.io/
aks Passive Cold Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/passive-cold-solution.md
+
+ Title: Passive-cold solution overview for Azure Kubernetes Service (AKS)
+description: Learn about a passive-cold disaster solution overview for Azure Kubernetes Service (AKS).
++++ Last updated : 01/30/2024++
+# Passive-cold solution overview for Azure Kubernetes Service (AKS)
+
+When you create an application in Azure Kubernetes Service (AKS) and choose an Azure region during resource creation, it's a single-region app. If the region becomes unavailable during a disaster, your application also becomes unavailable. Creating an identical deployment in a secondary Azure region makes your application less susceptible to a single-region disaster, which helps ensure business continuity, and any data replication across the regions lets you recover your last application state.
+
+This guide outlines a passive-cold solution for AKS. Within this solution, we deploy two independent and identical AKS clusters into two paired Azure regions with only one cluster actively serving traffic when the application is needed.
+
+> [!NOTE]
+> The following practice has been reviewed internally and vetted in conjunction with our Microsoft partners.
+
+## Passive-cold solution overview
+
+In this approach, we have two independent AKS clusters deployed in two Azure regions. When the application is needed, we activate the passive cluster to receive traffic. If the passive cluster goes down, we must manually activate the cold cluster to take over the flow of traffic. This activation can be triggered by a manual input each time, or configured to happen on a specified event.
+
+## Scenarios and configurations
+
+This solution is best implemented as a "use as needed" workload, which is useful for scenarios that require workloads to run at specific times of day or run on demand. Example use cases for a passive-cold approach include:
+
+- A manufacturing company that needs to run a complex and resource-intensive simulation on a large dataset. In this case, the passive cluster is located in a cloud region that offers high-performance computing and storage services. The passive cluster is only used when the simulation is triggered by the user or by a schedule. If the cluster doesn't work upon triggering, the cold cluster can be used as a backup and the workload can run on it instead.
+- A government agency that needs to maintain a backup of its critical systems and data in case of a cyber attack or natural disaster. In this case, the passive cluster is located in a secure and isolated location that's not accessible to the public.
+
+## Components
+
+The passive-cold disaster recovery solution uses many Azure services. This example architecture involves the following components:
+
+**Multiple clusters and regions**: You deploy multiple AKS clusters, each in a separate Azure region. When the app is needed, the passive cluster is activated to receive network traffic.
+
+**Key Vault**: You provision an [Azure Key Vault](../key-vault/general/overview.md) in each region to store secrets and keys.
+
+**Log Analytics**: Regional [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) instances store regional networking metrics and diagnostic logs. A shared instance stores metrics and diagnostic logs for all AKS instances.
+
+**Hub-spoke pair**: A hub-spoke pair is deployed for each regional AKS instance. [Azure Firewall Manager](../firewall-manager/overview.md) policies manage the firewall rules across each region.
+
+**Container Registry**: The container images for the workload are stored in a managed container registry. With this solution, a single [Azure Container Registry](../container-registry/container-registry-intro.md) instance is used for all Kubernetes instances in the cluster. Geo-replication for Azure Container Registry enables you to replicate images to the selected Azure regions and provides continued access to images even if a region experiences an outage.
+
+## Failover process
+
+If the passive cluster isn't functioning properly because of an issue in its specific Azure region, you can activate the cold cluster and redirect all traffic to that cluster's region. You can keep using this process while the passive cluster is deactivated, until it starts working again. The cold cluster can take a couple of minutes to come online, as it has been turned off and needs to complete the setup process. This approach isn't ideal for time-sensitive applications. In that case, we recommend considering an [active-active failover](./active-active-solution.md#failover-process).
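+
+If the cold cluster was stopped rather than deleted, activating it can be as simple as starting it and then redirecting traffic; the names below are placeholders:
+
+```azurecli-interactive
+az aks start --name <cold-cluster-name> --resource-group <cold-cluster-resource-group>
+```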
+
+### Application Pods (Regional)
+
+A Kubernetes deployment object creates multiple replicas of a pod (*ReplicaSet*). If one is unavailable, traffic is routed between the remaining replicas. The Kubernetes *ReplicaSet* attempts to keep the specified number of replicas up and running. If one instance goes down, a new instance should be recreated. [Liveness probes](../container-instances/container-instances-liveness-probe.md) can check the state of the application or process running in the pod. If the pod is unresponsive, the liveness probe fails and the pod is restarted, and the *ReplicaSet* keeps the replica count at the specified number.
+
+For more information, see [Kubernetes ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/).
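+
+For illustration, a liveness probe on a pod might look like the following sketch; the image, endpoint, and timings are assumptions for a typical HTTP workload:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: web
+spec:
+  containers:
+  - name: web
+    image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
+    livenessProbe:
+      httpGet:
+        path: /healthz   # assumed health endpoint
+        port: 80
+      initialDelaySeconds: 10
+      periodSeconds: 15
+      failureThreshold: 3
+```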
+
+### Application Pods (Global)
+
+When an entire region becomes unavailable, the pods in the cluster are no longer available to serve requests. In this case, the Azure Front Door instance routes all traffic to the remaining healthy regions. The Kubernetes clusters and pods in these regions continue to serve requests. To compensate for increased traffic and requests to the remaining cluster, keep in mind the following guidance:
+
+- Make sure network and compute resources are right sized to absorb any sudden increase in traffic due to region failover. For example, when using Azure Container Network Interface (CNI), make sure you have a subnet that can support all pod IPs with a spiked traffic load.
+- Use the [Horizontal Pod Autoscaler](./concepts-scale.md#horizontal-pod-autoscaler) to increase the pod replica count to compensate for the increased regional demand.
+- Use the AKS [Cluster Autoscaler](./cluster-autoscaler.md) to increase the Kubernetes instance node counts to compensate for the increased regional demand.
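The last two recommendations above can be sketched as follows; the Deployment name, thresholds, replica bounds, and resource names are assumptions for illustration:

```shell
# Sketch: scale pods on CPU pressure, and let the AKS cluster autoscaler add nodes.
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app            # hypothetical Deployment to scale
  minReplicas: 3
  maxReplicas: 15            # headroom for traffic shifted from a failed region
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
EOF

# Enable the cluster autoscaler on an existing cluster (placeholder names)
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 3 \
  --max-count 10
```

The `maxReplicas` and `--max-count` ceilings should be sized so a single region can absorb the full load of a failed peer region.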
+
+### Kubernetes node pools (Regional)
+
+Occasionally, localized failures can occur in compute resources, such as power becoming unavailable in a single rack of Azure servers. To protect your AKS nodes from becoming a single point of failure within a region, use [Azure Availability Zones](./availability-zones.md). Availability zones ensure that AKS nodes in each availability zone are physically separated from nodes in other availability zones.
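For example, a node pool spanning zones can be added with the Azure CLI; the resource names are placeholders, and the chosen region must support availability zones:

```shell
# Add a node pool whose nodes are spread across three availability zones
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name zonepool \
  --node-count 3 \
  --zones 1 2 3
```

A rack or datacenter failure in one zone then leaves nodes in the other two zones available to serve traffic.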
+
+### Kubernetes node pools (Global)
+
+In a complete regional failure, Azure Front Door routes traffic to the remaining healthy regions. Again, make sure to compensate for increased traffic and requests to the remaining cluster.
+
+## Failover testing strategy
+
+While there are no mechanisms currently available within AKS to take down an entire region of deployment for testing purposes, [Azure Chaos Studio](../chaos-studio/chaos-studio-overview.md) offers the ability to create a chaos experiment on your cluster.
+
+## Next steps
+
+If you're considering a different solution, see the following articles:
+
- [Active-passive disaster recovery solution overview for Azure Kubernetes Service (AKS)](./active-passive-solution.md)
- [Active-active high availability solution overview for Azure Kubernetes Service (AKS)](./active-active-solution.md)
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
api-center Manage Apis Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/manage-apis-azure-cli.md
Title: Manage API inventory in Azure API Center - Azure CLI
description: Use the Azure CLI to create and update APIs, API versions, and API definitions in your Azure API center. + Last updated 01/12/2024
api-management Api Management Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-capacity.md
To follow the steps in this article, you must have:
+ API Management management plane services, such as management actions applied via the Azure portal or Azure Resource Manager, or load coming from the [developer portal](api-management-howto-developer-portal.md).
+ Selected operating system processes, including processes that involve cost of TLS handshakes on new connections.
+ Platform updates, such as OS updates on the underlying compute resources for the instance.
+ Number of APIs deployed, regardless of activity, which can consume additional capacity.

Total **capacity** is an average of its own values from every [unit](upgrade-and-scale.md) of an API Management instance.
Low **capacity metric** doesn't necessarily mean that your API Management instan
- [Upgrade and scale an Azure API Management service instance](upgrade-and-scale.md) - [Automatically scale an Azure API Management instance](api-management-howto-autoscale.md)-- [Plan and manage costs for API Management](plan-manage-costs.md)
+- [Plan and manage costs for API Management](plan-manage-costs.md)
api-management Api Management Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-kubernetes.md
Microservices are perfect for building APIs. With [Azure Kubernetes Service](htt
## Background
-When publishing microservices as APIs for consumption, it can be challenging to manage the communication between the microservices and the clients that consume them. There is a multitude of cross-cutting concerns such as authentication, authorization, throttling, caching, transformation, and monitoring. These concerns are valid regardless of whether the microservices are exposed to internal or external clients.
+When publishing microservices as APIs for consumption, it can be challenging to manage the communication between the microservices and the clients that consume them. There's a multitude of cross-cutting concerns such as authentication, authorization, throttling, caching, transformation, and monitoring. These concerns are valid regardless of whether the microservices are exposed to internal or external clients.
The [API Gateway](/dotnet/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern) pattern addresses these concerns. An API gateway serves as a front door to the microservices, decouples clients from your microservices, adds an additional layer of security, and decreases the complexity of your microservices by removing the burden of handling cross cutting concerns. [Azure API Management](https://aka.ms/apimrocks) is a turnkey solution to solve your API gateway needs. You can quickly create a consistent and modern gateway for your microservices and publish them as APIs. As a full-lifecycle API management solution, it also provides additional capabilities including a self-service developer portal for API discovery, API lifecycle management, and API analytics.
-When used together, AKS and API Management provide a platform for deploying, publishing, securing, monitoring, and managing your microservices-based APIs. In this article, we will go through a few options of deploying AKS in conjunction with API Management.
+When used together, AKS and API Management provide a platform for deploying, publishing, securing, monitoring, and managing your microservices-based APIs. In this article, we'll go through a few options of deploying AKS in conjunction with API Management.
## Kubernetes Services and APIs
-In a Kubernetes cluster, containers are deployed in [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/), which are ephemeral and have a lifecycle. When a worker node dies, the Pods running on the node are lost. Therefore, the IP address of a Pod can change anytime. We cannot rely on it to communicate with the pod.
+In a Kubernetes cluster, containers are deployed in [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/), which are ephemeral and have a lifecycle. When a worker node dies, the Pods running on the node are lost. Therefore, the IP address of a Pod can change anytime. We can't rely on it to communicate with the pod.
To solve this problem, Kubernetes introduced the concept of [Services](https://kubernetes.io/docs/concepts/services-networking/service/). A Kubernetes Service is an abstraction layer which defines a logic group of Pods and enables external traffic exposure, load balancing and service discovery for those Pods.
-When we are ready to publish our microservices as APIs through API Management, we need to think about how to map our Services in Kubernetes to APIs in API Management. There are no set rules. It depends on how you designed and partitioned your business capabilities or domains into microservices at the beginning. For instance, if the pods behind a Service are responsible for all operations on a given resource (e.g., Customer), the Service may be mapped to one API. If operations on a resource are partitioned into multiple microservices (e.g., GetOrder, PlaceOrder), then multiple Services may be logically aggregated into one single API in API management (See Fig. 1).
+When we are ready to publish our microservices as APIs through API Management, we need to think about how to map our Services in Kubernetes to APIs in API Management. There are no set rules. It depends on how you designed and partitioned your business capabilities or domains into microservices at the beginning. For instance, if the pods behind a Service are responsible for all operations on a given resource (for example, Customer), the Service may be mapped to one API. If operations on a resource are partitioned into multiple microservices (for example, GetOrder, PlaceOrder), then multiple Services may be logically aggregated into one single API in API management (See Fig. 1).
The mappings can also evolve. Since API Management creates a facade in front of the microservices, it allows us to refactor and right-size our microservices over time.
The mappings can also evolve. Since API Management creates a facade in front of
There are a few options of deploying API Management in front of an AKS cluster.
-While an AKS cluster is always deployed in a virtual network (VNet), an API Management instance is not required to be deployed in a VNet. When API Management does not reside within the cluster VNet, the AKS cluster has to publish public endpoints for API Management to connect to. In that case, there is a need to secure the connection between API Management and AKS. In other words, we need to ensure the cluster can only be accessed exclusively through API Management. Let's go through the options.
+While an AKS cluster is always deployed in a virtual network (VNet), an API Management instance isn't required to be deployed in a VNet. When API Management doesn't reside within the cluster VNet, the AKS cluster has to publish public endpoints for API Management to connect to. In that case, there's a need to secure the connection between API Management and AKS. In other words, we need to ensure the cluster can be accessed exclusively through API Management. Let's go through the options.
### Option 1: Expose Services publicly
This might be the easiest option to deploy API Management in front of AKS, espec
![Publish services directly](./media/api-management-aks/direct.png) Pros:
-* Easy configuration on the API Management side because it does not need to be injected into the cluster VNet
+* Easy configuration on the API Management side because it doesn't need to be injected into the cluster VNet
* No change on the AKS side if Services are already exposed publicly and authentication logic already exists in microservices Cons:
Cons:
### Option 2: Install an Ingress Controller
-Although Option 1 might be easier, it has notable drawbacks as mentioned above. If an API Management instance does not reside in the cluster VNet, Mutual TLS authentication (mTLS) is a robust way of ensuring the traffic is secure and trusted in both directions between an API Management instance and an AKS cluster.
+Although Option 1 might be easier, it has notable drawbacks as mentioned above. If an API Management instance doesn't reside in the cluster VNet, Mutual TLS authentication (mTLS) is a robust way of ensuring the traffic is secure and trusted in both directions between an API Management instance and an AKS cluster.
Mutual TLS authentication is [natively supported](./api-management-howto-mutual-certificates.md) by API Management and can be enabled in Kubernetes by [installing an Ingress Controller](../aks/ingress-own-tls.md) (Fig. 3). As a result, authentication will be performed in the Ingress Controller, which simplifies the microservices. Additionally, you can add the IP addresses of API Management to the allowed list by Ingress to make sure only API Management has access to the cluster.
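As a sketch of the allowlist approach, assuming the NGINX ingress controller (the annotation below is NGINX-specific; the host, Service name, TLS secret, and IP address are placeholders):

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-ingress
  annotations:
    # Allow inbound traffic only from the API Management instance's IP (placeholder)
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.10/32"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - api.contoso.example
    secretName: tls-secret        # certificate used for TLS termination
  rules:
  - host: api.contoso.example
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: orders          # hypothetical backend Service
            port:
              number: 80
EOF
```

Combined with mTLS at the ingress layer, the allowlist ensures that only the API Management gateway can reach the cluster's published endpoint.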
Mutual TLS authentication is [natively supported](./api-management-howto-mutual-
Pros:
-* Easy configuration on the API Management side because it does not need to be injected into the cluster VNet and mTLS is natively supported
+* Easy configuration on the API Management side because it doesn't need to be injected into the cluster VNet and mTLS is natively supported
* Centralizes protection for inbound cluster traffic at the Ingress Controller layer * Reduces security risk by minimizing publicly visible cluster endpoints
To get a subscription key for accessing APIs, a subscription is required. A subs
### Option 3: Deploy APIM inside the cluster VNet
-In some cases, customers with regulatory constraints or strict security requirements may find Option 1 and 2 not viable solutions due to publicly exposed endpoints. In others, the AKS cluster and the applications that consume the microservices might reside within the same VNet, hence there is no reason to expose the cluster publicly as all API traffic will remain within the VNet. For these scenarios, you can deploy API Management into the cluster VNet. [API Management Developer and Premium tiers](https://aka.ms/apimpricing) support VNet deployment.
+In some cases, customers with regulatory constraints or strict security requirements may find Options 1 and 2 not viable due to publicly exposed endpoints. In others, the AKS cluster and the applications that consume the microservices might reside within the same VNet, hence there's no reason to expose the cluster publicly as all API traffic will remain within the VNet. For these scenarios, you can deploy API Management into the cluster VNet. [API Management Developer and Premium tiers](https://aka.ms/apimpricing) support VNet deployment.
-There are two modes of [deploying API Management into a VNet](./api-management-using-with-vnet.md) – External and Internal.
+There are two modes of [deploying API Management into a VNet](./virtual-network-concepts.md) – External and Internal.
If API consumers do not reside in the cluster VNet, the External mode (Fig. 4) should be used. In this mode, the API Management gateway is injected into the cluster VNet but accessible from public internet via an external load balancer. It helps to hide the cluster completely while still allowing external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSG) to restrict network traffic. ![External VNet mode](./media/api-management-aks/vnet-external.png)
-If all API consumers reside within the cluster VNet, then the Internal mode (Fig. 5) could be used. In this mode, the API Management gateway is injected into the cluster VNET and accessible only from within this VNet via an internal load balancer. There is no way to reach the API Management gateway or the AKS cluster from public internet.
+If all API consumers reside within the cluster VNet, then the Internal mode (Fig. 5) could be used. In this mode, the API Management gateway is injected into the cluster VNet and accessible only from within this VNet via an internal load balancer. There's no way to reach the API Management gateway or the AKS cluster from public internet.
![Internal VNet mode](./media/api-management-aks/vnet-internal.png)
- In both cases, the AKS cluster is not publicly visible. Compared to Option 2, the Ingress Controller may not be necessary. Depending on your scenario and configuration, authentication might still be required between API Management and your microservices. For instance, if a Service Mesh is adopted, it always requires mutual TLS authentication.
+ In both cases, the AKS cluster isn't publicly visible. Compared to Option 2, the Ingress Controller may not be necessary. Depending on your scenario and configuration, authentication might still be required between API Management and your microservices. For instance, if a Service Mesh is adopted, it always requires mutual TLS authentication.
Pros: * The most secure option because the AKS cluster has no public endpoint
Cons:
## Next steps * Learn more about [Network concepts for applications in AKS](../aks/concepts-network.md)
-* Learn more about [How to use API Management with virtual networks](./api-management-using-with-vnet.md)
+* Learn more about [How to use API Management with virtual networks](./virtual-network-concepts.md)
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
A subscriber can use an API Management subscription key in one of two ways:
> [!TIP] > **Ocp-Apim-Subscription-Key** is the default name of the subscription key header, and **subscription-key** is the default name of the query parameter. If desired, you may modify these names in the settings for each API. For example, in the portal, update these names on the **Settings** tab of an API.
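For example, a client can supply the key either way using the default names; the gateway URL, API path, and key value are placeholders:

```shell
# Subscription key in the default request header
curl -H "Ocp-Apim-Subscription-Key: <your-key>" \
  "https://<apim-name>.azure-api.net/orders"

# Subscription key in the default query parameter
curl "https://<apim-name>.azure-api.net/orders?subscription-key=<your-key>"
```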
+> [!NOTE]
+> When included in a request header or query parameter, the subscription key by default is passed to the backend and may be exposed in backend monitoring logs or other systems. If this is considered sensitive data, you can configure a policy in the `inbound` section to remove the subscription key header ([`set-header`](set-header-policy.md)) or query parameter ([`set-query-parameter`](set-query-parameter-policy.md)).
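A minimal sketch of such a policy follows. One common placement is the `inbound` section, so the default header is stripped before the request is forwarded to the backend; the right scope and placement depend on your configuration:

```xml
<policies>
    <inbound>
        <base />
        <!-- Remove the subscription key so it isn't passed to the backend -->
        <set-header name="Ocp-Apim-Subscription-Key" exists-action="delete" />
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>
```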
+ ## Enable or disable subscription requirement for API or product access By default when you create an API, a subscription key is required for API access. Similarly, when you create a product, by default a subscription key is required to access any API that's added to the product. Under certain scenarios, an API publisher might want to publish a product or a particular API to the public without the requirement of subscriptions. While a publisher could choose to enable unsecured (anonymous) access to certain APIs, configuring another mechanism to secure client access is recommended.
api-management Cosmosdb Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cosmosdb-data-source-policy.md
Use the policy to configure a single query request, read request, delete request
<write-request type="insert | replace | upsert" consistency-level="bounded-staleness | consistent-prefix | eventual | session | strong" pre-trigger="myPreTrigger" post-trigger="myPostTrigger"> <id template="liquid"> "Item ID in container"
- </id>
+ </id>
+ <partition-key data-type="string | number | bool | none | null" template="liquid">
+ "Container partition key"
+ </partition-key>
<etag type="match | no-match" template="liquid" > "System-generated entity tag"
- </etag>
- <set-body template="liquid" >...set-body policy configuration...</set-body>
- <partition-key data-type="string | number | bool | none | null" template="liquid">
- "Container partition key"
- </partition-key>
+ </etag>
+ <set-body template="liquid" >...set-body policy configuration...</set-body>
</write-request> <response>
resourceGroupName="<MY-RESOURCE-GROUP>"
# Variable for subscription
subscriptionName="<MY-SUBSCRIPTION-NAME>"
-# Set principal variable to the value from Azure portal
+# Set principal variable to the value from Managed identities page of API Management instance in Azure portal
principal="<MY-APIM-MANAGED-ID-PRINCIPAL-ID>" # Get the scope value of Cosmos DB account
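If you'd rather look up the principal ID with the CLI than copy it from the portal, one option is `az apim show`; the resource names are placeholders:

```shell
# Principal ID of the API Management instance's system-assigned managed identity
principal=$(az apim show \
  --resource-group <MY-RESOURCE-GROUP> \
  --name <MY-APIM-INSTANCE> \
  --query identity.principalId \
  --output tsv)
```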
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
# Configure a Java app for Azure App Service
+> [!NOTE]
+> For Spring applications, we recommend using Azure Spring Apps. However, you can still use Azure App Service as a destination.
+ Azure App Service lets Java developers quickly build, deploy, and scale their Java SE, Tomcat, and JBoss EAP web applications on a fully managed service. Deploy applications with Maven plugins, from the command line, or in editors like IntelliJ, Eclipse, or Visual Studio Code. This guide provides key concepts and instructions for Java developers using App Service. If you've never used Azure App Service, you should read through the [Java quickstart](quickstart-java.md) first. General questions about using App Service that aren't specific to Java development are answered in the [App Service FAQ](faq-configuration-and-management.yml).
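For instance, a Maven project can be configured and deployed with the Azure Web App Maven plugin; the plugin version shown is an assumption and may differ for your project:

```shell
# Generate App Service configuration in pom.xml (interactive on first run)
mvn com.microsoft.azure:azure-webapp-maven-plugin:2.13.0:config

# Build and deploy to App Service
mvn package azure-webapp:deploy
```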
app-service App Service App Service Environment Control Inbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-control-inbound-traffic.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service App Service App Service Environment Create Ilb Ase Resourcemanager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-create-ilb-ase-resourcemanager.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service App Service App Service Environment Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-intro.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service App Service App Service Environment Layered Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-layered-security.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Since App Service Environments provide an isolated runtime environment deployed into a virtual network, developers can create a layered security architecture providing differing levels of network access for each physical application tier.
app-service App Service App Service Environment Network Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-architecture-overview.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> App Service Environments are always created within a subnet of a [virtual network][virtualnetwork] - apps running in an App Service Environment can communicate with private endpoints located within the same virtual network topology. Since customers may lock down parts of their virtual network infrastructure, it is important to understand the types of network communication flows that occur with an App Service Environment.
app-service App Service App Service Environment Network Configuration Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-configuration-expressroute.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Customers can connect an [Azure ExpressRoute][ExpressRoute] circuit to their virtual network infrastructure to extend their on-premises network to Azure. App Service Environment is created in a subnet of the [virtual network][virtualnetwork] infrastructure. Apps that run on App Service Environment establish secure connections to back-end resources that are accessible only over the ExpressRoute connection.
app-service App Service App Service Environment Securely Connecting To Backend Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-securely-connecting-to-backend-resources.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Since an App Service Environment is always created in **either** an Azure Resource Manager virtual network, **or** a classic deployment model [virtual network][virtualnetwork], outbound connections from an App Service Environment to other backend resources can flow exclusively over the virtual network. As of June 2016, ASEs can also be deployed into virtual networks that use either public address ranges or RFC1918 address spaces (private addresses).
app-service App Service Environment Auto Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-environment-auto-scale.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Azure App Service environments support *autoscaling*. You can autoscale individual worker pools based on metrics or schedule.
app-service App Service Web Configure An App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-configure-an-app-service-environment.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service App Service Web Scale A Web App In An App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-scale-a-web-app-in-an-app-service-environment.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> In the Azure App Service there are normally three things you can scale:
app-service Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/certificates.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The App Service Environment(ASE) is a deployment of the Azure App Service that runs within your Azure Virtual Network(VNet). It can be deployed with an internet accessible application endpoint or an application endpoint that is in your VNet. If you deploy the ASE with an internet accessible endpoint, that deployment is called an External ASE. If you deploy the ASE with an endpoint in your VNet, that deployment is called an ILB ASE. You can learn more about the ILB ASE from the [Create and use an ILB ASE](./create-ilb-ase.md) document.
app-service Create External Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-external-ase.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
app-service Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-from-template.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Azure App Service environments (ASEs) can be created with an internet-accessible endpoint or an endpoint on an internal address in an Azure Virtual Network. When created with an internal endpoint, that endpoint is provided by an Azure component called an internal load balancer (ILB). The ASE on an internal IP address is called an ILB ASE. The ASE with a public endpoint is called an External ASE.
app-service Create Ilb Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-ilb-ase.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/firewall-integration.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The App Service Environment (ASE) has many external dependencies that it requires access to in order to function properly. The ASE lives in the customer Azure Virtual Network. Customers must allow the ASE dependency traffic, which is a problem for customers that want to lock down all egress from their virtual network.
app-service Forced Tunnel Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/forced-tunnel-support.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The App Service Environment (ASE) is a deployment of Azure App Service in a customer's Azure Virtual Network. Many customers configure their Azure virtual networks to be extensions of their on-premises networks with VPNs or Azure ExpressRoute connections. Forced tunneling is when you redirect internet bound traffic to your VPN or a virtual appliance instead. Virtual appliances are often used to inspect and audit outbound network traffic.
app-service How To Custom Domain Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-custom-domain-suffix.md
Unlike earlier versions, the FTPS endpoints for your App Services on your App Se
## Prerequisites

- ILB variation of App Service Environment v3.
+- The Azure Key Vault that has the certificate must be publicly accessible to fetch the certificate.
- Valid SSL/TLS certificate must be stored in an Azure Key Vault in .PFX format. For more information on using certificates with App Service, see [Add a TLS/SSL certificate in Azure App Service](../configure-ssl-certificate.md).

### Managed identity
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
Under **Get new IP addresses**, confirm you understand the implications and star
## 3. Update dependent resources with new IPs
-When the previous step finishes, you're shown the IP addresses for your new App Service Environment v3. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer, which now uses port 80. Don't move on to the next step until you confirmed that you made these updates.
+When the previous step finishes, you're shown the IP addresses for your new App Service Environment v3. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer, which now uses port 80. Don't move on to the next step until you confirm that you made these updates.
:::image type="content" source="./media/migration/ip-sample.png" alt-text="Screenshot that shows sample IPs generated during premigration.":::
app-service Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/intro.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service Management Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/management-addresses.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> [!INCLUDE [azure-CLI-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 12/14/2023 Last updated : 01/30/2024
App Service can now automate migration of your App Service Environment v1 and v2
At this time, the migration feature doesn't support migrations to App Service Environment v3 in the following regions:
-### Azure Public
-- Jio India West

### Microsoft Azure operated by 21Vianet
- China East 2
The following App Service Environment configurations can be migrated using the m
|ILB App Service Environment v1 |ILB App Service Environment v3 |
|ELB App Service Environment v1 |ELB App Service Environment v3 |
|ILB App Service Environment v1 with a custom domain suffix |ILB App Service Environment v3 with a custom domain suffix |
+|[Zone pinned](zone-redundancy.md) App Service Environment v2 |App Service Environment v3 with optional zone redundancy configuration |
If you want your new App Service Environment v3 to use a custom domain suffix and you aren't using one currently, custom domain suffix can be configured at any time once migration is complete. For more information, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md).
The migration feature doesn't support the following scenarios. See the [manual m
- App Service Environment v1 in a [Classic VNet](/previous-versions/azure/virtual-network/create-virtual-network-classic)
- ELB App Service Environment v2 with IP SSL addresses
- ELB App Service Environment v1 with IP SSL addresses
-- [Zone pinned](zone-redundancy.md) App Service Environment v2
- App Service Environment in a region not listed in the supported regions

The App Service platform reviews your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates.
If your App Service Environment doesn't pass the validation checks or you try to
|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic VNets can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
|ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to be available in your region. |
|Migration cannot be called on this ASE, please contact support for help migrating. |Support needs to be engaged for migrating this App Service Environment. This issue is potentially due to custom settings used by this environment. |Engage support to resolve your issue. |
-|Migrate cannot be called on Zone Pinned ASEs. |App Service Environment v2 that is zone pinned can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
|Migrate cannot be called if IP SSL is enabled on any of the sites.|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
|Full migration cannot be called before IP addresses are generated. |This error appears if you attempt to migrate before finishing the premigration steps. |Ensure you complete all premigration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-migrate.md). |
|Migration to ASEv3 is not allowed for this ASE. |You can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
For more scenarios on cost changes and savings opportunities with App Service En
- **What if my App Service Environment has a custom domain suffix?** The migration feature supports this [migration scenario](#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after.
- **What if my App Service Environment is zone pinned?**
- Zone pinned App Service Environment is currently not a supported scenario for migration using the migration feature. App Service Environment v3 doesn't support zone pinning. To migrate to App Service Environment v3, see the [manual migration options](migration-alternatives.md).
+ Zone pinned App Service Environment v2 is now a supported scenario for migration using the migration feature. App Service Environment v3 doesn't support zone pinning. When migrating to App Service Environment v3, you can choose to configure zone redundancy or not.
- **What if my App Service Environment has IP SSL addresses?** IP SSL isn't supported on App Service Environment v3. You must remove all IP SSL bindings before migrating using the migration feature or one of the manual options. If you intend to use the migration feature, once you remove all IP SSL bindings, you pass that validation check and can proceed with the automated migration.
- **What properties of my App Service Environment will change?**
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 07/24/2023 Last updated : 01/30/2024

# Migrate to App Service Environment v3
Once your migration and any testing with your new environment is complete, delet
No, apps that run on App Service Environment v1 and v2 shouldn't need any modifications to run on App Service Environment v3. If you're using IP SSL, you must remove the IP SSL bindings before migrating.
- **What if my App Service Environment has a custom domain suffix?** The migration feature supports this [migration scenario](./migrate.md#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after.
-- **What if my App Service Environment is zone pinned?**
- Zone pinning isn't a supported feature on App Service Environment v3.
+- **What if my App Service Environment v2 is zone pinned?**
+ Zone pinning isn't a supported feature on App Service Environment v3. You can choose to enable zone redundancy when creating your App Service Environment v3.
- **What properties of my App Service Environment will change?** You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
- **Is backup and restore supported for moving apps from App Service Environment v2 to v3?**
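The zone redundancy choice discussed in the FAQ entries above is made when the new App Service Environment v3 is created. A minimal Bicep sketch of an ILB App Service Environment v3 with zone redundancy enabled follows; the resource name and subnet ID are illustrative placeholders, not values from this changelog:

```bicep
// Sketch only: ILB App Service Environment v3 with zone redundancy.
// zoneRedundant must be set at creation time; it can't be changed afterwards.
resource ase 'Microsoft.Web/hostingEnvironments@2022-03-01' = {
  name: 'my-asev3' // placeholder name
  location: resourceGroup().location
  kind: 'ASEV3'
  properties: {
    internalLoadBalancingMode: 'Web, Publishing' // ILB variation
    virtualNetwork: {
      // placeholder subnet reference; use your delegated subnet's resource ID
      id: '/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>'
    }
    zoneRedundant: true
  }
}
```

For an external (internet-facing) environment, `internalLoadBalancingMode` would be `'None'` instead.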
app-service Network Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/network-info.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> [App Service Environment][Intro] is a deployment of Azure App Service into a subnet in your Azure virtual network. There are two deployment types for an App Service Environment:
app-service Upgrade To Asev3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md
description: Take the first steps toward upgrading to App Service Environment v3
Previously updated : 12/11/2023 Last updated : 1/31/2024

# Upgrade to App Service Environment v3
This page is your one-stop shop for guidance and resources to help you upgrade s
|**2**|**Migrate**|Based on results of your review, either upgrade using the migration feature or follow the manual steps.<br><br>- [Use the automated migration feature](how-to-migrate.md)<br>- [Migrate manually](migration-alternatives.md)|
|**3**|**Testing and troubleshooting**|Upgrading using the automated migration feature requires a 3-6 hour service window. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).|
|**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Explore reserved instance pricing, savings plans, and check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [How reservation discounts apply to Isolated v2 instances](../../cost-management-billing/reservations/reservation-discount-app-service.md#how-reservation-discounts-apply-to-isolated-v2-instances)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)|
-|**5**|**Learn more**|Join the [free live webinar](https://developer.microsoft.com/en-us/reactor/events/20417) with FastTrack Architects.<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)|
+|**5**|**Learn more**|On-demand: [Learn Live webinar with Azure FastTrack Architects](https://www.youtube.com/watch?v=lI9TK_v-dkg&ab_channel=MicrosoftDeveloper).<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)|
## Additional information
app-service Using An Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using-an-ase.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> An App Service Environment (ASE) is a deployment of Azure App Service into a subnet in a customer's Azure Virtual Network instance. An ASE consists of:
app-service Version Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md
App Service Environment has three versions. App Service Environment v3 is the la
> App Service Environment v1 and v2 [will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). After that date, those versions will no longer be supported and any remaining App Service Environment v1 and v2s and the applications running on them will be deleted. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1 or v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 or v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 or v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Comparison between versions
app-service Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/zone-redundancy.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> App Service Environment v2 (ASE) can be deployed into Availability Zones (AZ). Customers can deploy an internal load balancer (ILB) ASEs into a specific AZ within an Azure region. If you pin your ILB ASE to a specific AZ, the resources used by a ILB ASE will either be pinned to the specified AZ, or deployed in a zone redundant manner.
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
# Tutorial: Build a Java Spring Boot web app with Azure App Service on Linux and Azure Cosmos DB
+> [!NOTE]
+> For Spring applications, we recommend using Azure Spring Apps. However, you can still use Azure App Service as a destination.
+ This tutorial walks you through the process of building, configuring, deploying, and scaling Java web apps on Azure. When you are finished, you will have a [Spring Boot](https://spring.io/projects/spring-boot) application storing data in [Azure Cosmos DB](../cosmos-db/index.yml) running on [Azure App Service on Linux](overview.md).
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
automation Quickstart Cli Support Powershell Runbook Runtime Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstart-cli-support-powershell-runbook-runtime-environment.md
description: This article shows how to add support for Azure CLI in PowerShell 7
Last updated 01/17/2024 -+ # Run Azure CLI commands in PowerShell 7.2 runbooks
automation Runtime Environment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runtime-environment-overview.md
Last updated 01/24/2024 -+ # Runtime environment in Azure Automation
While the new Runtime environment experience is recommended, you can also switch
* To work with runbooks and Runtime environment, see [Manage Runtime environment](manage-runtime-environment.md). * For details of PowerShell, see [PowerShell Docs](/powershell/scripting/overview).-
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-app-configuration Quickstart Feature Flag Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet.md
Last updated 3/20/2023
#Customer intent: As a .NET Framework developer, I want to use feature flags to control feature availability quickly and confidently.
-# Quickstart: Add feature flags to a .NET Framework app
+# Quickstart: Add feature flags to a .NET Framework console app
In this quickstart, you incorporate Azure App Configuration into a .NET Framework app to create an end-to-end implementation of feature management. You can use the App Configuration service to centrally store all your feature flags and control their states.
Add a feature flag called *Beta* to the App Configuration store and leave **Labe
> [!div class="mx-imgBorder"] > ![Enable feature flag named Beta](media/add-beta-feature-flag.png)
-## Create a .NET console app
+## Create a .NET Framework console app
1. Start Visual Studio, and select **File** > **New** > **Project**.
Add a feature flag called *Beta* to the App Configuration store and leave **Labe
1. Right-click your project, and select **Manage NuGet Packages**. On the **Browse** tab, search and add the following NuGet packages to your project. ```
- Microsoft.Extensions.DependencyInjection
Microsoft.Extensions.Configuration.AzureAppConfiguration Microsoft.FeatureManagement ```
Add a feature flag called *Beta* to the App Configuration store and leave **Labe
1. Open *Program.cs* and add the following statements: ```csharp
- using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration; using Microsoft.Extensions.Configuration.AzureAppConfiguration; using Microsoft.FeatureManagement; using System.Threading.Tasks; ```
-1. Update the `Main` method to connect to App Configuration, specifying the `UseFeatureFlags` option so that feature flags are retrieved. Then display a message if the `Beta` feature flag is enabled.
+1. Update the `Main` method to connect to App Configuration, specifying the `UseFeatureFlags` option so that feature flags are retrieved. Create a `ConfigurationFeatureDefinitionProvider` to provide feature flag definitions from the configuration and a `FeatureManager` to evaluate feature flags' state. Then display a message if the `Beta` feature flag is enabled.
```csharp public static async Task Main(string[] args) {
- IConfigurationRoot configuration = new ConfigurationBuilder()
+ IConfiguration configuration = new ConfigurationBuilder()
.AddAzureAppConfiguration(options => { options.Connect(Environment.GetEnvironmentVariable("ConnectionString")) .UseFeatureFlags(); }).Build();
- IServiceCollection services = new ServiceCollection();
+ IFeatureDefinitionProvider featureDefinitionProvider = new ConfigurationFeatureDefinitionProvider(configuration);
- services.AddSingleton<IConfiguration>(configuration).AddFeatureManagement();
+ IFeatureManager featureManager = new FeatureManager(
+ featureDefinitionProvider,
+ new FeatureManagementOptions());
- using (ServiceProvider serviceProvider = services.BuildServiceProvider())
+ if (await featureManager.IsEnabledAsync("Beta"))
{
- IFeatureManager featureManager = serviceProvider.GetRequiredService<IFeatureManager>();
-
- if (await featureManager.IsEnabledAsync("Beta"))
- {
- Console.WriteLine("Welcome to the beta!");
- }
+ Console.WriteLine("Welcome to the beta!");
} Console.WriteLine("Hello World!");
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024 #
azure-arc Onboard Dsc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-dsc.md
Using [Windows PowerShell Desired State Configuration](/powershell/dsc/getting-s
- Windows PowerShell version 4.0 or higher -- The [AzureConnectedMachineDsc](https://www.powershellgallery.com/packages/AzureConnectedMachineDsc) DSC module
+- The AzureConnectedMachineDsc module
- A service principal to connect the machines to Azure Arc-enabled servers non-interactively. Follow the steps under the section [Create a Service Principal for onboarding at scale](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) if you have not already created a service principal for Azure Arc-enabled servers.
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
Complete the following tutorial to create a new function app a secured storage a
# [Deployment templates](#tab/templates)
-Use Bicep or Azure Resource Manager (ARM) [quickstart templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints) to create secured function app and storage account resources.
+Use Bicep files or Azure Resource Manager (ARM) templates to create a secured function app and storage account resources. When you create a secured storage account in an automated deployment, you must also specifically set the `WEBSITE_CONTENTSHARE` setting and create the file share as part of your deployment. For more information, including links to example deployments, see [Secured deployments](functions-infrastructure-as-code.md#secured-deployments).
azure-functions Create First Function Cli Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-java.md
Before you begin, you must have the following:
+ The [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
-+ The [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 or 11. The `JAVA_HOME` environment variable must be set to the install location of the correct version of the JDK.
++ The [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8, 11, 17, or 21 (Linux only). The `JAVA_HOME` environment variable must be set to the install location of the correct version of the JDK. + [Apache Maven](https://maven.apache.org), version 3.0 or above.
azure-functions Create First Function Vs Code Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-java.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
|Prompt|Selection| |--|--| |**Select a language**| Choose `Java`.|
- |**Select a version of Java**| Choose `Java 11` or `Java 8`, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally. |
+ |**Select a version of Java**| Choose `Java 8`, `Java 11`, `Java 17`, or `Java 21`, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally. |
| **Provide a group ID** | Choose `com.function`. | | **Provide an artifact ID** | Choose `myFunction`. | | **Provide a version** | Choose `1.0-SNAPSHOT`. |
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The share is created when your function app is created. Changing or removing thi
The following considerations apply when using an Azure Resource Manager (ARM) template or Bicep file to create a function app during deployment: + When you don't set a `WEBSITE_CONTENTSHARE` value for the main function app or any apps in slots, unique share values are generated for you. Not setting `WEBSITE_CONTENTSHARE` _is the recommended approach_ for an ARM template deployment.
-+ There are scenarios where you must set the `WEBSITE_CONTENTSHARE` value to a predefined share, such as when you [use a secured storage account in a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). In this case, you must set a unique share name for the main function app and the app for each deployment slot.
++ There are scenarios where you must set the `WEBSITE_CONTENTSHARE` value to a predefined value, such as when you [use a secured storage account in a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). In this case, you must set a unique share name for the main function app and the app for each deployment slot. In the case of a storage account secured by a virtual network, you must also create the share itself as part of your automated deployment. For more information, see [Secured deployments](functions-infrastructure-as-code.md#secured-deployments). + Don't make `WEBSITE_CONTENTSHARE` a slot setting. + When you specify `WEBSITE_CONTENTSHARE`, the value must follow [this guidance for share names](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#share-names).
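As a sketch of the predefined-share approach, the app settings array in an ARM template or Bicep file might set the value explicitly for the main app (the share name here is hypothetical, and the matching file share must also be created by the deployment):

```json
{
  "name": "WEBSITE_CONTENTSHARE",
  "value": "myfunctionapp-content"
}
```

Each deployment slot would get its own entry with a distinct share name, since `WEBSITE_CONTENTSHARE` must not be a slot setting.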
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
Title: Azure Cosmos DB trigger for Functions 2.x and higher description: Learn to use the Azure Cosmos DB trigger in Azure Functions. Previously updated : 04/04/2023 Last updated : 01/19/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, powershell, python
The following examples depend on the extension version for the given C# mode.
# [Extension 4.x+](#tab/extensionv4/in-process)
-Apps using [Azure Cosmos DB extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) or higher will have different attribute properties, which are shown below. This example refers to a simple `ToDoItem` type.
+Apps using [Azure Cosmos DB extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) or higher have different attribute properties, which are shown here. This example refers to a simple `ToDoItem` type.
```cs namespace CosmosDBSamplesV2
public void Run([CosmosDBTrigger(
The following code defines a `MyDocument` type: An [`IReadOnlyList<T>`](/dotnet/api/system.collections.generic.ireadonlylist-1) is used as the Azure Cosmos DB trigger binding parameter in the following example:
This example requires the following `using` statements:
::: zone-end ::: zone pivot="programming-language-java"
-This function is invoked when there are inserts or updates in the specified database and collection.
+This function is invoked when there are inserts or updates in the specified database and container.
+
+# [Extension 4.x+](#tab/extensionv4)
++
+```java
+ @FunctionName("CosmosDBTriggerFunction")
+ public void run(
+ @CosmosDBTrigger(
+ name = "items",
+ databaseName = "ToDoList",
+ containerName = "Items",
+ leaseContainerName="leases",
+ connection = "AzureCosmosDBConnection",
+ createLeaseContainerIfNotExists = true
+ )
+ Object inputItem,
+ final ExecutionContext context
+ ) {
+        context.getLogger().info("Items modified: " + inputItem);
+ }
+```
# [Functions 2.x+](#tab/functionsv2)
This function is invoked when there are inserts or updates in the specified data
context.getLogger().info(items.length + "item(s) is/are changed."); } ```
-# [Extension 4.x+](#tab/extensionv4)
-
The following example shows an Azure Cosmos DB trigger [TypeScript function](fun
# [Model v3](#tab/nodejs-v3)
-TypeScript samples are not documented for model v3.
+TypeScript samples aren't documented for model v3.
For Python functions defined by using *function.json*, see the [Configuration](#
::: zone pivot="programming-language-java" ## Annotations
+# [Extension 4.x+](#tab/extensionv4)
++
+Use the `@CosmosDBTrigger` annotation on parameters that read data from Azure Cosmos DB. The annotation supports the following properties:
+
+|Attribute property | Description|
+||-|
+|**connection** | The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account being monitored. For more information, see [Connections](#connections).|
+|**name** | The name of the function. |
+|**databaseName** | The name of the Azure Cosmos DB database with the container being monitored. |
+|**containerName** | The name of the container being monitored. |
+|**leaseConnectionStringSetting** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease container. <br><br> When not set, the `Connection` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases container must have write permissions.|
+|**leaseDatabaseName** | (Optional) The name of the database that holds the container used to store leases. When not set, the value of the `databaseName` setting is used. |
+|**leaseContainerName** | (Optional) The name of the container used to store leases. When not set, the value `leases` is used. |
+|**createLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Microsoft Entra identities if you set the value to `true`, creating containers isn't [an allowed operation](../cosmos-db/nosql/troubleshoot-forbidden.md#non-data-operations-are-not-allowed) and your Function won't start.|
+|**leasesContainerThroughput** | (Optional) Defines the number of Request Units to assign when the leases container is created. This setting is only used when `CreateLeaseContainerIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. |
+|**leaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease container for this function. Using a prefix allows two separate Azure Functions to share the same Lease container by using different prefixes. |
+|**feedPollDelay**| (Optional) The time (in milliseconds) for the delay between polling a partition for new changes on the feed, after all current changes are drained. Default is 5,000 milliseconds, or 5 seconds.|
+|**leaseAcquireInterval**| (Optional) When set, it defines, in milliseconds, the interval to kick off a task to compute if partitions are distributed evenly among known host instances. Default is 13000 (13 seconds). |
+|**leaseExpirationInterval**| (Optional) When set, it defines, in milliseconds, the interval for which the lease is taken on a lease representing a partition. If the lease isn't renewed within this interval, it expires and ownership of the partition moves to another instance. Default is 60000 (60 seconds).|
+|**leaseRenewInterval**| (Optional) When set, it defines, in milliseconds, the renew interval for all leases for partitions currently held by an instance. Default is 17000 (17 seconds). |
+|**maxItemsPerInvocation**| (Optional) When set, this property sets the maximum number of items received per Function call. If operations in the monitored container are performed through stored procedures, [transaction scope](../cosmos-db/nosql/stored-procedures-triggers-udfs.md#transactions) is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch. |
+|**startFromBeginning**| (Optional) This option tells the Trigger to read changes from the beginning of the container's change history instead of starting at the current time. Reading from the beginning only works the first time the trigger starts, as in subsequent runs, the checkpoints are already stored. Setting this option to `true` when there are leases already created has no effect. |
+|**preferredLocations**| (Optional) Defines preferred locations (regions) for geo-replicated database accounts in the Azure Cosmos DB service. Values should be comma-separated. For example, "East US,South Central US,North Europe". |
+ # [Functions 2.x+](#tab/functionsv2)
-From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBInput` annotation on parameters that read data from Azure Cosmos DB. The annotation supports the following properties:
+From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBTrigger` annotation on parameters that read data from Azure Cosmos DB. The annotation supports the following properties:
+ [name](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.name) + [connectionStringSetting](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.connectionstringsetting)
From the [Java functions runtime library](/java/api/overview/azure/functions/run
+ [startFromBeginning](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.startfrombeginning) + [preferredLocations](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.preferredlocations)
-# [Extension 4.x+](#tab/extensionv4)
-- ::: zone-end
The following table explains the binding configuration properties that you set i
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python"
-# [Functions 2.x+](#tab/functionsv2)
-- # [Extension 4.x+](#tab/extensionv4) [!INCLUDE [functions-cosmosdb-settings-v4](../../includes/functions-cosmosdb-settings-v4.md)]
+# [Functions 2.x+](#tab/functionsv2)
++ ::: zone-end
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
The following table indicates which triggers support retries and where the retry
### Retry policies
-Starting with version 3.x of the Azure Functions runtime, you can define retry policies for Timer, Kafka, and Event Hubs triggers that are enforced by the Functions runtime.
+Starting with version 3.x of the Azure Functions runtime, you can define retry policies for Timer, Kafka, Event Hubs, and Azure Cosmos DB triggers that are enforced by the Functions runtime.
The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.
-A retry policy is evaluated when a Timer, Kafka, or Event Hubs-triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry.
+A retry policy is evaluated when a Timer, Kafka, Event Hubs, or Azure Cosmos DB-triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry.
> [!IMPORTANT] > Event Hubs checkpoints won't be written until the retry policy for the execution has finished. Because of this behavior, progress on the specific partition is paused until the current batch has finished.
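For languages that define bindings in *function.json*, a fixed-delay retry policy for one of these trigger types can be sketched as a `retry` block alongside the bindings (the values here are illustrative):

```json
{
  "retry": {
    "strategy": "fixedDelay",
    "maxRetryCount": 5,
    "delayInterval": "00:00:10"
  }
}
```

An `exponentialBackoff` strategy uses `minimumInterval` and `maximumInterval` instead of `delayInterval`.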
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
Title: Azure Service Bus output bindings for Azure Functions
description: Learn to send Azure Service Bus messages from Azure Functions. ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Previously updated : 03/06/2023 Last updated : 01/15/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, powershell, python
This example shows a [C# function](dotnet-isolated-process-guide.md) that receiv
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/ServiceBus/ServiceBusReceivedMessageFunctions.cs" id="docsnippet_servicebus_readmessage":::
+&nbsp;
+<hr/>
+
+This example uses an HTTP trigger with an `OutputType` object to both send an HTTP response and write the output message.
+
+```csharp
+[Function("HttpSendMsg")]
+public async Task<OutputType> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req, FunctionContext context)
+{
+ _logger.LogInformation($"C# HTTP trigger function processed a request for {context.InvocationId}.");
+
+ HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
+ await response.WriteStringAsync("HTTP response: Message sent");
+
+ return new OutputType()
+ {
+ OutputEvent = "MyMessage",
+ HttpResponse = response
+ };
+}
+```
+
+This code defines the multiple output type `OutputType`, which includes the Service Bus output binding definition on `OutputEvent`:
+
+```csharp
+ public class OutputType
+{
+ [ServiceBusOutput("TopicOrQueueName", Connection = "ServiceBusConnection")]
+ public string OutputEvent { get; set; }
+
+ public HttpResponseData HttpResponse { get; set; }
+}
+```
+ # [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that sends a Service Bus queue message:
-```cs
+```csharp
[FunctionName("ServiceBusOutput")] [return: ServiceBus("myqueue", Connection = "ServiceBusConnection")] public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log)
public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log)
return input.Text; } ```
+&nbsp;
+<hr/>
+
+Instead of using the return statement to send the message, this HTTP trigger function returns an HTTP response that is different from the output message.
+
+```csharp
+[FunctionName("HttpTrigger1")]
+public static async Task<IActionResult> Run(
+[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
+[ServiceBus("TopicOrQueueName", Connection = "ServiceBusConnection")] IAsyncCollector<string> message, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ await message.AddAsync("MyMessage");
+ await message.AddAsync("MyMessage2");
+
+ string responseMessage = "This HTTP triggered function sent a message to Service Bus.";
+
+ return new OkObjectResult(responseMessage);
+}
+```
+ ::: zone-end
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
Access the queue message via the parameter typed as [QueueMessage](/python/api/a
## <a name="message-metadata"></a>Metadata
-The queue trigger provides several [metadata properties](./functions-bindings-expressions-patterns.md#trigger-metadata). These properties can be used as part of binding expressions in other bindings or as parameters in your code.
+The queue trigger provides several [metadata properties](./functions-bindings-expressions-patterns.md#trigger-metadata). These properties can be used as part of binding expressions in other bindings or as parameters in your code, for language workers that provide access to message metadata.
::: zone pivot="programming-language-csharp"
-The properties are members of the [CloudQueueMessage] class.
+The message metadata properties are members of the [CloudQueueMessage] class.
+The message metadata properties can be accessed from `context.triggerMetadata`.
+The message metadata properties can be accessed from the passed `$TriggerMetadata` parameter.
::: zone-end

|Property|Type|Description|
|--|--|--|
|`QueueTrigger`|`string`|Queue payload (if a valid string). If the queue message payload is a string, `QueueTrigger` has the same value as the variable named by the `name` property in *function.json*.|
-|`DequeueCount`|`int`|The number of times this message has been dequeued.|
+|`DequeueCount`|`long`|The number of times this message has been dequeued.|
|`ExpirationTime`|`DateTimeOffset`|The time that the message expires.|
|`Id`|`string`|Queue message ID.|
|`InsertionTime`|`DateTimeOffset`|The time that the message was added to the queue.|
|`NextVisibleTime`|`DateTimeOffset`|The time that the message will next be visible.|
|`PopReceipt`|`string`|The message's pop receipt.|
+The following message metadata properties can be accessed from the passed binding parameter (`msg` in previous [examples](#example)).
+
+|Property|Description|
+|--|-|
+|`body`| Queue payload as a string.|
+|`dequeue_count`| The number of times this message has been dequeued.|
+|`expiration_time`|The time that the message expires.|
+|`id`| Queue message ID.|
+|`insertion_time`|The time that the message was added to the queue.|
+|`time_next_visible`|The time that the message will next be visible.|
+|`pop_receipt`|The message's pop receipt.|
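As an illustration of using trigger metadata in binding expressions (as mentioned above), here's a hypothetical *function.json* sketch in which a blob output binding names its blob after the queue message `Id`. The queue name, container path, and connection name are placeholders, not values from this article:

```json
{
  "bindings": [
    {
      "name": "msg",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "myqueue-items",
      "connection": "MyStorageConnection"
    },
    {
      "name": "outputBlob",
      "type": "blob",
      "direction": "out",
      "path": "processed/{Id}.json",
      "connection": "MyStorageConnection"
    }
  ]
}
```

Here `{Id}` resolves at runtime to the queue message ID from the trigger metadata, so each processed message produces a uniquely named blob.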
+
+
[!INCLUDE [functions-storage-queue-connections](../../includes/functions-storage-queue-connections.md)]

## Poison messages
To handle poison messages manually, check the [dequeueCount](#message-metadata)
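The manual check can be sketched as follows, assuming the default `maxDequeueCount` of 5 from *host.json*; the constant and function names here are illustrative, not part of any SDK:

```python
# Sketch of a manual poison-message check based on the message's dequeue count.
# MAX_DELIVERY_ATTEMPTS mirrors the maxDequeueCount host.json setting (default 5);
# these names are hypothetical, chosen only for this example.
MAX_DELIVERY_ATTEMPTS = 5

def should_move_to_poison_queue(dequeue_count: int) -> bool:
    """Return True once a message has been delivered the maximum number of times."""
    return dequeue_count >= MAX_DELIVERY_ATTEMPTS
```

A function body would call this with the trigger's dequeue count metadata and route the message to a dedicated poison queue when it returns `True`.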
## Peek lock

+
The peek-lock pattern happens automatically for queue triggers. As messages are dequeued, they are marked as invisible and associated with a 10-minute timeout managed by the Storage service. This timeout can't be changed. When the function starts, it starts processing a message under the following conditions.
azure-functions Functions Create First Java Gradle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-java-gradle.md
This article shows you how to build and publish a Java function project to Azure
To develop functions using Java, you must have the following installed:

-- [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8
+- [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8, 11, 17, or 21 (Java 21 is currently supported in preview on Linux only)
- [Azure CLI]
- [Azure Functions Core Tools](./functions-run-local.md#v2) version 2.6.666 or above
- [Gradle](https://gradle.org/), version 6.8 and above
azure-functions Functions Create Maven Eclipse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-eclipse.md
This article shows you how to create a [serverless](https://azure.microsoft.com/
To develop a functions app with Java and Eclipse, you must have the following installed:

-- [Java Developer Kit](https://www.azul.com/downloads/zulu/), version 8.
+- [Java Developer Kit](https://learn.microsoft.com/java/openjdk/download#openjdk-17), version 8, 11, 17, or 21. (Java 21 is currently supported in preview on Linux only)
- [Apache Maven](https://maven.apache.org), version 3.0 or above.
- [Eclipse](https://www.eclipse.org/downloads/packages/), with Java and Maven support.
- [Azure CLI](/cli/azure)
azure-functions Functions Create Maven Intellij https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-intellij.md
Specifically, this article shows you:
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-- An [Azure supported Java Development Kit (JDK)](/azure/developer/java/fundamentals/java-support-on-azure) for Java, version 8, 11, or 17
+- An [Azure supported Java Development Kit (JDK)](/azure/developer/java/fundamentals/java-support-on-azure), version 8, 11, 17, or 21 (Java 21 is currently supported in preview on Linux only)
- An [IntelliJ IDEA](https://www.jetbrains.com/idea/download/) Ultimate Edition or Community Edition installed
- [Maven 3.5.0+](https://maven.apache.org/download.cgi)
- Latest [Function Core Tools](https://github.com/Azure/azure-functions-core-tools)
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
Keep the following considerations in mind when working with slot deployments:
:::zone pivot="premium-plan,dedicated-plan"

## Secured deployments
-You can create your function app in a deployment where one or more of the resources have been secured by integrating with virtual networks. Virtual network integration for your function app is defined by a `Microsoft.Web/sites/networkConfig` resource. This integration depends on both the referenced function app and virtual network resources. You function app might also depend on other private networking resources, such as private endpoints and routes. For more information, see [Azure Functions networking options](functions-networking-options.md).
+You can create your function app in a deployment where one or more of the resources have been secured by integrating with virtual networks. Virtual network integration for your function app is defined by a `Microsoft.Web/sites/networkConfig` resource. This integration depends on both the referenced function app and virtual network resources. Your function app might also depend on other private networking resources, such as private endpoints and routes. For more information, see [Azure Functions networking options](functions-networking-options.md).
+
+When creating a deployment that uses a secured storage account, you must both explicitly set the `WEBSITE_CONTENTSHARE` setting and create the file share resource named in this setting. Make sure you create a `Microsoft.Storage/storageAccounts/fileServices/shares` resource using the value of `WEBSITE_CONTENTSHARE`, as shown in this example ([ARM template](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-private-endpoints-storage-private-endpoints/azuredeploy.json#L467)|[Bicep file](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-private-endpoints-storage-private-endpoints/main.bicep#L351)).
These projects provide both Bicep and ARM template examples of how to deploy your function apps in a virtual network, including with network access restrictions:
azure-functions Functions Manually Run Non Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-manually-run-non-http.md
Title: Manually run a non HTTP-triggered Azure Functions
description: Use an HTTP request to run a non-HTTP triggered Azure Functions

Previously updated : 11/29/2023
Last updated : 01/15/2024

# Manually run a non HTTP-triggered function
In this example, replace `<APP_NAME>` and `<RESOURCE_GROUP>` with the name of yo
:::image type="content" source="./media/functions-manually-run-non-http/functions-manually-run-non-http-body.png" alt-text="Postman body settings." border="true":::
- The `<TRIGGER_INPUT>` you supply depends on the type of trigger. For services that use JSON payloads, such as Azure Service Bus, the test JSON payload should be escaped and serialized as a string. If you don't want to pass input data to the function, you must still supply an empty dictionary `{}` as the body of the POST request. For more information, see the reference article for the specific non-HTTP trigger.
+ The specific `<TRIGGER_INPUT>` you supply depends on the type of trigger, but it can only be a string, numeric, or boolean value. For services that use JSON payloads, such as Azure Service Bus, the test JSON payload should be escaped and serialized as a string.
+
+ If you don't want to pass input data to the function, you must still supply an empty dictionary `{}` as the body of the POST request. For more information, see the reference article for the specific non-HTTP trigger.
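As a sketch of the escaping rule described above, the test payload is serialized once to a string and then embedded in the request body. The `orderId`/`status` payload is hypothetical, and the `input` field name is assumed from the admin API's request shape:

```python
import json

# Hypothetical JSON payload for a Service Bus-triggered function.
payload = {"orderId": 123, "status": "created"}

# Serialize the payload to a string, then embed that string as the trigger input.
# The inner JSON ends up escaped inside the outer JSON string.
body = json.dumps({"input": json.dumps(payload)})
print(body)
```

Parsing the body back confirms the inner string round-trips to the original payload.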
1. Select **Send**.
In this example, replace `<APP_NAME>` and `<RESOURCE_GROUP>` with the name of yo
:::image type="content" source="./media/functions-manually-run-non-http/azure-portal-functions-master-key-logs.png" alt-text="View the logs to see the master key test results." border="true":::
+The way that you access data sent to the trigger depends on the type of trigger and your function language. For more information, see the reference examples for your [specific trigger](functions-triggers-bindings.md).
+

## Next steps

> [!div class="nextstepaction"]
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md
You can host function apps in several ways:
Use the following resources to quickly get started with Azure Functions networking scenarios. These resources are referenced throughout the article.
-* ARM, Bicep, and Terraform templates:
+* ARM templates, Bicep files, and Terraform templates:
  * [Private HTTP triggered function app](https://github.com/Azure-Samples/function-app-with-private-http-endpoint)
  * [Private Event Hubs triggered function app](https://github.com/Azure-Samples/function-app-with-private-eventhub)
* ARM templates only:
To learn more, see [Virtual network service endpoints](../virtual-network/virtua
To restrict access to a specific subnet, create a restriction rule with a **Virtual Network** type. You can then select the subscription, virtual network, and subnet that you want to allow or deny access to.
-If service endpoints aren't already enabled with Microsoft.Web for the subnet that you selected, they are automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
+If service endpoints aren't already enabled with `Microsoft.Web` for the subnet that you selected, they're automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app is configured for service endpoints in anticipation of having them enabled later on the subnet.
Currently, you can use non-HTTP trigger functions from within a virtual network
### Premium plan with virtual network triggers
-When you run a Premium plan, you can connect non-HTTP trigger functions to services that run inside a virtual network. To do this, you must enable virtual network trigger support for your function app. The **Runtime Scale Monitoring** setting is found in the [Azure portal](https://portal.azure.com) under **Configuration** > **Function runtime settings**.
+The [Premium plan](functions-premium-plan.md) lets you create functions that are triggered by services inside a virtual network. These non-HTTP triggers are known as _virtual network triggers_.
+By default, virtual network triggers don't cause your function app to scale beyond their pre-warmed instance count. However, certain extensions support virtual network triggers that cause your function app to scale dynamically. You can enable this _dynamic scale monitoring_ in your function app for supported extensions in one of these ways:
+
+#### [Azure portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your function app.
-### [Azure CLI](#tab/azure-cli)
+1. Under **Settings**, select **Configuration**. Then, in the **Function runtime settings** tab, set **Runtime Scale Monitoring** to **On**.
-You can also enable virtual network triggers by using the following Azure CLI command:
+1. Select **Save** to update the function app configuration and restart the app.
++
+#### [Azure CLI](#tab/azure-cli)
```azurecli-interactive
az resource update -g <resource_group> -n <function_app_name>/config/web --set properties.functionsRuntimeScaleMonitoringEnabled=1 --resource-type Microsoft.Web/sites
```
-### [Azure PowerShell](#tab/azure-powershell)
-
-You can also enable virtual network triggers by using the following Azure PowerShell command:
+#### [Azure PowerShell](#tab/azure-powershell)
```azurepowershell-interactive
$Resource = Get-AzResource -ResourceGroupName <resource_group> -ResourceName <function_app_name>/config/web -ResourceType Microsoft.Web/sites
$Resource.Properties.functionsRuntimeScaleMonitoringEnabled = $true
$Resource | Set-AzResource -Force
```
> [!TIP]
-> Enabling virtual network triggers may have an impact on the performance of your application since your App Service plan instances will need to monitor your triggers to determine when to scale. This impact is likely to be very small.
+> Enabling the monitoring of virtual network triggers may have an impact on the performance of your application, though this impact is likely to be very small.
+
+Support for dynamic scale monitoring of virtual network triggers isn't available in version 1.x of the Functions runtime.
+
+The extensions in this table support dynamic scale monitoring of virtual network triggers. To get the best scaling performance, you should upgrade to versions that also support [target-based scaling](functions-target-based-scaling.md#premium-plan-with-runtime-scale-monitoring-enabled).
-Virtual network triggers are supported in version 2.x and above of the Functions runtime. The following non-HTTP trigger types are supported.
+| Extension (minimum version) | Runtime scale monitoring only | With [target-based scaling](functions-target-based-scaling.md#premium-plan-with-runtime-scale-monitoring-enabled) |
+|--|--|--|
+|[Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB)| > 3.0.5 | > 4.1.0 |
+|[Microsoft.Azure.WebJobs.Extensions.DurableTask](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask)| > 2.0.0 | n/a |
+|[Microsoft.Azure.WebJobs.Extensions.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventHubs)| > 4.1.0 | > 5.2.0 |
+|[Microsoft.Azure.WebJobs.Extensions.ServiceBus](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus)| > 3.2.0 | > 5.9.0 |
+|[Microsoft.Azure.WebJobs.Extensions.Storage](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/) | > 3.0.10 | > 5.1.0<sup>*</sup> |
-| Extension | Minimum version |
-|--||
-|[Microsoft.Azure.WebJobs.Extensions.Storage](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/) | 3.0.10 or above |
-|[Microsoft.Azure.WebJobs.Extensions.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventHubs)| 4.1.0 or above|
-|[Microsoft.Azure.WebJobs.Extensions.ServiceBus](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus)| 3.2.0 or above|
-|[Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB)| 3.0.5 or above|
-|[Microsoft.Azure.WebJobs.Extensions.DurableTask](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask)| 2.0.0 or above|
+<sup>*</sup> Queue storage only.
> [!IMPORTANT]
-> When you enable virtual network trigger support, only the trigger types shown in the previous table scale dynamically with your application. You can still use triggers that aren't in the table, but they're not scaled beyond their pre-warmed instance count. For the complete list of triggers, see [Triggers and bindings](./functions-triggers-bindings.md#supported-bindings).
+> When you enable virtual network trigger monitoring, only triggers for these extensions can cause your app to scale dynamically. You can still use triggers from extensions that aren't in this table, but they won't cause scaling beyond their pre-warmed instance count. For a complete list of all trigger and binding extensions, see [Triggers and bindings](./functions-triggers-bindings.md#supported-bindings).
### App Service plan and App Service Environment with virtual network triggers
azure-functions Functions Node Upgrade V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md
Last updated 03/15/2023
ms.devlang: javascript
# ms.devlang: javascript, typescript
zone_pivot_groups: programming-languages-set-functions-nodejs
The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. Thi
## Troubleshoot
-See the [Node.js Troubleshoot guide](./functions-node-troubleshoot.md).
+See the [Node.js Troubleshoot guide](./functions-node-troubleshoot.md).
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md
The following table shows current supported Java versions for each major version
| Functions version | Java versions (Windows) | Java versions (Linux) |
| -- | -- | -- |
-| 4.x |17 <br/>11 <br/>8 |17 <br/>11 <br/>8 |
+| 4.x | 17 <br/>11 <br/>8 | 21 (Preview) <br/>17 <br/>11 <br/>8 |
| 3.x | 11 <br/>8 | 11 <br/>8 |
| 2.x | 8 | n/a |
Unless you specify a Java version for your deployment, the Maven archetype defau
### Specify the deployment version
-You can control the version of Java targeted by the Maven archetype by using the `-DjavaVersion` parameter. The value of this parameter can be either `8` or `11`.
+You can control the version of Java targeted by the Maven archetype by using the `-DjavaVersion` parameter. The value of this parameter can be `8`, `11`, `17`, or `21`.
The Maven archetype generates a pom.xml that targets the specified Java version. The following elements in pom.xml indicate the Java version to use:
-| Element | Java 8 value | Java 11 value | Java 17 value | Description |
-| - | - | - | - | |
-| **`Java.version`** | 1.8 | 11 | 17 | Version of Java used by the maven-compiler-plugin. |
-| **`JavaVersion`** | 8 | 11 | 17 | Java version hosted by the function app in Azure. |
+| Element | Java 8 value | Java 11 value | Java 17 value | Java 21 value (Preview, Linux) | Description |
+| - | - | - | - | - | - |
+| **`Java.version`** | 1.8 | 11 | 17 | 21 | Version of Java used by the maven-compiler-plugin. |
+| **`JavaVersion`** | 8 | 11 | 17 | 21 | Java version hosted by the function app in Azure. |
The following examples show the settings for Java 8 in the relevant sections of the pom.xml file:
The following example shows the operating system setting in the `runtime` sectio
## JDK runtime availability and support
-Microsoft and [Adoptium](https://adoptium.net/) builds of OpenJDK are provided and supported on Functions for Java 8 (Adoptium), 11 (MSFT) and 17(MSFT). These binaries are provided as a no-cost, multi-platform, production-ready distribution of the OpenJDK for Azure. They contain all the components for building and running Java SE applications.
+Microsoft and [Adoptium](https://adoptium.net/) builds of OpenJDK are provided and supported on Functions for Java 8 (Adoptium) and Java 11, 17, and 21 (Microsoft). These binaries are provided as a no-cost, multi-platform, production-ready distribution of the OpenJDK for Azure. They contain all the components for building and running Java SE applications.
For local development or testing, you can download the [Microsoft build of OpenJDK](/java/openjdk/download) or [Adoptium Temurin](https://adoptium.net/?variant=openjdk8&jvmVariant=hotspot) binaries for free. [Azure support](https://azure.microsoft.com/support/) for issues with the JDKs and function apps is available with a [qualified support plan](https://azure.microsoft.com/support/plans/).
azure-functions Functions Target Based Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md
To learn more, see the [example configurations for the supported extensions](#su
## Premium plan with runtime scale monitoring enabled
-In [runtime scale monitoring](functions-networking-options.md?tabs=azure-cli#premium-plan-with-virtual-network-triggers), the extensions handle target-based scaling. Hence, in addition to the function app runtime version requirement, your extension packages must meet the following minimum versions:
+When [runtime scale monitoring](functions-networking-options.md#premium-plan-with-virtual-network-triggers) is enabled, the extensions themselves handle dynamic scaling. This is because the [scale controller](event-driven-scaling.md#runtime-scaling) doesn't have access to services secured by a virtual network. After you enable runtime scale monitoring, you'll need to upgrade your extension packages to these minimum versions to unlock the extra target-based scaling functionality:
| Extension Name | Minimum Version Needed |
| -- | - |
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
To learn more about specific language version support policy timeline, visit the
|--|--|
|C# (in-process model) |[link](./functions-dotnet-class-library.md#supported-versions)|
|C# (isolated worker model) |[link](./dotnet-isolated-process-guide.md#supported-versions)|
+|Java |[link](./update-language-versions.md#update-the-language-version)|
|Node |[link](./functions-reference-node.md#setting-the-node-version)|
|PowerShell |[link](./functions-reference-powershell.md#changing-the-powershell-version)|
|Python |[link](./functions-reference-python.md#python-version)|
azure-functions Migrate Service Bus Version 4 Version 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-service-bus-version-4-version-5.md
Title: Migrate Azure Service Bus extension for Azure Functions to version 5.x
description: This article shows you how to upgrade your existing function apps using the Azure Service Bus extension version 4.x to be able to use version 5.x of the extension.
+
Last updated : 01/12/2024

zone_pivot_groups: programming-languages-set-functions
The Azure Functions Azure Service Bus extension version 5 is built on top of the
## Next steps

- [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md)
-- [Send Azure Service Bus messages from Azure Functions (Output binding)](./functions-bindings-service-bus-output.md)
+- [Send Azure Service Bus messages from Azure Functions (Output binding)](./functions-bindings-service-bus-output.md)
azure-health-insights Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/deploy-portal.md
Once deployment is complete, you can use the Azure portal to navigate to the new
2. Create a new **Resource group**.
3. Add a new Azure AI services account to your Resource group and search for **Health Insights**.
- ![Screenshot of how to create the new Azure AI Health Insights service.](media/create-service.png)
+ [ ![Screenshot of how to create the new Azure AI Health Insights service.](media/create-service.png)](media/create-service.png#lightbox)
Or use this [link](https://portal.azure.com/#create/Microsoft.CognitiveServicesHealthInsights) to create a new Azure AI services account.
Once deployment is complete, you can use the Azure portal to navigate to the new
- **Name**: Enter an Azure AI services account name.
- **Pricing tier**: Select your pricing tier.
- ![Screenshot of how to create new Azure AI services account.](media/create-health-insights.png)
+ [ ![Screenshot of how to create new Azure AI services account.](media/create-health-insights.png)](media/create-health-insights.png#lightbox)
5. Navigate to your newly created service.
- ![Screenshot of the Overview of Azure AI services account.](media/created-health-insights.png)
+ [ ![Screenshot of the Overview of Azure AI services account.](media/created-health-insights.png)](media/created-health-insights.png#lightbox)
## Configure private endpoints
-With private endpoints, the network traffic between the clients on the VNet and the Azure AI services account run over the VNet and a private link on the Microsoft backbone network. This eliminates exposure from the public internet.
+With private endpoints, the network traffic between the clients on the VNet and the Azure AI services account run over the VNet and a private link on the Microsoft backbone network. Using private endpoints as described eliminates exposure from the public internet.
Once the Azure AI services account is successfully created, configure private endpoints from the Networking page under Resource Management.
-![Screenshot of Private Endpoint.](media/private-endpoints.png)
+[ ![Screenshot of Private Endpoint.](media/private-endpoints.png)](media/private-endpoints.png#lightbox)
## Next steps
To get started using Azure AI Health Insights, get started with one of the follo
>[!div class="nextstepaction"]
> [Trial Matcher](trial-matcher/index.yml)

+
+>[!div class="nextstepaction"]
+> [Radiology Insights](radiology-insights/index.yml)
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/get-started.md
Once deployment is complete, you use the Azure portal to navigate to the newly c
## Example request and results
-To send an API request, you need your Azure AI services account endpoint and key. You can also find a full view on the [request parameters here](../request-info.md)
+To send an API request, you need your Azure AI services account endpoint and key. You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/create-job).
-![Screenshot of the Keys and Endpoints for the Onco-Phenotype.](../media/keys-and-endpoints.png)
+[ ![Screenshot of the Keys and Endpoints for the Onco-Phenotype.](../media/keys-and-endpoints.png)](../media/keys-and-endpoints.png#lightbox)
> [!IMPORTANT]
> Prediction is performed upon receipt of the API request and the results will be returned asynchronously. The API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
GET http://{cognitive-services-account-endpoint}/healthinsights/oncophenotype/jo
}
```
-More information on the [response information can be found here](../response-info.md)
+You can also find a full view of the [response parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/get-job).
+ ## Request validation
azure-health-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/overview.md
Azure AI Health Insights is a Cognitive Service that provides prebuilt models th
## Available models
-There are currently two models available in Azure AI Health Insights:
+There are currently three models available in Azure AI Health Insights:
The [Trial Matcher](./trial-matcher/overview.md) model receives patients' data and clinical trials protocols, and provides relevant clinical trials based on eligibility criteria.

The [Onco-Phenotype](./oncophenotype/overview.md) receives clinical records of oncology patients and outputs cancer staging, such as **clinical stage TNM** categories and **pathologic stage TNM categories** as well as **tumor site** and **histology**.
+The [Radiology Insights](./radiology-insights/overview.md) model receives patients' radiology reports and provides quality checks with feedback on errors and mismatches, so that critical findings are surfaced and presented using the full context of the radiology report. In addition, follow-up recommendations and clinical findings with measurements documented by the radiologist are flagged.
## Architecture

![Diagram that shows Azure AI Health Insights architecture.](media/architecture.png)
+ [ ![Diagram that shows Azure AI Health Insights architecture.](media/architecture.png)](media/architecture.png#lightbox)
-Azure AI Health Insights service receives patient data through multiple input channels. This can be unstructured healthcare data, FHIR resources or specific JSON format data. This in combination with the correct model configuration, such as ```includeEvidence```.
-With these input channels and configuration, the service can run the data through several health insights AI models, such as Trial Matcher or Onco-Phenotype.
+Azure AI Health Insights service receives patient data in different modalities, such as unstructured healthcare data, FHIR resources, or specific JSON format data. In addition, the service receives a model configuration, such as the ```includeEvidence``` parameter.
+With this input patient data and configuration, the service can run the data through the selected health insights AI model, such as Trial Matcher, Onco-Phenotype, or Radiology Insights.
## Next steps
Review the following information to learn how to deploy Azure AI Health Insights
> [Onco-Phenotype](oncophenotype/overview.md) >[!div class="nextstepaction"]
-> [Trial Matcher](trial-matcher//overview.md)
+> [Trial Matcher](trial-matcher//overview.md)
+
+>[!div class="nextstepaction"]
+> [Radiology Insights](radiology-insights//overview.md)
azure-health-insights Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/faq.md
+
+ Title: Radiology Insights frequently asked questions
+
+description: Radiology Insights frequently asked questions
+++++ Last updated : 12/12/2023++
+# Radiology Insights Frequently Asked Questions
+
+- Does the Radiology Insights service take into account specific formatting like bold and italic?
+
+ Radiology Insights expects plain text; bolding and other formatting aren't taken into account.
++
+- What happens when you process a document with non-radiology content?
+
+ The Radiology Insights service processes any document as a radiology document.
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/get-started.md
+
+ Title: Use Radiology Insights (Preview)
+
+description: This article describes how to use the Radiology Insights model (Preview)
+++++ Last updated : 12/06/2023++++
+# Quickstart: Use the Radiology Insights (Preview)
+
+This quickstart provides an overview of how to use the Radiology Insights model (Preview).
+
+## Prerequisites
+To use the Radiology Insights (Preview) model, you must have an Azure AI services account created.
+
+If you don't have an Azure AI services account, see [Deploy Azure AI Health Insights using the Azure portal.](../deploy-portal.md)
+
+Once deployment is complete, you use the Azure portal to navigate to the newly created Azure AI services account to see the details, including your Service URL.
+The Service URL to access your service is: https://```YOUR-NAME```.cognitiveservices.azure.com.
+
+## Example request and results
+
+To send an API request, you need your Azure AI services account endpoint and key.
+
+You can find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/create-job).
+
+[![Screenshot of the Keys and Endpoints for the Radiology Insights.](../media/keys-and-endpoints.png)](../media/keys-and-endpoints.png#lightbox)
+
+> [!IMPORTANT]
+> Prediction is performed upon receipt of the API request, and the results are returned asynchronously. The API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+
+## Example request
+
+### Starting with a request that contains a case
+
+You can use the data from this example to test your first request to the Radiology Insights model.
+
+```url
+POST
+https://{cognitive-services-account-endpoint}/health-insights/radiology-insights/jobs?api-version=2023-09-01-preview
+Content-Type: application/json
+Ocp-Apim-Subscription-Key: {cognitive-services-account-key}
+```
+```json
+{
+ "configuration" : {
+ "inferenceOptions" : {
+ "followupRecommendationOptions" : {
+ "includeRecommendationsWithNoSpecifiedModality" : false,
+ "includeRecommendationsInReferences" : false,
+ "provideFocusedSentenceEvidence" : false
+ },
+ "findingOptions" : {
+ "provideFocusedSentenceEvidence" : false
+ }
+ },
+ "inferenceTypes" : [ "lateralityDiscrepancy" ],
+ "locale" : "en-US",
+ "verbose" : false,
+ "includeEvidence" : false
+ },
+ "patients" : [ {
+ "id" : "11111",
+ "info" : {
+ "sex" : "female",
+ "birthDate" : "1986-07-01T21:00:00+00:00",
+ "clinicalInfo" : [ {
+ "resourceType" : "Observation",
+ "status" : "unknown",
+ "code" : {
+ "coding" : [ {
+ "system" : "http://www.nlm.nih.gov/research/umls",
+ "code" : "C0018802",
+ "display" : "MalignantNeoplasms"
+ } ]
+ },
+ "valueBoolean" : "true"
+ } ]
+ },
+ "encounters" : [ {
+ "id" : "encounterid1",
+ "period" : {
+ "start" : "2021-08-28T00:00:00",
+ "end" : "2021-08-28T00:00:00"
+ },
+ "class" : "inpatient"
+ } ],
+ "patientDocuments" : [ {
+ "type" : "note",
+ "clinicalType" : "radiologyReport",
+ "id" : "docid1",
+ "language" : "en",
+ "authors" : [ {
+ "id" : "authorid1",
+ "name" : "authorname1"
+ } ],
+ "specialtyType" : "radiology",
+ "createdDateTime" : "2021-08-28T00:00:00",
+ "administrativeMetadata" : {
+ "orderedProcedures" : [ {
+ "code" : {
+ "coding" : [ {
+ "system" : "https://loinc.org",
+ "code" : "26688-1",
+ "display" : "US BREAST - LEFT LIMITED"
+ } ]
+ },
+ "description" : "US BREAST - LEFT LIMITED"
+ } ],
+ "encounterId" : "encounterid1"
+ },
+ "content" : {
+ "sourceType" : "inline",
+ "value" : "Exam: US LT BREAST TARGETED\r\n\r\nTechnique: Targeted imaging of the right breast is performed.\r\n\r\nFindings:\r\n\r\nTargeted imaging of the left breast is performed from the 6:00 to the 9:00 position. \r\n\r\nAt the 6:00 position, 5 cm from the nipple, there is a 3 x 2 x 4 mm minimally hypoechoic mass with a peripheral calcification. This may correspond to the mammographic finding. No other cystic or solid masses visualized.\r\n"
+ }
+ } ]
+ } ]
+}
+```
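The request above can be submitted with any HTTP client. As a rough sketch (not an official sample), the helper below builds the POST request with Python's standard library; the endpoint, key, and payload values are placeholders to replace with your own:

```python
# Sketch: build the POST request that creates a Radiology Insights job.
# The endpoint, key, and payload here are placeholders; the URL shape
# and headers follow the example above.
import json
import urllib.request

API_VERSION = "2023-09-01-preview"

def build_job_request(endpoint, key, payload):
    """Return a urllib Request for creating a Radiology Insights job."""
    url = (f"{endpoint}/health-insights/radiology-insights/jobs"
           f"?api-version={API_VERSION}")
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": key,
        },
        method="POST",
    )

req = build_job_request("https://YOUR-NAME.cognitiveservices.azure.com",
                        "<your-key>", {"patients": []})
print(req.full_url)
# Actually sending it would be: urllib.request.urlopen(req)
```

The payload passed in would be the full JSON body shown above; the trivial `{"patients": []}` here only demonstrates the plumbing.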
+
+You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/create-job).
+
+### Evaluating a response that contains a case
+
+You get the status of the job by sending a GET request to the Radiology Insights model, adding the job ID from the initial request to the URL.
+
+Example code snippet:
+
+```url
+GET
+https://{cognitive-services-account-endpoint}/health-insights/radiology-insights/jobs/d48b4f4d-939a-446f-a000-002a80aa58dc?api-version=2023-09-01-preview
+```
+
+```json
+{
+ "result": {
+ "patientResults": [
+ {
+ "patientId": "11111",
+ "inferences": [
+ {
+ "kind": "lateralityDiscrepancy",
+ "lateralityIndication": {
+ "coding": [
+ {
+ "system": "*SNOMED",
+ "code": "24028007",
+ "display": "RIGHT (QUALIFIER VALUE)"
+ }
+ ]
+ },
+ "discrepancyType": "orderLateralityMismatch"
+ }
+ ]
+ }
+ ]
+ },
+ "id": "862768cf-0590-4953-966b-1cc0ef8b8256",
+ "createdDateTime": "2023-12-18T12:25:37.8942771Z",
+ "expirationDateTime": "2023-12-18T12:42:17.8942771Z",
+ "lastUpdateDateTime": "2023-12-18T12:25:49.7221986Z",
+ "status": "succeeded"
+}
+```
+You can find a full view of the [response parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/get-job).
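When consuming such a response, a small helper can gate on the job status and collect the inferences per patient. This is a hypothetical sketch (function and variable names are my own), based on the response shape shown above:

```python
# Sketch: pull the inferences out of a job response. The "status",
# "result", and "patientResults" fields follow the example above;
# this is not an exhaustive treatment of the response schema.
def inferences_by_patient(job_response):
    """Map patientId -> list of inference dicts; empty if the job isn't done."""
    if job_response.get("status") != "succeeded":
        return {}
    results = job_response.get("result", {}).get("patientResults", [])
    return {r["patientId"]: r.get("inferences", []) for r in results}

sample = {
    "status": "succeeded",
    "result": {"patientResults": [{
        "patientId": "11111",
        "inferences": [{"kind": "lateralityDiscrepancy",
                        "discrepancyType": "orderLateralityMismatch"}],
    }]},
}
print(inferences_by_patient(sample)["11111"][0]["kind"])  # → lateralityDiscrepancy
```

In practice you would poll the GET endpoint until `status` reaches a terminal value, then hand the parsed JSON to a helper like this.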
++
+## Data limits
+
+| Limit | Value |
+|--|--|
+| Maximum # patients per request | 1 |
+| Maximum # patientDocuments per request | 1 |
+| Maximum # encounters per request | 1 |
+| Maximum # characters per patient | 50,000 for data[i].content.value all combined |
+
+## Request validation
+
+Every request contains required and optional fields that should be provided to the Radiology Insights model. When you're sending data to the model, make sure that you take the following properties into account:
+
+Within a request:
+- patients should be set
+- patients should contain one entry
+- ID in patients entry should be set
+
+Within configuration:
+If set, configuration locale should be one of the following values (case-insensitive):
+- en-CA
+- en-US
+- en-AU
+- en-DE
+- en-IE
+- en-NZ
+- en-GB
++
+Within patients:
+- should contain one patientDocument entry
+- ID in patientDocument should be set
+- if encounters and/or info are used, ID should be set
++
+For the patientDocuments within a patient:
+- createdDateTime (serviceDate) should be set
+- Patient Document language should be EN (case-insensitive)
+- documentType should be set to Note
+- Patient Document clinicalType should be set to radiology report or pathology report
+- Patient Document specialtyType should be radiology or pathology
+- If set, orderedProcedures in administrativeMetadata should contain code -with code and display- and description
+- Document content shouldn't be blank/empty/null
++
+```json
+"patientDocuments" : [ {
+ "type" : "note",
+ "clinicalType" : "radiologyReport",
+ "id" : "docid1",
+ "language" : "en",
+ "authors" : [ {
+ "id" : "authorid1",
+ "name" : "authorname1"
+ } ],
+ "specialtyType" : "radiology",
+ "createdDateTime" : "2021-08-28T00:00:00",
+ "administrativeMetadata" : {
+ "orderedProcedures" : [ {
+ "code" : {
+ "coding" : [ {
+ "system" : "https://loinc.org",
+ "code" : "41806-1",
+ "display" : "CT ABDOMEN"
+ } ]
+ },
+ "description" : "CT ABDOMEN"
+ } ],
+ "encounterId" : "encounterid1"
+ },
+ "content" : {
+ "sourceType" : "inline",
+ "value" : "CT ABDOMEN AND PELVIS\n\nProvided history: \n78 years old Female\nAbnormal weight loss\n\nTechnique: Routine protocol helical CT of the abdomen and pelvis were performed after the injection of intravenous nonionic iodinated contrast. Axial, Sagittal and coronal 2-D reformats were obtained. Oral contrast was also administered.\n\nFindings:\nLimited evaluation of the included lung bases demonstrates no evidence of abnormality. \n\nGallbladder is absent. "
+ }
+ } ]
+```
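Some of these validation rules can be checked client-side before submitting a request. The sketch below is illustrative only and covers just a few of the rules listed above:

```python
# Sketch: pre-validate a request body against a subset of the
# documented rules (patient count, document language, content,
# locale). Not a complete implementation of the validation list.
VALID_LOCALES = {"en-ca", "en-us", "en-au", "en-de", "en-ie", "en-nz", "en-gb"}

def validation_errors(body):
    errors = []
    patients = body.get("patients")
    if not patients or len(patients) != 1:
        errors.append("patients must contain exactly one entry")
    else:
        patient = patients[0]
        if not patient.get("id"):
            errors.append("patient id must be set")
        docs = patient.get("patientDocuments", [])
        if len(docs) != 1:
            errors.append("patients must contain one patientDocument entry")
        for doc in docs:
            if (doc.get("language") or "").lower() != "en":
                errors.append("patientDocument language must be EN")
            if not (doc.get("content", {}).get("value") or "").strip():
                errors.append("document content must not be blank")
    locale = body.get("configuration", {}).get("locale")
    if locale and locale.lower() not in VALID_LOCALES:
        errors.append(f"unsupported locale: {locale}")
    return errors
```

The service performs its own validation either way; a pre-check like this simply surfaces obvious problems before the round trip.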
+++
+## Next steps
+
+To get better insights into the request and responses, you can read more on the following pages:
+
+>[!div class="nextstepaction"]
+> [Model configuration](model-configuration.md)
+
+>[!div class="nextstepaction"]
+> [Inference information](inferences.md)
azure-health-insights Inferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/inferences.md
+
+ Title: Radiology Insight inference information
+
+description: This article provides RI inference information.
+++++ Last updated : 12/12/2023++++
+# Inference information
+
+This document describes the details of all inferences generated when Radiology Insights (RI) is applied to a radiology document.
+
+The Radiology Insights feature of Azure Health Insights uses natural language processing techniques to process unstructured medical radiology documents. It adds several types of inferences that help the user to effectively monitor, understand, and improve financial and clinical outcomes in a radiology workflow context.
+
+The types of inferences currently supported by the system are: AgeMismatch, SexMismatch, LateralityDiscrepancy, CompleteOrderDiscrepancy, LimitedOrderDiscrepancy, Finding, CriticalResult, FollowupRecommendation, RadiologyProcedure, Communication.
++
+## List of inferences in scope of RI
+
+- Age Mismatch
+- Laterality Discrepancy
+- Sex Mismatch
+- Complete Order Discrepancy
+- Limited Order Discrepancy
+- Finding
+- Critical Result
+- Follow-up Recommendation
+- Communication
+- Radiology Procedure
+++
+To interact with the Radiology-Insights model, you can provide several model configuration parameters that modify the outcome of the responses. One of the configurations is "inferenceTypes", which can be used if only part of the Radiology Insights inferences is required. If this list is omitted or empty, the model returns all the inference types.
+
+```json
+"configuration" : {
+ "inferenceOptions" : {
+ "followupRecommendationOptions" : {
+ "includeRecommendationsWithNoSpecifiedModality" : false,
+ "includeRecommendationsInReferences" : false,
+ "provideFocusedSentenceEvidence" : false
+ },
+ "findingOptions" : {
+ "provideFocusedSentenceEvidence" : false
+ }
+ },
+ "inferenceTypes" : [ "finding", "ageMismatch", "lateralityDiscrepancy", "sexMismatch", "completeOrderDiscrepancy", "limitedOrderDiscrepancy", "criticalResult", "followupRecommendation", "followupCommunication", "radiologyProcedure" ],
+ "locale" : "en-US",
+ "verbose" : false,
+ "includeEvidence" : true
+ }
+```
++
+**Age Mismatch**
+
+An age mismatch occurs when the document states an age for the patient that differs from the age calculated from the birthdate in the patient info and the encounter period in the request.
+- kind: RadiologyInsightsInferenceType.AgeMismatch;
+
+<details><summary>Examples request/response json</summary>
+</details>
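For intuition, the age implied by the request can be derived from the birthdate in the patient info and the encounter start date, roughly as follows (a sketch of the concept, not the service's actual implementation):

```python
# Sketch: the age implied by the request, for comparison with an age
# stated in the document text. Not the service's actual logic.
from datetime import date

def age_at_encounter(birth_date, encounter_start):
    years = encounter_start.year - birth_date.year
    # Subtract one year if the birthday hasn't occurred yet that year.
    if (encounter_start.month, encounter_start.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

# Values taken from the quickstart example request.
print(age_at_encounter(date(1986, 7, 1), date(2021, 8, 28)))  # → 35
```

If the document text stated a different age (say, "45-year-old female"), the service would flag an AgeMismatch inference.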
+++
+**Laterality Discrepancy**
+
+A laterality mismatch is mostly flagged when the orderedProcedure is for a body part with a laterality and the text refers to the opposite laterality.
+Example: "x-ray right foot", "left foot is normal"
+- kind: RadiologyInsightsInferenceType.LateralityDiscrepancy
+- LateralityIndication: FHIR.R4.CodeableConcept
+- DiscrepancyType: LateralityDiscrepancyType
+
+There are three possible discrepancy types:
+- "orderLateralityMismatch" means that the laterality in the text conflicts with the one in the order.
+- "textLateralityContradiction" means that there's a body part with left or right in the finding section, and the same body part occurs with the opposite laterality in the impression section.
+- "textLateralityMissing" means that the laterality mentioned in the order never occurs in the text.
++
+The lateralityIndication is a FHIR.R4.CodeableConcept. There are two possible values (SNOMED codes):
+- 24028007: RIGHT (QUALIFIER VALUE)
+- 7771000: LEFT (QUALIFIER VALUE)
+
+The meaning of this field is as follows:
+- For orderLateralityMismatch: concept in the text that the laterality was flagged for.
+- For textLateralityContradiction: concept in the impression section that the laterality was flagged for.
+- For textLateralityMissing, this field isn't filled in.
+
+A mismatch with discrepancy type "textLateralityMissing" has no token extensions.
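A consumer of this inference might reduce the lateralityIndication coding to a simple label. The helper below is hypothetical; the two SNOMED codes follow the response example earlier in this article:

```python
# Sketch: interpret the lateralityIndication of a lateralityDiscrepancy
# inference. The code-to-label map reflects the two SNOMED codes
# documented for this inference.
LATERALITY_BY_CODE = {"24028007": "right", "7771000": "left"}

def laterality_label(inference):
    """Return 'left'/'right', or None (e.g. for textLateralityMissing)."""
    indication = inference.get("lateralityIndication")
    if not indication:
        return None
    for coding in indication.get("coding", []):
        label = LATERALITY_BY_CODE.get(coding.get("code"))
        if label:
            return label
    return None

inf = {"kind": "lateralityDiscrepancy",
       "discrepancyType": "orderLateralityMismatch",
       "lateralityIndication": {"coding": [{"system": "*SNOMED",
                                            "code": "24028007"}]}}
print(laterality_label(inf))  # → right
```

For "textLateralityMissing" the field is absent, so the helper returns None rather than a side.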
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Sex Mismatch**
+This mismatch occurs when the document gives a different sex for the patient than stated in the patient info in the request. If the patient info contains no sex, the mismatch can also be flagged when there's contradictory language about the patient's sex in the text.
+- kind: RadiologyInsightsInferenceType.SexMismatch
+- sexIndication: FHIR.R4.CodeableConcept
+Field "sexIndication" contains one coding with a SNOMED concept: either MALE (FINDING) if the document refers to a male, or FEMALE (FINDING) if the document refers to a female:
+- 248153007: MALE (FINDING)
+- 248152002: FEMALE (FINDING)
++
+<details><summary>Examples request/response json</summary>
+</details>
++++
+**Complete Order Discrepancy**
+CompleteOrderDiscrepancy is created if there's a complete orderedProcedure - meaning that some body parts need to be mentioned in the text, and possibly also measurements for some of them - and not all the body parts or their measurements are in the text.
+- kind: RadiologyInsightsInferenceType.CompleteOrderDiscrepancy
+- orderType: FHIR.R4.CodeableConcept
+- MissingBodyParts: Array FHIR.R4.CodeableConcept
+- missingBodyPartMeasurements: Array FHIR.R4.CodeableConcept
+
+Field "orderType" contains one Coding, with one of the following LOINC codes:
+- 24558-9: US Abdomen
+- 24869-0: US Pelvis
+- 24531-6: US Retroperitoneum
+- 24601-7: US breast
+
+Fields "missingBodyParts" and/or "missingBodyPartMeasurements" contain body parts (RadLex codes) that are missing or whose measurements are missing. The token extensions refer to body parts or measurements that are present (or words that imply them).
+
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+
+**Limited Order Discrepancy**
+
+This inference is created if there's a limited order, meaning that not all body parts and measurements for a corresponding complete order should be in the text.
+- kind: RadiologyInsightsInferenceType.LimitedOrderDiscrepancy
+- orderType: FHIR.R4.CodeableConcept
+- PresentBodyParts: Array FHIR.R4.CodeableConcept
+- PresentBodyPartMeasurements: Array FHIR.R4.CodeableConcept
+
+Field "orderType" contains one Coding, with one of the following LOINC codes:
+- 24558-9: US Abdomen
+- 24869-0: US Pelvis
+- 24531-6: US Retroperitoneum
+- 24601-7: US breast
+
+Fields "presentBodyParts" and/or "presentBodyPartMeasurements" contain body parts (RadLex codes) that are present or whose measurements are present. The token extensions refer to body parts or measurements that are present (or words that imply them).
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Finding**
+
+This inference is created for a medical problem (for example "acute infection of the lungs") or for a characteristic or a nonpathologic finding of a body part (for example "stomach normal").
+- kind: RadiologyInsightsInferenceType.finding
+- finding: FHIR.R4.Observation
+
+Finding: Section and ci_sentence
+Next to the token extensions, there can be an extension with url "section". This extension has an inner extension with a display name that describes the section. The inner extension can also have a LOINC code.
+There can also be an extension with url "ci_sentence". This extension refers to the sentence containing the first token of the clinical indicator (that is, the medical problem), if any. The generation of such a sentence is switchable.
+
+Finding: fields within field "finding"
+List of fields within field "finding", except "component":
+- status: is always set to "unknown"
+- resourceType: is always set to "Observation"
+- interpretation: contains a sublist of the following SNOMED codes:
+- 7147002: NEW (QUALIFIER VALUE)
+- 36692007: KNOWN (QUALIFIER VALUE)
+- 260413007: NONE (QUALIFIER VALUE)
+- 260385009: NEGATIVE (QUALIFIER VALUE)
+- 723506003: RESOLVED (QUALIFIER VALUE)
+- 64957009: UNCERTAIN (QUALIFIER VALUE)
+- 385434005: IMPROBABLE DIAGNOSIS (CONTEXTUAL QUALIFIER) (QUALIFIER VALUE)
+- 60022001: POSSIBLE DIAGNOSIS (CONTEXTUAL QUALIFIER) (QUALIFIER VALUE)
+- 2931005: PROBABLE DIAGNOSIS (CONTEXTUAL QUALIFIER) (QUALIFIER VALUE)
+- 15841000000104: CANNOT BE EXCLUDED (QUALIFIER VALUE)
+- 260905004: CONDITION (ATTRIBUTE)
+- 441889009: DENIED (QUALIFIER VALUE)
+- 722291000000108: HISTORY (QUALIFIER VALUE)
+- 6493001: RECENT (QUALIFIER VALUE)
+- 2667000: ABSENT (QUALIFIER VALUE)
+- 17621005: NORMAL (QUALIFIER VALUE)
+- 263730007: CONTINUAL (QUALIFIER VALUE)
+
+In this list, the string before the colon is the code, and the string after the colon is the display name.
+If the value is "NONE (QUALIFIER VALUE)", the finding is absent, for example, "no sepsis".
+category: if filled, this field contains an array with one element. It contains one of the following SNOMED concepts:
+- 439401001: DIAGNOSIS (OBSERVABLE ENTITY)
+- 404684003: CLINICAL FINDING (FINDING)
+- 162432007: SYMPTOM: GENERALIZED (FINDING)
+- 246501002: TECHNIQUE (ATTRIBUTE)
+- 91722005: PHYSICAL ANATOMICAL ENTITY (BODY STRUCTURE)
+
+code:
+- SNOMED code 404684003: CLINICAL FINDING (FINDING) (meaning that the finding has a clinical indicator)
+or
+- SNOMED code 123037004: BODY STRUCTURE (BODY STRUCTURE) (no clinical indicator).
+
+Finding: field "component"
+Much relevant information is in the components. The component's "code" field contains one CodeableConcept with one SNOMED code.
++
+Component description:
+(some of the components are optional)
+
+Finding: component ΓÇ£subject of informationΓÇ¥
+This component has SNOMED code 131195008: SUBJECT OF INFORMATION (ATTRIBUTE). It also has the "valueCodeableConcept" field filled. The value is a SNOMED code describing the medical problem that the finding pertains to.
+At least one "subject of information" component is present if and only if the "finding.code" field has 404684003: CLINICAL FINDING (FINDING). There can be several "subject of information" components, with different concepts in the "valueCodeableConcept" field.
+
+Finding: component ΓÇ£anatomyΓÇ¥
+Zero or more components with SNOMED code 722871000000108: ANATOMY (QUALIFIER VALUE). This component has field "valueCodeableConcept" filled with a SNOMED or RadLex code. For example, for "lung infection" this component contains a code for the lungs.
+
+Finding: component ΓÇ£regionΓÇ¥
+Zero or more components with SNOMED code 45851105: REGION (ATTRIBUTE). Like anatomy, this component has field "valueCodeableConcept" filled with a SNOMED or RadLex code. Such a concept refers to the body region of the anatomy. For example, if the anatomy is a code for the vagina, the region may be a code for the female reproductive system.
+
+Finding: component ΓÇ£lateralityΓÇ¥
+Zero or more components with code 45651917: LATERALITY (ATTRIBUTE). Each has field "valueCodeableConcept" set to a SNOMED concept pertaining to the laterality of the finding. For example, this component is filled for a finding pertaining to the right arm.
+
+Finding: component ΓÇ£change valuesΓÇ¥
+Zero or more components with code 288533004: CHANGE VALUES (QUALIFIER VALUE). Each has field "valueCodeableConcept" set to a SNOMED concept pertaining to a size change in the finding (for example, a nodule that is growing or decreasing).
+
+Finding: component ΓÇ£percentageΓÇ¥
+At most one component with code 45606679: PERCENT (PROPERTY) (QUALIFIER VALUE). It has field "valueString" set with either a value or a range consisting of a lower and upper value, separated by "-".
+
+Finding: component ΓÇ£severityΓÇ¥
+At most one component with code 272141005: SEVERITIES (QUALIFIER VALUE), indicating how severe the medical problem is. It has field "valueCodeableConcept" set with a SNOMED code from the following list:
+- 255604002: MILD (QUALIFIER VALUE)
+- 6736007: MODERATE (SEVERITY MODIFIER) (QUALIFIER VALUE)
+- 24484000: SEVERE (SEVERITY MODIFIER) (QUALIFIER VALUE)
+- 371923003: MILD TO MODERATE (QUALIFIER VALUE)
+- 371924009: MODERATE TO SEVERE (QUALIFIER VALUE)
+
+Finding: component ΓÇ£chronicityΓÇ¥
+At most one component with code 246452003: CHRONICITY (ATTRIBUTE), indicating whether the medical problem is chronic or acute. It has field "valueCodeableConcept" set with a SNOMED code from the following list:
+- 255363002: SUDDEN (QUALIFIER VALUE)
+- 90734009: CHRONIC (QUALIFIER VALUE)
+- 19939008: SUBACUTE (QUALIFIER VALUE)
+- 255212004: ACUTE-ON-CHRONIC (QUALIFIER VALUE)
+
+Finding: component ΓÇ£causeΓÇ¥
+At most one component with code 135650694: CAUSES OF HARM (QUALIFIER VALUE), indicating the cause of the medical problem. It has field "valueString" set to the strings of one or more tokens from the text, separated by ";;".
+
+Finding: component ΓÇ£qualifier valueΓÇ¥
+Zero or more components with code 362981000: QUALIFIER VALUE (QUALIFIER VALUE). This component refers to a feature of the medical problem.
+Every component has either:
+- Field "valueString" set with token strings from the text, separated by ";;"
+- Or field "valueCodeableConcept" set to a SNOMED code
+- Or no field set (the meaning can then be retrieved from the token extensions; this is rare)
+
+Finding: component ΓÇ£multipleΓÇ¥
+Exactly one component with code 46150521: MULTIPLE (QUALIFIER VALUE). It has field "valueBoolean" set to true or false. This component indicates the difference between, for example, one nodule (multiple is false) and several nodules (multiple is true). This component has no token extensions.
+
+Finding: component ΓÇ£sizeΓÇ¥
+Zero or more components with code 246115007: SIZE (ATTRIBUTE). Even if there's just one size for a finding, there are several components if the size has two or three dimensions, for example, "2.1 x 3.3 cm" or "1.2 x 2.2 x 1.5 cm". There's a size component for every dimension.
+Every component has field "interpretation" set to either SNOMED code 15240007: CURRENT or 9130008: PREVIOUS, depending on whether the size was measured during this visit or in the past.
+Every component has either field "valueQuantity" or "valueRange" set.
+If "valueQuantity" is set, then "valueQuantity.value" is always set. In most cases, "valueQuantity.unit" is set. It's possible that "valueQuantity.comparator" is also set, to either ">", "<", ">=", or "<=". For example, the component is set to "<=" for "the tumor is up to 2 cm".
+If "valueRange" is set, then "valueRange.low" and "valueRange.high" are set to quantities with the same data as described in the previous paragraph. This field contains, for example, "the tumor is between 2.5 cm and 2.6 cm in size".
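Reading the size components back out of a finding could look like the sketch below; the field layout follows the description above, and the helper itself is hypothetical:

```python
# Sketch: collect the current size dimensions of a finding from its
# SIZE components. Range values and comparators are ignored here
# for brevity; only valueQuantity entries are collected.
SIZE_CODE = "246115007"     # SIZE (ATTRIBUTE)
CURRENT_CODE = "15240007"   # CURRENT

def current_size_dimensions(finding):
    """Return [(value, unit), ...] for current-size components."""
    dims = []
    for comp in finding.get("component", []):
        codes = {c.get("code") for c in comp.get("code", {}).get("coding", [])}
        if SIZE_CODE not in codes:
            continue
        interp_codes = {c.get("code")
                        for concept in comp.get("interpretation", [])
                        for c in concept.get("coding", [])}
        quantity = comp.get("valueQuantity")
        if quantity and CURRENT_CODE in interp_codes:
            dims.append((quantity.get("value"), quantity.get("unit")))
    return dims

finding = {"component": [
    {"code": {"coding": [{"code": "246115007"}]},
     "interpretation": [{"coding": [{"code": "15240007"}]}],
     "valueQuantity": {"value": 2.1, "unit": "cm"}},
    {"code": {"coding": [{"code": "246115007"}]},
     "interpretation": [{"coding": [{"code": "15240007"}]}],
     "valueQuantity": {"value": 3.3, "unit": "cm"}},
]}
print(current_size_dimensions(finding))  # → [(2.1, 'cm'), (3.3, 'cm')]
```

A fuller version would also handle "valueRange" and the "comparator" field as described above.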
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Critical Result**
+This inference is made for a new medical problem that requires attention within a specific time frame, possibly urgently.
+- kind: RadiologyInsightsInferenceType.criticalResult
+- result: CriticalResult
+
+Field "result.description" gives a description of the medical problem, for example "MALIGNANCY".
+Field "result.finding", if set, contains the same information as the "finding" field in a finding inference.
+
+Next to token extensions, there can be an extension for a section. This field contains the most specific section that the first token of the critical result is in (or to be precise, the first token that is in a section). This section is in the same format as a section for a finding.
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Follow-up Recommendation**
+
+This inference is created when the text recommends a specific medical procedure or follow-up for the patient.
+- kind: RadiologyInsightsInferenceType.FollowupRecommendation
+- effectiveDateTime: utcDateTime
+- effectivePeriod: FHIR.R4.Period
+- Findings: Array RecommendationFinding
+- isConditional: boolean
+- isOption: boolean
+- isGuideline: boolean
+- isHedging: boolean
+
+recommendedProcedure: ProcedureRecommendation
+- Follow-up recommendation: sentences
+Next to the token extensions, there can be an extension containing sentences. This behavior is switchable.
+- Follow-up recommendation: boolean fields
+"isHedging" means that the recommendation is uncertain, for example, "a follow-up could be done". "isConditional" is for input like "If the patient continues having pain, an MRI should be performed."
+"isOption" is also for conditional input.
+"isGuideline" means that the recommendation is in a general guideline like the following:
+
+BI-RADS CATEGORIES:
+- (0) Incomplete: Needs more imaging evaluation
+- (1) Negative
+- (2) Benign
+- (3) Probably benign - Short interval follow-up suggested
+- (4) Suspicious abnormality - Biopsy should be considered
+- (5) Highly suggestive of malignancy - Appropriate action should be taken.
+- (6) Known biopsy-proven malignancy
+
+- Follow-up recommendation: effectiveDateTime and effectivePeriod
+Field "effectiveDateTime" is set when the procedure needs to be done (recommended) at a specific point in time, for example, "next Wednesday". Field "effectivePeriod" is set if a specific period is mentioned, with a start and end datetime. For example, for "within six months", the start datetime is the date of service and the end datetime is the day six months after that.
+- Follow-up recommendation: findings
+If set, field "findings" contains one or more findings that have to do with the recommendation. For example, a leg scan (procedure) can be recommended because of leg pain (finding).
+Every array element of field "findings" is a RecommendationFinding. Field RecommendationFinding.finding has the same information as a FindingInference.finding field.
+For field "RecommendationFinding.RecommendationFindingStatus", see the OpenAPI specification for the possible values.
+Field "RecommendationFinding.criticalFinding" is set if a critical result is associated with the finding. It then contains the same information as described for a critical result inference.
+- Follow-up recommendation: recommended procedure
+Field "recommendedProcedure" is either a GenericProcedureRecommendation or an ImagingProcedureRecommendation. (Type "procedureRecommendation" is a supertype for these two types.)
+A GenericProcedureRecommendation has the following:
+- Field "kind" has value "genericProcedureRecommendation"
+- Field "description" has either value "MANAGEMENT PROCEDURE (PROCEDURE)" or "CONSULTATION (PROCEDURE)"
+- Field "code" only contains an extension with tokens
+
+An ImagingProcedureRecommendation has the following:
+- Field "kind" has value "imagingProcedureRecommendation"
+- Field "imagingProcedures" contains an array with one element of type ImagingProcedure.
+
+This type has the following fields, the first two of which are always filled:
+- "modality": a CodeableConcept containing at most one coding with a SNOMED code.
+- "anatomy": a CodeableConcept containing at most one coding with a SNOMED code.
+- "laterality": a CodeableConcept containing at most one coding with a SNOMED code.
+- "contrast": not set.
+- "view": not set.
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Follow-up Communication**
+
+This inference is created when findings or test results were communicated to a medical professional.
+- kind: RadiologyInsightsInferenceType.FollowupCommunication
+- dateTime: Array utcDateTime
+- recipient: Array MedicalProfessionalType
+- wasAcknowledged: boolean
+
+Field "wasAcknowledged" is set to true if the communication was verbal (nonverbal communication might not have reached the recipient yet and can't be considered acknowledged). Field "dateTime" is set if the date-time of the communication is known. Field "recipient" is set if the recipient(s) are known. See the OpenAPI spec for its possible values.
+
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Radiology Procedure**
+
+This inference is for the ordered radiology procedure(s).
+- kind: RadiologyInsightsInferenceType.RadiologyProcedure
+- procedureCodes: Array FHIR.R4.CodeableConcept
+- imagingProcedures: Array ImagingProcedure
+- orderedProcedure: OrderedProcedure
+
+Field "imagingProcedures" contains one or more instances of an imaging procedure, as documented for the follow-up recommendations.
+Field "procedureCodes", if set, contains LOINC codes.
+Field "orderedProcedure" contains the description(s) and the code(s) of the ordered procedure(s) as given by the client. The descriptions are in field "orderedProcedure.description", separated by ";;". The codes are in "orderedProcedure.code.coding". In every coding in the array, only field "coding" is set.
+
+
+<details><summary>Examples request/response json</summary>
+</details>
+++
+## Next steps
+
+To get better insights into the request and responses, read more on the following page:
+
+>[!div class="nextstepaction"]
+> [Model configuration](model-configuration.md)
azure-health-insights Model Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/model-configuration.md
+
+ Title: Radiology Insights model configuration
+
+description: This article provides Radiology Insights model configuration information.
+++++ Last updated : 12/06/2023+++
+# Radiology Insights model configuration
+
+To interact with the Radiology Insights model, you can provide several model configuration parameters that modify the outcome of the responses.
+
+> [!IMPORTANT]
+> Model configuration is applied to ALL the patients within a request.
+
+```json
+ "configuration": {
+ "inferenceOptions": {
+ "followupRecommendationOptions": {
+ "includeRecommendationsWithNoSpecifiedModality": false,
+ "includeRecommendationsInReferences": false,
+ "provideFocusedSentenceEvidence": false
+ },
+ "findingOptions": {
+ "provideFocusedSentenceEvidence": false
+ }
+ },
+ "locale": "en-US",
+ "verbose": false,
+ "includeEvidence": true
+ }
+```
+
+## Case finding
+
+Through the model configuration, the API allows you to seek evidence from the provided clinical documents as part of the inferences.
+
+|**Include Evidence** |**Behavior** |
+|--|--|
+|true |Evidence is returned as part of the inferences |
+|false |No evidence is returned |
+
+## Inference Options
+
+**FindingOptions**
+- provideFocusedSentenceEvidence
+  - type: boolean
+  - description: Provide a single focused sentence as evidence for the finding, default is false.
+
+**FollowupRecommendationOptions**
+- includeRecommendationsWithNoSpecifiedModality
+ - type: boolean
+ - description: Include/Exclude follow-up recommendations with no specific radiologic modality, default is false.
++
+- includeRecommendationsInReferences
+ - type: boolean
+ - description: Include/Exclude follow-up recommendations in references to a guideline or article, default is false.
+
+- provideFocusedSentenceEvidence
+ - type: boolean
+ - description: Provide a single focused sentence as evidence for the recommendation, default is false.
+
+When includeEvidence is false, no evidence is returned. This setting overrules includeRecommendationsWithNoSpecifiedModality and provideFocusedSentenceEvidence: no evidence is shown.
+
+When includeEvidence is true, the values of the two other configurations determine whether the evidence of the inference or a single focused sentence is given as evidence.
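The interplay of these flags can be made concrete with a small builder; the defaults mirror the documented ones, and the function name is my own:

```python
# Sketch: assemble a Radiology Insights configuration dict. Defaults
# mirror the documented ones (all evidence-related flags off). With
# include_evidence=False, the focused-sentence options have no visible
# effect, since no evidence is returned at all.
def build_configuration(include_evidence=False,
                        with_no_specified_modality=False,
                        in_references=False,
                        focused_sentence_for_recommendation=False,
                        focused_sentence_for_finding=False,
                        locale="en-US"):
    return {
        "inferenceOptions": {
            "followupRecommendationOptions": {
                "includeRecommendationsWithNoSpecifiedModality": with_no_specified_modality,
                "includeRecommendationsInReferences": in_references,
                "provideFocusedSentenceEvidence": focused_sentence_for_recommendation,
            },
            "findingOptions": {
                "provideFocusedSentenceEvidence": focused_sentence_for_finding,
            },
        },
        "locale": locale,
        "verbose": False,
        "includeEvidence": include_evidence,
    }
```

Remember that whatever configuration you build applies to all patients in the request.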
+
+## Examples
++
+**Example 1**
+
+CDARecommendation_GuidelineFalseUnspecTrueLimited
+
+The includeRecommendationsWithNoSpecifiedModality is true, includeRecommendationsInReferences is false, provideFocusedSentenceEvidence for recommendations is true and includeEvidence is true.
+
+As a result, the model includes evidence for all inferences.
+- The model checks for follow-up recommendations with a specified modality.
+- The model checks for follow-up recommendations with no specific radiologic modality.
+- The model provides a single focused sentence as evidence for the recommendation.
+
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Example 2**
+
+CDARecommendation_GuidelineTrueUnspecFalseLimited
+
+The includeRecommendationsWithNoSpecifiedModality is false, includeRecommendationsInReferences is true, provideFocusedSentenceEvidence for findings is true and includeEvidence is true.
+
+As a result, the model includes evidence for all inferences.
+- The model checks for follow-up recommendations with a specified modality.
+- The model checks for a recommendation in a guideline.
+- The model provides a single focused sentence as evidence for the finding.
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+## Next steps
+
+Refer to the following page for more insight into the requests and responses:
+
+>[!div class="nextstepaction"]
+> [Inference information](inferences.md)
azure-health-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/overview.md
+
+ Title: What is Radiology Insights (Preview)
+
+description: Enable healthcare organizations to process radiology documents and add various inferences.
+++++ Last updated : 12/6/2023++++
+# What is Radiology Insights (Preview)?
+
+Radiology Insights is a model that aims to provide quality checks as feedback on errors and inconsistencies (mismatches).
+The model ensures that critical findings are identified and communicated using the full context of the report. Follow-up recommendations and clinical findings with measurements (sizes) documented by the radiologist are also identified.
+
+> [!IMPORTANT]
+> The Radiology Insights model is a capability provided "AS IS" and "WITH ALL FAULTS". The Radiology Insights model isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of the Radiology Insights model. The customer is responsible for ensuring compliance with those license terms, including any geographic or other applicable restrictions.
+
+## Radiology Insights features
+
+To remain competitive and successful, healthcare organizations and radiology teams must have visibility into trends and outcomes. The focus is on radiology operational excellence and performance and quality.
+The Radiology Insights model extracts valuable information from radiology documents for a radiologist.
+
+**Identifying Mismatches**: A radiologist is provided with possible mismatches. These are identified by the model by comparing what the radiologist has documented in the radiology report and the information that was present in the metadata of the report.
+
+Mismatches can be identified for sex, age and body site laterality. Mismatches identify potential discrepancies between the dictated text and the provided metadata. They also identify potential inconsistencies within the dictated/written text. Inconsistencies are limited to gender, age, laterality and type of imaging.
+
+This enables the radiologist to rectify any potential inconsistencies during reporting. The system isn't aware of the image the radiologist is reporting on.
+
+This model doesn't provide any clinical judgment of the radiologist's interpretation of the image. The radiologist is responsible for the diagnosis and treatment of the patient and the correct documentation thereof.
+
+**Providing Clinical Findings**: The model extracts as structured data two types of clinical findings: critical findings and actionable findings. Only clinical findings that are documented in the report are extracted. Clinical findings produced by the model aren't deduced from pieces of information in the report nor from the image. These findings merely serve as a potential reminder for the radiologist to communicate with the provider.
+
+The model produces two categories of clinical findings, Actionable Finding and Critical Result, and are based on the clinical finding, explicitly stated in the report, and criteria formulated by ACR (American College of Radiology). The model extracts all findings explicitly documented by the radiologist. The extracted findings may be used to alert a radiologist of possible clinical findings that need to be clearly communicated and acted on in a timely fashion by a healthcare professional. Customers may also utilize the extracted findings to populate downstream or related systems (such as EHRs or autoschedule functions).
+
+**Communicating Follow-up Recommendations**: A radiologist uncovers findings for which in some cases a follow-up is recommended. The documented recommendation is extracted and normalized by the model. It can be used for communication to a healthcare professional (physician).
+Follow-up recommendations aren't generated, deduced or proposed. The model merely extracts follow-up recommendation statements documented explicitly by the radiologist. Follow-up recommendations are normalized by coding to SNOMED.
+
+**Reporting Measurements**: A radiologist documents clinical findings with measurements. The model extracts clinically relevant information pertaining to the finding. The model extracts measurements the radiologist explicitly stated in the report.
+
+The model is simply searching for measurements reviewed by the radiologist. This info is extracted from the relevant text-based record and structures. The extracted and structured measurement data may be used to identify trends in measurements for a particular patient over time. Alternatively a customer could search a set of patients based on the measurement data extracted by the model.
+
+**Reports on Productivity and Key Quality Metrics**: The Radiology Insights model extracted information can be used to generate reports and support analytics for a team of radiologists.
+
+Based on the extracted information, dashboards and retrospective analyses can provide insight on productivity and key quality metrics.
+The insights can be used to guide improvement efforts, minimize errors, and improve report quality and consistency.
+
+The RI model isn't creating dashboards but delivers extracted information. The information can be aggregated by a user for research and administrative purposes. The model is stateless.
+
+## Language support
+
+The service currently supports the English language.
+
+## Limits and quotas
+
+For the Public Preview, you can select the Free F0 SKU. The official pricing will be released after Public Preview.
+
+## Next steps
+
+Get started using the Radiology Insights model:
+
+>[!div class="nextstepaction"]
+> [Deploy the service via the portal](../deploy-portal.md)
azure-health-insights Support And Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/support-and-help.md
+
+ Title: Radiology Insights support and help options
+
+description: How to obtain help and support for questions and problems when you create applications that use the Radiology Insights model
+++++ Last updated : 12/12/2023++++
+# Radiology Insights model support and help options
+
+Are you just starting to explore the functionality of the Radiology Insights model? Perhaps you're implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for Azure AI Health Insights.
+
+## Create an Azure support request
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+* [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)
+* [Azure portal for the United States government](https://portal.azure.us)
++
+## Post a question on Microsoft Q&A
+
+For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure?product=all), Azure's preferred destination for community support.
azure-health-insights Transparency Note https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/transparency-note.md
+
+ Title: Transparency Note for Radiology Insights
+description: Transparency Note for Radiology Insights
++++ Last updated : 06/12/2023+++
+# Transparency Note for Radiology Insights (Preview)
+
+## What is a Transparency Note?
+
+An AI system includes technology and the people who will use it, the people who are affected by it, and the environment in which it's deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, what its capabilities and limitations are, and how to achieve the best performance. Microsoft's Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system.
+Microsoft's Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the Microsoft AI principles.
+
+## The basics of Radiology Insights
+
+### Introduction
+
+Radiology Insights (RI) is a model that aims to provide quality checks as feedback on errors and inconsistencies (mismatches) and helps identify and communicate critical findings using the full context of the report. Follow-up recommendations and clinical findings with measurements (sizes) documented by the radiologist are also identified.
+
+- Radiology Insights is a built-in AI software model, delivered within Project Health Insights Azure AI service
+- Radiology Insights doesn't provide external references. As a Health Insights model, Radiology Insights provides inferences to the provided input, to be used as a reference for a deeper understanding of the conclusions of the model.
+
+The Radiology Insights feature of Azure Health Insights uses natural language processing techniques to process unstructured medical radiology documents. It adds several types of inferences that help the user to effectively monitor, understand, and improve financial and clinical outcomes in a radiology workflow context.
+The types of inferences currently supported by the system are: AgeMismatch, SexMismatch, LateralityDiscrepancy, CompleteOrderDiscrepancy, LimitedOrderDiscrepancy, Finding, CriticalResult, FollowupRecommendation, RadiologyProcedure, Communication.
+These inferences can be used both to support clinical analytics or to provide real time assistance during the document creation process.
+
+- RI enables you to slice and dice the radiology workflow data and create insights that matter, leading to actionable information.
+
+- RI enables you to analyze the past and improve the future by generating meaningful insights that reveal strengths and pinpoint areas needing intervention.
+
+- RI enables you to create quality checks and automated, in-line alerts for mismatches and possible critical findings.
+
+- RI improves follow-up recommendation consistency with AI-driven, automated guidance support and quality checks that drive evidence-based clinical decisions.
+
+Radiology Insights can receive unstructured text in English as part of its current offering.
+
+Radiology Insights uses TA4H for NER, extraction of relations between identified entities, surfacing assertions such as negation and conditionality, and linking detected entities to common vocabularies.
+
+### Key terms
+
+|Term | Definition |
+|--|--|
+|Document| The input of the RI model is a radiology clinical document, which next to the narrative information also contains meta-data with patient info and procedure order specifications.|
+|Inference| The output of the RI model is a list of inferences or annotations added to the document processed.|
+|AgeMismatch| Annotation triggered when there's a discrepancy between age information in meta-data and narrative text.|
+|SexMismatch| Annotation triggered when there's a discrepancy between sex information in meta-data and narrative text (includes patient references, sex-specific findings, and sex-specific body parts).|
+|LateralityDiscrepancy| Annotation triggered when there's a discrepancy between laterality information in meta-data and narrative text, or between the findings and impression sections in the report text.|
+|CompleteOrderDiscrepancy| Annotation triggered when the report text doesn't contain all relevant body parts even though the meta-data indicates that a complete study is ordered.|
+|LimitedOrderDiscrepancy| Annotation triggered when a limited selection of body parts was ordered according to the meta-data, but the report text includes all relevant body parts.|
+|Finding| Annotation that identifies and highlights an assembly of clinical information pertaining to a clinically relevant notion found in the report text.|
+|CriticalResult| Annotation that identifies and highlights findings in report text that should be communicated within a certain time limit according to regulatory compliance.|
+|FollowupRecommendation| Annotation that identifies and highlights one or more recommendations in the report text and provides a normalization of each recommendation to a set of structured data fields.|
+|RadiologyProcedure| Normalization of procedure order information present in meta-data using LOINC/RadLex codes.|
+|Communication| Annotation that identifies and highlights when the report text notes that the findings are strictly or nonstrictly communicated with the recipient.|
+
+## Capabilities
+
+### System behavior
+
+The Radiology Insights model adds several types of inferences/annotations to the original radiology clinical document. A document can trigger one or more annotations. Several instances of the same annotation in one document are possible.
+
+- AgeMismatch
+- SexMismatch
+- LateralityDiscrepancy
+- CompleteOrderDiscrepancy
+- LimitedOrderDiscrepancy
+- Finding
+- CriticalResult
+- FollowupRecommendation
+- RadiologyProcedure
+- Communication
++
+Example of a Clinical Radiology document with inferences:
+
+[![Screenshot of a radiology document with a Mismatch and Follow-up Recommendation inference.](../media/radiology-insights/radiology-doc-with-inferences.png)](../media/radiology-insights/radiology-doc-with-inferences.png#lightbox)
+
+### Functional description of the inferences in scope and examples
+
+**Age Mismatch**
+
+Age mismatches are identified by comparing the available patient age information in the patient's demographic meta-data with the report text. Conflicting age information is tagged in the text.
+
+**Sex Mismatch**
+
+Sex mismatch identification is based on a comparison of the patient sex information in the patient's demographic meta-data with patient references (female/male, he/she/his/her), gender-specific findings, and gender-specific body parts in the text.
+Opposite gender terms are tagged in the report text.
+
+**Laterality Discrepancy**
+
+A laterality, defined as "Left" (Lt, lft) or "Right" (rt, rght), along with an anatomy (body part) in the Procedure Description of the meta-data Procedure Order, is used to create laterality mismatches in the report.
+No mismatches are made on past content.
+If only a laterality and no anatomy is available in the Procedure Description, all opposite laterality in the text is tagged. For example: "left views" in the Procedure Description will list all "right" words in the report text.
+
+**CompleteOrder Discrepancy**
+
+Completeness mismatches can be made if the ordered procedure is an ultrasound for the ABDOMEN, RETROPERITONEAL, PELVIS, or US BREAST.
+A completeness mismatch is made if either the order is complete and the text isn't, or vice versa.
+
+**LimitedOrder Discrepancy**
+
+Limited-order mismatches can be made if the ordered procedure is an ultrasound for the ABDOMEN, RETROPERITONEAL, PELVIS, or US BREAST.
+A limited-order mismatch is made if a limited selection of body parts was ordered but the report text includes all relevant body parts.
+
+**Finding**
+
+A Finding is an NLU-based assembly of clinical information pertaining to a clinically relevant notion found in medical records. It's created so that it's application-independent.
+A Finding inference consists of different fields, all containing pieces that assemble a complete overview of what the Finding is.
+A Finding can consist of the following fields:
+Clinical Indicator, Anatomy, Laterality, Info about Size, Acuity, Severity, Cause, Status, Multiple check, Region features, Timing
+
+**Critical Result**
+
+Identifies and highlights potential critical results dictated in a report.
+Identifies and highlights potential ACR Actionable Findings dictated in a report.
+Critical Results are identified only in the report text (not in meta-data).
+The terms are based on the Mass Coalition for the Prevention of Medical Errors:
+<http://www.macoalition.org/Initiatives/docs/CTRstarterSet.xls>.
+
+**FollowupRecommendation**
+
+This inference surfaces a potential visit that needs to be scheduled. Each recommendation contains one modality and one body part. In addition, it contains a time, a laterality, one or multiple findings and an indication that a conditional phrase is present (true or false).
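For illustration only, a normalized follow-up recommendation carrying the elements listed above might be shaped like the following fragment. The field names here are hypothetical and don't represent the service's actual response schema; they only mirror the elements named in the description (modality, body part, time, laterality, findings, conditional phrase):

```json
{
  "kind": "followupRecommendation",
  "modality": "MRI",
  "bodyPart": "brain",
  "laterality": null,
  "time": "6 months",
  "findings": ["incidental lesion"],
  "isConditional": false
}
```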
+
+**RadiologyProcedure**
+
+Radiology Insights extracts information such as modality, body part, laterality, view and contrast from the procedure order. Ordered procedures are normalized using the Loinc codes using the LOINC/RSNA Radiology Playbook that is developed and maintained by the LOINC/RadLex Committee:
+<http://playbook.radlex.org/playbook/SearchRadlexAction>.
+
+**Communication**
+
+RI captures language in the text, typically a verb indicating communication in combination with a proper name (typically a first and last name) or a reference to a clinician or nurse. There can be several such recipients.
+Communication to nonmedical staff (secretaries, clerks, etc.) isn't tagged as communication unless the proper name of this person is mentioned.
+Language identified as past communication (for example, in history sections) or future communication (for example, "will be communicated") isn't tagged as communication.
++
+## Use cases
+
+Healthcare organizations and radiology teams must have visibility into trends and outcomes specific to radiology operations and performance, with constant attention to quality.
+The Radiology Insights model extracts valuable information from radiology documents for a radiologist.
+
+The scope of each of these use cases is always the current document the radiologist is dictating. There's no image analysis nor patient record information involved. The meta-data provides administrative context for the current report and is limited to patient age, patient sex, and the procedure that was ordered. (for example: CT of abdomen, MRI of the brain,…)
+
+Microsoft is providing this functionality as an API with the model that allows for the information in scope to be identified or extracted. The customer would incorporate the model into their own or third-party radiology reporting software and would determine the user interface for the information. Customers could be an ISV or a health system developing or modifying radiology reporting software for use within the health system.
+
+Thus, the specific use cases by customers and how the information would be presented or used by a radiologist may vary slightly from that described, but the descriptions illustrate the intended purpose of the API functionality.
+
+**Use Case 1 – Identifying Mismatches**: A radiologist is provided with possible mismatches that are identified by the model between what the radiologist has documented in the radiology report and the information that was present in the meta-data of the report. Mismatches can be identified for sex, age, and body site laterality. Mismatches identify potential discrepancies between the dictated text and the provided meta-data. They also identify potential inconsistencies within the dictated/written text. Inconsistencies are limited to gender, age, laterality, and type of imaging. This is only to allow the radiologist to rectify any potential inconsistencies during reporting. The system isn't aware of the image the radiologist is reporting on. In no way does this model provide any clinical judgment of the radiologist's interpretation of the image. The radiologist is responsible for the diagnosis and treatment of the patient and the correct documentation thereof.
+
+**Use Case 2 – Providing Clinical Findings**: The model extracts as structured data two types of clinical findings: critical findings and actionable findings. Only clinical findings that are explicitly documented in the report by the radiologist are extracted by the model. Clinical findings produced by the model aren't deduced from pieces of information in the report nor from the image. These merely serve as a potential reminder for the radiologist to communicate with the provider.
+The model produces two categories of clinical findings, Actionable Finding and Critical Result, and is based on the clinical finding, explicitly stated in the report, and criteria formulated by ACR (American College of Radiology). The model will always extract all findings explicitly documented by the radiologist. The extracted findings may be used to alert a radiologist of possible clinical findings that need to be clearly communicated and acted on in a timely fashion by a healthcare professional. Customers may also utilize the extracted findings to populate downstream or related systems (such as EHRs or autoschedule functions).
+
+**Use Case 3 – Communicating Follow-up Recommendations**: A radiologist uncovers findings for which in some cases a follow-up is recommended. The documented recommendation is extracted and normalized by the model for communication to a healthcare professional (physician).
+Follow-up recommendations aren't generated, deduced or proposed. The model merely extracts follow-up recommendation statements documented explicitly by the radiologist. Follow-up recommendations are normalized by coding to SNOMED.
+
+**Use Case 4 – Reporting Measurements**: A radiologist documents clinical findings with measurements. The model extracts clinically relevant information pertaining to the finding. The model extracts measurements the radiologist explicitly stated in the report. The model is searching for measurements that have already been taken and reviewed by the radiologist. It extracts these measurements from the relevant text-based record and structures them. The extracted and structured measurement data may be used to see trends in measurements for a particular patient over time. A customer could search a set of patients based on the measurement data extracted by the model.
+
+**Use Case 5 - Reports on Productivity and Key Quality Metrics**: The Radiology Insights model extracted information (information extracted in use cases 1 to 4) can be used to generate reports and support analytics for a team of radiologists. Based on the extracted information, dashboards and retrospective analyses can provide updates on productivity and key quality metrics to guide improvement efforts, minimize errors, and improve report quality and consistency.
+The RI model isn't creating dashboards but delivers extracted information, not deduced, that could be aggregated by a user for research and administrative purposes. The model is stateless.
+
+### Considerations when choosing other use cases
+
+Radiology Insights is a valuable tool to extract knowledge from unstructured medical text and support the radiology documentation workflow. However, given the sensitive nature of health-related data, it's important to consider your use cases carefully. In all cases, a human should be making decisions assisted by the information the system returns, and in all cases, you should have a way to review the source data and correct errors. Here are some considerations when choosing a use case:
+
+- Avoid scenarios that use this service as a medical device, to provide clinical support, or as a diagnostic tool to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions without human intervention. A qualified medical professional should always do due diligence, verify source data that might influence patient care decisions and make decisions.
+
+- Avoid scenarios related to automatically granting or denying medical services or health insurance without human intervention. Because decisions that affect coverage levels are impactful, source data should always be verified in these scenarios.
+
+- Avoid scenarios that use personal health information for a purpose not permitted by patient consent or applicable law. Health information has special protections regarding privacy and consent. Make sure that all data you use has patient consent for the way you use the data in your system or you're otherwise compliant with applicable law as it relates to the use of health information.
+
+- Carefully consider using detected inferences to update patient records without human intervention. Make sure that there's always a way to report, trace, and correct any errors to avoid propagating incorrect data to other systems. Ensure that any updates to patient records are reviewed and approved by qualified professionals.
+
+- Carefully consider using detected inferences in patient billing without human intervention. Make sure that providers and patients always have a way to report, trace, and correct data that generates incorrect billing.
+
+- Radiology Insights isn't intended to be used for administrative functions.
+
+### Limitations
+
+The specific characteristics of the input radiology document are crucial to get actionable, accurate output from the RI model. Some of the items playing an important role in this are:
+
+- Languages: Currently RI capabilities are enabled for English text only.
+- Unknown words: radiology documents sometimes contain unknown abbreviations/words or out of context homonyms or spelling mistakes.
+- Input meta-data: RI expects for certain types of inferences that input information is available in the document or in the meta data of the document.
+- Templates and formatting: RI is developed using a real world, representative set of documents, but it's possible that specific use cases and/or document templates can cause challenges for the RI logic to be accurate. As an example, nested tables or complicated structures can cause suboptimal parsing.
+- Vocabulary & descriptions: RI is developed and tested on real world documents. However, natural language is rich and description of certain clinical facts can vary over time possibly impacting the output of the logic.
+
+### System performance
+
+The performance of the system can be assessed by computing statistics based on true positive, true negative, false positive, and false negative instances. To do so, a representative set of documents has to be built and annotated with the expected outcomes. The output of RI can then be compared with the desired output to determine the accuracy numbers.
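The arithmetic behind such an assessment is standard and independent of the service itself. As a minimal sketch (generic metric computation, not part of the Radiology Insights API), precision, recall, and F1 can be derived from confusion-matrix counts like this:

```python
def classification_metrics(tp: int, fp: int, fn: int) -> dict:
    """Compute precision, recall, and F1 from confusion-matrix counts.

    tp: inferences that match the annotated golden truth
    fp: inferences the model produced that aren't in the golden truth
    fn: golden-truth inferences the model missed
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}


# Example: 8 inferences matched the golden truth, 2 were spurious, 2 were missed.
# Precision and recall are both 8/10 = 0.8 here.
metrics = classification_metrics(tp=8, fp=2, fn=2)
```

True negatives aren't needed for these three metrics, which is why inference extraction tasks are usually reported with precision/recall/F1 rather than plain accuracy.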
+
+The main reasons for Radiology Insights to trigger False Positive / False Negative output are:
+
+- Input document not containing all necessary meta information
+- Input document format and formatting (Section headings, Punctuation, ...)
+- Non English text (partial)
+- Unknown words (abbreviations, misspellings, …)
+- Issues with parsing complex formatting (nested tables, …)
+
+## Evaluation of Radiology Insights
+
+### Evaluation methods
+
+Radiology Insights logic is developed and evaluated using a large set of real-world clinical radiology documents. A training set of 5,000+ documents annotated by human experts is used to implement and refine the logic triggering the RI inferences. Part of this set is randomly sampled from a corpus provided by a US medical center and focuses mostly on adult patients.
+
+The set used provides almost equal representation of US-based male and female patients, and adequate representation of every age group. Note that no further analysis of the training data representativeness (for example, geographic, demographic, or ethnographic representation) is done, since the data doesn't include that type of meta-data. The train set and other evaluation sets used are constructed to ensure that all types of inferences are present for different patient characteristics (age, sex).
+Accuracy or regression of the logic is tested using unit and functional tests covering the complete logic scope. Generalization of the RI models is assessed by using held-out sets of documents sharing the same characteristics as the train set.
+
+Targeted minimum performance levels for each inference across the complete population are evaluated, tracked and reviewed with Subject matter experts.
+All underlying core NLP & NLU components are separately checked and reviewed using specific testsets.
+
+### Evaluation results
+
+Evaluation metrics used are precision, recall, and F1 scoring when manual golden truth annotations are present.
+Regression testing is done via discrepancy analysis and human expert feedback cycles.
+It was observed that the inferences and the medical info surfaced add value in the intended use cases targeted, and have a positive effect on the radiology workflow.
+
+### Evaluating and integrating Radiology Insights for your use
+
+When you're getting ready to deploy Radiology Insights, the following activities help set you up for success:
+
+- Understand what it can do: Fully assess the capabilities of RI to understand its capabilities and limitations. Understand how it performs in your scenario and context.
+
+- Test with real, diverse data: Understand how RI performs in your scenario by thoroughly testing it with real-life conditions and data that reflect the diversity in your users, geography, and deployment contexts. Small datasets, synthetic data, and tests that don't reflect your end-to-end scenario are unlikely to sufficiently represent your production performance.
+
+- Respect an individual's right to privacy: Only collect or use data and information from individuals for lawful and justifiable purposes. Use only the data and information that you have consent to use or are legally permitted to use.
+
+- Legal review: Obtain appropriate legal review of your solution, particularly if you use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and any risks that need to be mitigated prior to use. It's your responsibility to mitigate such risks and resolve any issues that might come up.
+
+- System review: If you plan to integrate and responsibly use an AI-powered product or feature into an existing system for software or customer or organizational processes, take time to understand how each part of your system is affected. Consider how your AI solution aligns with Microsoft Responsible AI principles.
+
+- Human in the loop: Keep a human in the loop and include human oversight as a consistent pattern area to explore. This means constant human oversight of the AI-powered product or feature and ensuring that humans make any decisions that are based on the model's output. To prevent harm and to manage how the AI model performs, ensure that humans have a way to intervene in the solution in real time.
+
+- Security: Ensure that your solution is secure and that it has adequate controls to preserve the integrity of your content and prevent unauthorized access.
+
+- Customer feedback loop: Provide a feedback channel that users and individuals can use to report issues with the service after deployment. After you deploy an AI-powered product or feature, it requires ongoing monitoring and improvement. Have a plan and be ready to implement feedback and suggestions for improvement.
azure-health-insights Request Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/request-info.md
- Title: Azure AI Health Insights request info
-description: this article describes the required properties to interact with Azure AI Health Insights
----- Previously updated : 02/17/2023---
-# Azure AI Health Insights request info
-
-This page describes the request models and parameters that are used to interact with Azure AI Health Insights service.
-
-## Request
-The generic part of Azure AI Health Insights request, common to all models.
-
-Name |Required|Type |Description
|--||--
-`patients`|yes |Patient[]|The list of patients, including their clinical information and data.
--
-## Patient
-A patient record, including their clinical information and data.
-
-Name|Required|Type |Description
--|--||-
-`id` |yes |string |A given identifier for the patient. Has to be unique across all patients in a single request.
-`info`|no |PatientInfo |Patient structured information, including demographics and known structured clinical information.
-`data`|no |PatientDocument|Patient unstructured clinical data, given as documents.
---
-## PatientInfo
-Patient structured information, including demographics and known structured clinical information.
-
-Name |Required|Type |Description
-|--|-|--
-`gender` |no |string |[ female, male, unspecified ]
-`birthDate` |no |string |The patient's date of birth.
-`clinicalInfo`|no |ClinicalCodeElement|A piece of clinical information, expressed as a code in a clinical coding system.
-
-## ClinicalCodeElement
-A piece of clinical information, expressed as a code in a clinical coding system.
-
-Name |Required|Type |Description
-|--||-
-`system`|yes |string|The clinical coding system, for example ICD-10, SNOMED-CT, UMLS.
-`code` |yes |string|The code within the given clinical coding system.
-`name` |no |string|The name of this coded concept in the coding system.
-`value` |no |string|A value associated with the code within the given clinical coding system.
--
-## PatientDocument
-A clinical unstructured document related to a patient.
-
-Name |Required|Type |Description
-|--||--
-`type ` |yes |string |[ note, fhirBundle, dicom, genomicSequencing ]
-`clinicalType` |no |string |[ consultation, dischargeSummary, historyAndPhysical, procedure, progress, imaging, laboratory, pathology ]
-`id` |yes |string |A given identifier for the document. Has to be unique across all documents for a single patient.
-`language` |no |string |A 2 letter ISO 639-1 representation of the language of the document.
-`createdDateTime`|no |string |The date and time when the document was created.
-`content` |yes |DocumentContent|The content of the patient document.
-
-## DocumentContent
-The content of the patient document.
-
-Name |Required|Type |Description
--|--||-
-`sourceType`|yes |string|The type of the content's source.<br>If the source type is 'inline', the content is given as a string (for instance, text).<br>If the source type is 'reference', the content is given as a URI.[ inline, reference ]
-`value` |yes |string|The content of the document, given either inline (as a string) or as a reference (URI).
-
-## Next steps
-
-To get started using the service, you can
-
->[!div class="nextstepaction"]
-> [Deploy the service via the portal](deploy-portal.md)
azure-health-insights Response Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/response-info.md
- Title: Azure AI Health Insights response info
-description: this article describes the response from the service
----- Previously updated : 02/17/2023---
-# Azure AI Health Insights response info
-
-This page describes the response models and parameters that are returned by Azure AI Health Insights service.
--
-## Response
-The generic part of Azure AI Health Insights response, common to all models.
-
-Name |Required|Type |Description
-|--||
-`jobId` |yes |string|A processing job identifier.
-`createdDateTime` |yes |string|The date and time when the processing job was created.
-`expirationDateTime`|yes |string|The date and time when the processing job is set to expire.
-`lastUpdateDateTime`|yes |string|The date and time when the processing job was last updated.
-`status ` |yes |string|The status of the processing job. [ notStarted, running, succeeded, failed, partiallyCompleted ]
-`errors` |no |Error|An array of errors, if any errors occurred during the processing job.
-
-## Error
-
-Name |Required|Type |Description
--|--|-|
-`code` |yes |string |Error code
-`message` |yes |string |A human-readable error message.
-`target` |no |string |Target of the particular error. (for example, the name of the property in error.)
-`details` |no |collection|A list of related errors that occurred during the request.
-`innererror`|no |object |An object containing more specific information about the error.
-
-## Next steps
-
-To get started using the service, you can
-
->[!div class="nextstepaction"]
-> [Deploy the service via the portal](deploy-portal.md)
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/get-started.md
Once deployment is complete, you use the Azure portal to navigate to the newly c
## Submit a request and get results To send an API request, you need your Azure AI services account endpoint and key.
-![Screenshot of the Keys and Endpoints for the Trial Matcher.](../media/keys-and-endpoints.png)
+
+[![Screenshot of the Keys and Endpoints for the Trial Matcher.](../media/keys-and-endpoints.png)](../media/keys-and-endpoints.png#lightbox)
> [!IMPORTANT]
-> The Trial Matcher is an asynchronous API. Trial Matcher prediction is performed upon receipt of the API request and the results are returned asynchronously. The API results are available for 1 hour from the time the request was ingested and is indicated in the response. After the time period, the results are purged and are no longer available for retrieval.
+> The Trial Matcher is an asynchronous API. Trial Matcher prediction is performed upon receipt of the API request, and the results are returned asynchronously. The API results are available for 24 hours from the time the request was ingested, as indicated in the response. After that time period, the results are purged and are no longer available for retrieval.
### Example Request
Ocp-Apim-Subscription-Key: {your-cognitive-services-api-key}
```
+You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/trial-matcher/create-job).
The response includes the operation-location in the response header. The value looks similar to the following URL: ```https://eastus.api.cognitive.microsoft.com/healthinsights/trialmatcher/jobs/b58f3776-c6cb-4b19-a5a7-248a0d9481ff?api_version=2022-01-01-preview```
An example response:
} ```
+You can also find a full view of the [response parameters here](/rest/api/cognitiveservices/healthinsights/trial-matcher/get-job).
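+Because the API is asynchronous, a client first submits the job and then polls the `operation-location` URL until the job reaches a terminal status. The following Python sketch illustrates that pattern using only the standard library; the endpoint, key, and API version shown are placeholder assumptions, not values taken from this article:
+
+```python
+import json
+import time
+import urllib.request
+
+
+def build_request(endpoint: str, key: str, body: dict) -> urllib.request.Request:
+    """Build the job-creation request. The endpoint, key, and api-version
+    are placeholders; substitute your own resource's values."""
+    return urllib.request.Request(
+        url=f"{endpoint}/healthinsights/trialmatcher/jobs?api-version=2023-03-01-preview",
+        data=json.dumps(body).encode("utf-8"),
+        headers={
+            "Content-Type": "application/json",
+            "Ocp-Apim-Subscription-Key": key,
+        },
+        method="POST",
+    )
+
+
+def poll(operation_location: str, key: str, interval: float = 2.0,
+         timeout: float = 120.0) -> dict:
+    """Poll the operation-location URL until the job status is terminal."""
+    deadline = time.monotonic() + timeout
+    while time.monotonic() < deadline:
+        req = urllib.request.Request(
+            operation_location,
+            headers={"Ocp-Apim-Subscription-Key": key},
+        )
+        with urllib.request.urlopen(req) as resp:
+            result = json.load(resp)
+        if result.get("status") in ("succeeded", "failed", "partiallyCompleted"):
+            return result
+        time.sleep(interval)
+    raise TimeoutError("Trial Matcher job did not finish in time")
+```
+
+In a real client, you would send `build_request(...)` with `urllib.request.urlopen`, read the `operation-location` response header, and pass it to `poll`. Remember that results are purged after the retention window, so retrieve them promptly.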
## Data limits
azure-health-insights Inferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/inferences.md
+ # Trial Matcher inference information The result of the Trial Matcher model includes a list of inferences made regarding the patient. For each trial that was queried for the patient, the model returns an indication of whether the patient appears eligible or ineligible for the trial. If the model concluded the patient is ineligible for a trial, it also provides a piece of evidence to support its conclusion (unless the ```evidence``` flag was set to false).
+> [!NOTE]
+> The examples below are based on API version: 2023-03-01-preview.
+ ## Example model result ```json "inferences":[
azure-health-insights Model Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/model-configuration.md
Last updated 02/02/2023
++ # Trial Matcher model configuration The Trial Matcher includes a built-in Knowledge graph, which uses trials taken from [clinicaltrials.gov](https://clinicaltrials.gov/), and is being updated periodically.
The Trial Matcher includes a built-in Knowledge graph, which uses trials taken f
When you're matching patients to trials, you can define a list of filters to query a subset of clinical trials. Each filter can be defined based on ```trial conditions```, ```types```, ```recruitment statuses```, ```sponsors```, ```phases```, ```purposes```, ```facility names```, ```locations```, or ```trial IDs```. - Specifying multiple values for the same filter category results in a trial set that is a union of the two sets.
+> [!NOTE]
+> The examples below are based on API version: 2023-03-01-preview.
In the following configuration, the model queries trials that are in recruitment status ```recruiting``` or ```not yet recruiting```.
To provide a custom trial, the input to the Trial Matcher service should include
"Id":"CustomTrial1", "EligibilityCriteriaText":"INCLUSION CRITERIA:\n\n 1. Patients diagnosed with Diabetes\n\n2. patients diagnosed with cancer\n\nEXCLUSION CRITERIA:\n\n1. patients with RET gene alteration\n\n 2. patients taking Aspirin\n\n3. patients treated with Chemotherapy\n\n", "Demographics":{
- "AcceptedGenders":[
- "Female"
- ],
- "AcceptedAgeRange":{
+ "AcceptedSex":"female",
+ "AcceptedAgeRange":{
"MinimumAge":{ "Unit":"Years", "Value":0
azure-health-insights Patient Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/patient-info.md
+
+ # Trial Matcher patient info Trial Matcher uses patient information to match relevant patient(s) with the clinical trial(s). You can provide the information in four different ways:
Trial Matcher uses patient information to match relevant patient(s) with the cli
- gradual Matching (question and answer) - JSON key/value
+> [!NOTE]
+> The examples below are based on API version: 2023-03-01-preview.
+ ## Unstructured clinical note Patient data can be provided to the Trial Matcher as an unstructured clinical note.
The Trial Matcher performs a prior step of language understanding to analyze the
When providing patient data in clinical notes, use ```note``` value for ```Patient.PatientDocument.type```. Currently, Trial Matcher only supports one clinical note per patient. ++ The following example shows how to provide patient information as an unstructured clinical note: ```json
Entity type concepts are concepts that are grouped by common entity types, such
When entity type concepts are sent by customers to the Trial Matcher as part of the patient's clinical info, customers are expected to concatenate the entity type string to the value, separated with a semicolon. + Example concept from neededClinicalInfo API response: ```json {
azure-health-insights Trial Matcher Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/trial-matcher-modes.md
Trial Matcher provides two main modes of operation to users of the service: a **
On the diagram, you can see how patients' or clinical trials can be found through the two different modes. ![Diagram that shows the Trial Matcher operation modes.](../media/trial-matcher/overview.png) -
+[ ![Diagram that shows the Trial Matcher operation modes.](../media/trial-matcher/overview.png)](../media/trial-matcher/overview.png#lightbox)
## Patient centric
azure-monitor Azure Monitor Agent Mma Removal Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-mma-removal-tool.md
Title: Azure Monitor Agent MMA legacy agent removal tool
-description: This article describes a PowerShell script used to remove MMA agent from systems that users have migrated to AMA.
+ Title: MMA Discovery and Removal Utility
+description: This article describes a PowerShell script to remove the legacy agent from systems that have migrated to the Azure Monitor Agent.
Last updated 01/09/2024
-# Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from Log Analytics Agent to Azure Monitor Agent and track the status of the migration in my account.
+# Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from the Log Analytics Agent to the Azure Monitor Agent and track the status of the migration in my account.
-# MMA Discovery and Removal Tool (Preview)
-After you migrate your machines to AMA, you need to remove the MMA agent to avoid duplication of logs. AzTS MMA Discovery and Removal Utility can centrally remove MMA extension from Azure Virtual Machine (VMs), Azure Virtual Machine Scale Sets and Azure Arc Servers from a tenant.
-The utility works in two steps
-1. Discovery ΓÇô First the utility creates an inventory of all machines that have the MMA agents installed. We recommend that no new VMs, Virtual Machine Scale Sets or Azure Arc Servers with MMA extension are created while the utility is running.
-2. Removal - Second the utility selects machines with both MMA and AMA and removes the MMA extension. You can disable this step and run after validating the list of machines. There's an option remove from machines that only have the MMA agent, but we recommended that you first migrate all dependencies to AMA and then remove MMA.
+# MMA Discovery and Removal Utility
+
+After you migrate your machines to the Azure Monitor Agent (AMA), you need to remove the Log Analytics Agent (also called the Microsoft Management Agent or MMA) to avoid duplication of logs. The Azure Tenant Security Solution (AzTS) MMA Discovery and Removal Utility can centrally remove the MMA extension from Azure virtual machines (VMs), Azure virtual machine scale sets, and Azure Arc servers from a tenant.
+
+The utility works in two steps:
+
+1. *Discovery*: The utility creates an inventory of all machines that have the MMA installed. We recommend that you don't create any new VMs, virtual machine scale sets, or Azure Arc servers with the MMA extension while the utility is running.
+
+2. *Removal*: The utility selects machines that have both the MMA and the AMA and removes the MMA extension. You can disable this step and run it after you validate the list of machines. There's an option to remove the extension from machines that have only the MMA, but we recommend that you first migrate all dependencies to the AMA and then remove the MMA.
## Prerequisites
-You do all the setup steps in a [Visual Studio Code](https://code.visualstudio.com/) with the [PowerShell Extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell).
--
-## Download Deployment package
- The package contains:
-- Bicep templates, which contain resource configuration details that you create as part of setup. -- Deployment set up scripts, which provides the cmdlet to run installation. -- Download deployment package zip from [here](https://github.com/azsk/AzTS-docs/raw/main/TemplateFiles/AzTSMMARemovalUtilityDeploymentFiles.zip) to your local machine. -- Extract zip to local folder location.-- Unblock the files with this script.-
- ``` PowerShell
- Get-ChildItem -Path "<Extracted folder path>" -Recurse | Unblock-File
- ```
-
-## Set up the tool
-
-### [Single Tenant](#tab/Single)
-
-You perform set up in two steps:
-1. Go to deployment folder and load consolidated setup script. You must have **Owner** access on the subscription.
-
- ``` PowerShell
- CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
- . ".\MMARemovalUtilitySetupConsolidated.ps1"
- ```
-
-2. The Install-AzTSMMARemovalUtilitySolutionConsolidated does the following operations:
- - Installs required Az modules.
- - Set up remediation user-assigned managed identity.
- - Prompts and collects onboarding details for usage telemetry collection based on user preference.
- - Creates or updates the RG.
- - Creates or updates the resources with MIs assigned.
- - Creates or updates the monitoring dashboard.
- - Configures target scopes.
-
-You must log in to Azure Account using the following PowerShell command.
-``` PowerShell
-$TenantId = "<TenantId>"
-Connect-AzAccount -Tenant $TenantId
-```
-Run the setup script
-``` PowerShell
-$SetupInstallation = Install-AzTSMMARemovalUtilitySolutionConsolidated `
- -RemediationIdentityHostSubId <MIHostingSubId> `
- -RemediationIdentityHostRGName <MIHostingRGName> `
- -RemediationIdentityName <MIName> `
- -TargetSubscriptionIds @("<SubId1>","<SubId2>","<SubId3>") `
- -TargetManagementGroupNames @("<MGName1>","<MGName2>","<MGName3>") `
- -TenantScope `
- -SubscriptionId <HostingSubId> `
- -HostRGName <HostingRGName> `
- -Location <Location> `
- -AzureEnvironmentName <AzureEnvironmentName>
-```
+Do all the setup steps in [Visual Studio Code](https://code.visualstudio.com/) with the [PowerShell extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell). You need:
-Parameters
+- Windows 10 or later, or Windows Server 2019 or later.
+- PowerShell 5.0 or later. Check the version by running `$PSVersionTable`.
+- PowerShell language mode set to `FullLanguage`. Check the mode by running `$ExecutionContext.SessionState.LanguageMode` in PowerShell. For more information, see the [PowerShell reference](/powershell/module/microsoft.powershell.core/about/about_language_modes?source=recommendations).
+- Bicep. The setup scripts use Bicep to automate the installation. Check the installation by running `bicep --version`. For more information, see [Install Bicep tools](/azure/azure-resource-manager/bicep/install#azure-powershell).
+- A [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview) that has **Reader**, **Virtual Machine Contributor**, and **Azure Arc ScVmm VM Contributor** access on target scopes.
+- A new resource group to contain all the Azure resources that the setup automation creates automatically.
+- Appropriate permission on the configured scopes. To grant the remediation user-assigned managed identity the previously mentioned roles on the target scopes, you must have **User Access Administrator** or **Owner** permission. For example, if you're configuring the setup for a particular subscription, you must have the **User Access Administrator** role assignment on that subscription so that the script can provide the permissions for the remediation user-assigned managed identity.
-|Param Name | Description | Required |
-|:-|:-|:-:|
-|RemediationIdentityHostSubId| Subscription ID to create remediation resources | Yes |
-|RemediationIdentityHostRGName| New ResourceGroup name to create remediation. Defaults to 'AzTS-MMARemovalUtility-RG'| No |
-|RemediationIdentityName| Name of the remediation MI| Yes |
-|TargetSubscriptionIds| List of target subscription ID(s) to run on | No |
-|TargetManagementGroupNames| List of target management group name(s) to run on | No|
-|TenantScope| Activate tenant scope and assigns roles using your tenant id| No|
-|SubscriptionId| Subscription ID where setup is installed| Yes|
-|HostRGName| New resource group name where remediation MI is created. Default value is 'AzTS-MMARemovalUtility-Host-RG'| No|
-|Location| Location DC where setup is created. Default value is 'EastUS2'| No|
-|AzureEnvironmentName| Azure environment where solution is to be installed: AzureCloud, AzureGovernmentCloud. Default value is 'AzureCloud'| No|
-
-### [MultiTenant](#tab/MultiTenant)
-
-In this section, we walk you through the steps for setting up multitenant AzTS MMA Removal Utility. This set up may take up to 30 minutes and has 9 steps
-
-1. Load setup script
-Point the current path to the folder containing the extracted deployment package and run the setup script.
-
- ``` PowerShell
- CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
- . ".\MMARemovalUtilitySetup.ps1"
+## Download the deployment package
+
+The deployment package contains:
+
+- Bicep templates, which contain resource configuration details that you create as part of setup.
+- Deployment setup scripts, which provide the cmdlet to run the installation.
+
+To install the package:
+
+1. Go to the [AzTS-docs GitHub repository](https://github.com/azsk/AzTS-docs/tree/main/TemplateFiles). Download the deployment package file, *AzTSMMARemovalUtilityDeploymentFiles.zip*, to your local machine.
+
+1. Extract the .zip file to your local folder location.
+
+1. Unblock the files by using this script:
+
+ ``` PowerShell
+ Get-ChildItem -Path "<Extracted folder path>" -Recurse | Unblock-File
+ ```
+
+## Set up the utility
+
+### [Single tenant](#tab/single-tenant)
+
+1. Go to the deployment folder and load the consolidated setup script. You must have **Owner** access on the subscription.
+
+ ``` PowerShell
+ CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
+ . ".\MMARemovalUtilitySetupConsolidated.ps1"
+ ```
+
+1. Sign in to the Azure account by using the following PowerShell command:
+
+ ``` PowerShell
+ $TenantId = "<TenantId>"
+ Connect-AzAccount -Tenant $TenantId
+ ```
+
+1. Run the setup script to perform the following operations:
+
+ - Install required Az modules.
+ - Set up the remediation user-assigned managed identity.
+ - Prompt and collect onboarding details for usage telemetry collection based on user preference.
+ - Create or update the resource group.
+ - Create or update the resources with assigned managed identities.
+ - Create or update the monitoring dashboard.
+ - Configure target scopes.
+
+ ``` PowerShell
+ $SetupInstallation = Install-AzTSMMARemovalUtilitySolutionConsolidated `
+ -RemediationIdentityHostSubId <MIHostingSubId> `
+ -RemediationIdentityHostRGName <MIHostingRGName> `
+ -RemediationIdentityName <MIName> `
+ -TargetSubscriptionIds @("<SubId1>","<SubId2>","<SubId3>") `
+ -TargetManagementGroupNames @("<MGName1>","<MGName2>","<MGName3>") `
+ -TenantScope `
+ -SubscriptionId <HostingSubId> `
+ -HostRGName <HostingRGName> `
+ -Location <Location> `
+ -AzureEnvironmentName <AzureEnvironmentName>
+ ```
+
+ The script contains these parameters:
+
+ |Parameter name | Description | Required |
+ |:-|:-|:-:|
+ |`RemediationIdentityHostSubId`| Subscription ID to create remediation resources. | Yes |
+ |`RemediationIdentityHostRGName`| New resource group name to create remediation. Defaults to `AzTS-MMARemovalUtility-RG`.| No |
+ |`RemediationIdentityName`| Name of the remediation managed identity.| Yes |
+ |`TargetSubscriptionIds`| List of target subscription IDs to run on. | No |
+ |`TargetManagementGroupNames`| List of target management group names to run on. | No|
+ |`TenantScope`| Tenant scope for assigning roles via your tenant ID.| No|
+ |`SubscriptionId`| ID of the subscription where the setup is installed.| Yes|
+ |`HostRGName`| Name of the new resource group where the remediation managed identity is created. Default value is `AzTS-MMARemovalUtility-Host-RG`.| No|
+ |`Location`| Azure region where the setup is created. Default value is `EastUS2`.| No|
+ |`AzureEnvironmentName`| Azure environment where the solution is installed: `AzureCloud` or `AzureGovernmentCloud`. Default value is `AzureCloud`.| No|
+
+### [Multitenant](#tab/multitenant)
+
+This section walks you through the steps for setting up the multitenant AzTS MMA Discovery and Removal Utility. This setup might take up to 30 minutes.
+
+#### Load the setup script
+
+Point the current path to the folder that contains the extracted deployment package and run the setup script:
+
+``` PowerShell
+CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
+. ".\MMARemovalUtilitySetup.ps1"
```
-2. Installing required Az modules.
-Az modules contain cmdlets to deploy Azure resources, which are used to create resources. Install the required Az PowerShell Modules using this command. For more details of Az Modules, refer [link](/powershell/azure/install-az-ps). You must point current path to the extracted folder location.
+#### Install required Az modules
+
+Az PowerShell modules contain cmdlets to deploy Azure resources. Install the required Az modules by using the following command. For more information about Az modules, see [How to install Azure PowerShell](/powershell/azure/install-az-ps). You must point the current path to the extracted folder location.
``` PowerShell Set-Prerequisites ```
-3. Set up multitenant identity
-The Microsoft Entra ID Application identity is used to associate the MEI Application using service principal. You perform the following operations. You must log in to the Microsoft Entra ID account where you want to install the Removal Utility setup using the PowerShell command.
- - Creates a new multitenant MEI application if not provided with pre-existing MEI application objectId.
- - Creates password credentials for the MEI application.
+#### Set up multitenant identity
+
+In this step, you set up a Microsoft Entra application identity by using a service principal. You must use PowerShell to sign in to the Microsoft Entra account where you want to install the MMA Discovery and Removal Utility setup.
+
+You perform the following operations:
+
+- Create a multitenant Microsoft Entra application if one isn't provided with a preexisting Microsoft Entra application object ID.
+- Create password credentials for the Microsoft Entra application.
``` PowerShell Disconnect-AzAccount
$Identity.ObjectId
$Identity.Secret ```
-Parameters
+The script contains these parameters:
-|Param Name| Description | Required |
+|Parameter name| Description | Required |
|:-|:-|:-:|
-| DisplayName | Display Name of the Remediation Identity| Yes |
-| ObjectId | Object Id of the Remediation Identity | No |
-| AdditionalOwnerUPNs | User Principal Names (UPNs) of the owners for the App to be created | No |
+| `DisplayName` | Display name of the remediation identity.| Yes |
+| `ObjectId` | Object ID of the remediation identity. | No |
+| `AdditionalOwnerUPNs` | User principal names (UPNs) of the owners for the app to be created. | No |
+
+#### Set up storage
-4. Set up secrets storage
-In this step you create secrets storage. You must have owner access on the subscription to create a new RG. You perform the following operations.
- - Creates or updates the resource group for Key Vault.
- - Creates or updates the Key Vault.
- - Store the secret.
+In this step, you set up storage for secrets. You must have **Owner** access on the subscription to create a resource group.
+
+You perform the following operations:
+
+- Create or update the resource group for a key vault.
+- Create or update the key vault.
+- Store the secret.
``` PowerShell $KeyVault = Set-AzTSMMARemovalUtilitySolutionSecretStorage `
$KeyVault.Outputs.secretURI.Value
$KeyVault.Outputs.logAnalyticsResourceId.Value ```
-Parameters
+The script contains these parameters:
-|Param Name|Description|Required?
+|Parameter name|Description|Required|
|:-|:-|:-|
-| SubscriptionId | Subscription ID where keyvault is created.| Yes |
-| ResourceGroupName | Resource group name where Key Vault is created. Should be in a different RG from the set up RG | Yes |
-|Location| Location DC where Key Vault is created. For better performance, we recommend creating all the resources related to set up to be in one location. Default value is 'EastUS2'| No |
-|KeyVaultName| Name of the Key Vault that is created.| Yes |
-|AADAppPasswordCredential| Removal Utility MEI application password credentials| Yes |
-
-5. Set up Installation
-This step install the MMA Removal Utility, which discovers and removes MMA agents installed on Virtual Machines. You must have owner access to the subscription where the setup is created. We recommend that you use a new resource group for the tool. You perform the following operations.
- - Prompts and collects onboarding details for usage telemetry collection based on user preference.
- - Creates the RG if it doesn't exist.
- - Creates or updates the resources with MIs.
- - Creates or updates the monitoring dashboard.
+| `SubscriptionId` | Subscription ID where the key vault is created.| Yes |
+| `ResourceGroupName` | Name of the resource group where the key vault is created. It should be a different resource group from the setup resource group. | Yes |
|`Location`| Azure region where the key vault is created. For better performance, we recommend creating all the resources related to setup in one location. Default value is `EastUS2`.| No |
+|`KeyVaultName`| Name of the key vault that's created.| Yes |
+|`AADAppPasswordCredential`| Microsoft Entra application password credentials for the MMA Discovery and Removal Utility.| Yes |
+
+#### Set up installation
+
+In this step, you install the MMA Discovery and Removal Utility. You must have **Owner** access to the subscription where the setup is created. We recommend that you use a new resource group for the utility.
+
+You perform the following operations:
+
+- Prompt and collect onboarding details for usage telemetry collection based on user preference.
+- Create the resource group if it doesn't exist.
+- Create or update the resources with managed identities.
+- Create or update the monitoring dashboard.
``` PowerShell $Solution = Install-AzTSMMARemovalUtilitySolution `
$Solution = Install-AzTSMMARemovalUtilitySolution `
$Solution.Outputs.internalMIObjectId.Value ```
-Parameters
+The script contains these parameters:
-| Param Name | Description | Required |
+| Parameter name | Description | Required |
|:-|:-|:-|
-| SubscriptionId | Subscription ID where setup is created | Yes |
-| HostRGName | Resource group name where setup is created Default value is 'AzTS-MMARemovalUtility-Host-RG'| No |
-| Location | Location DC where setup is created. For better performance, we recommend hosting the MI and Removal Utility in the same location. Default value is 'EastUS2'| No |
-| SupportMultiTenant | Switch to support multitenant set up | No |
-| IdentityApplicationId | MEI application Id.| Yes |
-|I dentitySecretUri | MEI application secret uri| No |
+| `SubscriptionId` | ID of the subscription where the setup is created. | Yes |
+| `HostRGName` | Name of the resource group where the setup is created. Default value is `AzTS-MMARemovalUtility-Host-RG`.| No |
| `Location` | Azure region where the setup is created. For better performance, we recommend hosting the managed identity and the MMA Discovery and Removal Utility in the same location. Default value is `EastUS2`.| No |
+| `SupportMultiTenant` | Switch to support multitenant setup. | No |
+| `IdentityApplicationId` | Microsoft Entra application ID.| Yes |
+| `IdentitySecretUri` | Microsoft Entra application secret URI.| No |
+
+#### Grant the internal remediation identity access to the key vault
-6. Grant internal remediation identity with access to Key Vault
-In this step a user assigned managed ident is created to enable function apps to read the Key Vault for authentication. You must have Owner access to the RG.
+In this step, you create a user-assigned managed identity to enable function apps to read the key vault for authentication. You must have **Owner** access to the resource group.
``` PowerShell
Grant-AzTSMMARemediationIdentityAccessOnKeyVault `
    -SubscriptionId <HostingSubId> `
    -ResourceId <KeyVaultResourceId> `
    -UserAssignedIdentityObjectId <MIObjectId> `
    -SendAlertsToEmailIds @("<EmailId1>","<EmailId2>") `
    -SecretUri <KeyVaultSecretUri> `
    -LAWorkspaceResourceId <LAWorkspaceResourceId> `
    -DeployMonitoringAlert
```
-Parameters
+The script contains these parameters:
-| Param Name | Description | Required |
+| Parameter name | Description | Required |
|:-|:-|:-:|
-|SubscriptionId| Subscription ID where setup is created | Yes |
-|ResourceId| Resource Id of existing key vault | Yes |
-|UserAssignedIdentityObjectId| Object ID of your managed identity | Yes |
-|SendAlertsToEmailIds| User email Ids to whom alerts should be sent| No, Yes if DeployMonitoringAlert switch is enabled |
-| SecretUri | Key Vault SecretUri of the Removal Utility App's credentials | No, Yes if DeployMonitoringAlert switch is enabled |
-| LAWorkspaceResourceId | ResourceId of the LA Workspace associated with key vault| No, Yes if DeployMonitoringAlert switch is enabled.|
-| DeployMonitoringAlert | Create alerts on top of Key Vault auditing logs | No, Yes if DeployMonitoringAlert switch is enabled |
-
-7. Set up runbook for managing key vault IP ranges
-This step creates a secure Key Vault with public network access disabled. IP Ranges for function apps must be allowed access to the Key Vault. You must have owner access to the RG. You perform the following operations:
- - Creates or updates the automation account.
- - Grants access for automation account using system-assigned managed identity on Key Vault.
- - Set up the runbook with script to fetch the IP ranges published by Azure every week.
- - Runs the runbook one-time at the time of set up and schedule task to run every week.
-
-```
+|`SubscriptionId`| ID of the subscription where the setup is created. | Yes |
+|`ResourceId`| Resource ID of the existing key vault. | Yes |
+|`UserAssignedIdentityObjectId`| Object ID of your managed identity. | Yes |
+|`SendAlertsToEmailIds`| User email IDs to whom alerts should be sent.| No; yes if the `DeployMonitoringAlert` switch is enabled |
+| `SecretUri` | Key vault secret URI of the MMA Discovery and Removal Utility app's credentials. | No; yes if the `DeployMonitoringAlert` switch is enabled |
+| `LAWorkspaceResourceId` | Resource ID of the Log Analytics workspace associated with the key vault.| No; yes if the `DeployMonitoringAlert` switch is enabled.|
+| `DeployMonitoringAlert` | Creation of alerts on top of the key vault's auditing logs. | No; yes if the `DeployMonitoringAlert` switch is enabled |
+
+#### Set up a runbook for managing key vault IP ranges
+
+In this step, you create a secure key vault with public network access disabled. IP ranges for function apps must be allowed access to the key vault. You must have **Owner** access to the resource group.
+
+You perform the following operations:
+
+- Create or update the automation account.
+- Grant access for the automation account by using a system-assigned managed identity on the key vault.
+- Set up the runbook with a script to fetch the IP ranges that Azure publishes every week.
+- Run the runbook one time at the time of setup, and schedule a task to run every week.
+
+``` PowerShell
Set-AzTSMMARemovalUtilityRunbook `
    -SubscriptionId <HostingSubId> `
    -ResourceGroupName <HostingRGName> `
    -Location <Location> `
    -FunctionAppUsageRegion <FunctionAppUsageRegion> `
    -KeyVaultResourceId $KeyVault.Outputs.keyVaultResourceId.Value
```
-Parameters
+The script contains these parameters:
-|Param Name |Description | Required|
+|Parameter name |Description | Required|
|:-|:-|:-|
-|SubscriptionId| Subscription ID where the automation account and key vault are present.| Yes|
-|ResourceGroupName| Name of resource group where the automation account and key vault are | Yes|
-|Location| Location where your automation account is created. For better performance, we recommend creating all the resources related to setup in the same location. Default value is 'EastUS2'| No|
-|FunctionAppUsageRegion| Location of dynamic ip addresses that are allowed on keyvault. Default location is EastUS2| Yes|
-|KeyVaultResourceId| Resource ID of the keyvault for ip addresses that are allowed.| Yes|
+|`SubscriptionId`| ID of the subscription that includes the automation account and key vault.| Yes|
+|`ResourceGroupName`| Name of resource group that contains the automation account and key vault. | Yes|
+|`Location`| Location where your automation account is created. For better performance, we recommend creating all the resources related to setup in the same location. Default value is `EastUS2`.| No|
+|`FunctionAppUsageRegion`| Location of dynamic IP addresses that are allowed on the key vault. Default location is `EastUS2`.| Yes|
+|`KeyVaultResourceId`| Resource ID of the key vault for allowed IP addresses.| Yes|
+
+#### Set up SPNs and grant required roles for each tenant
-8. Set up SPN and grant required roles for each tenant
-In this step you create SPNs for each tenant and grant permission on each tenant. Set up requires Reader, Virtual Machine Contributor, and Azure Arc ScVmm VM contributor access on your scopes. Scopes Configured can be a Tenant/ManagementGroup(s)/Subscription(s) or both ManagementGroup(s) and Subscription(s).
-For each tenant, perform the steps and make sure you have enough permissions on the other tenant for creating SPNs. You must have **User Access Administrator (UAA) or Owner** on the configured scopes. For example, to run setup on subscription 'X' you have to have UAA role assignment on subscription 'X' to grant the SPN with the required permissions.
+In this step, you create service principal names (SPNs) for each tenant and grant permission on each tenant. Setup requires **Reader**, **Virtual Machine Contributor**, and **Azure Arc ScVmm VM Contributor** access on your scopes. Configured scopes can be tenant, management group, or subscription, or they can be both management group and subscription.
+
+For each tenant, perform the steps and make sure you have enough permissions on the other tenant for creating SPNs. You must have **User Access Administrator** or **Owner** permission on the configured scopes. For example, to run the setup on a particular subscription, you must have a **User Access Administrator** role assignment on that subscription to grant the SPN with the required permissions.
``` PowerShell
$TenantId = "<TenantId>"
Connect-AzAccount -Tenant $TenantId
$SPN = Set-AzSKTenantSecuritySolutionMultiTenantIdentitySPN -AppId <AppId>
Grant-AzSKAzureRoleToMultiTenantIdentitySPN -AADIdentityObjectId $SPN.ObjectId `
    -TargetSubscriptionIds @("<SubId1>","<SubId2>") `
    -TargetManagementGroupNames @("<MGName1>","<MGName2>","<MGName3>")
```
-Parameters
-For Set-AzSKTenantSecuritySolutionMultiTenantIdentitySPN,
+The script contains these parameters for `Set-AzSKTenantSecuritySolutionMultiTenantIdentitySPN`:
-|Param Name | Description | Required |
+|Parameter name | Description | Required |
|:-|:-|:-:|
-|AppId| Your application Id that is created| Yes |
+|`AppId`| Your created application ID.| Yes |
-For Grant-AzSKAzureRoleToMultiTenantIdentitySPN,
+The script contains these parameters for `Grant-AzSKAzureRoleToMultiTenantIdentitySPN`:
-|Param Name | Description | Required|
+|Parameter name | Description | Required|
|:-|:-|:-:|
-| AADIdentityObjectId | Your identity object| Yes|
-| TargetSubscriptionIds| Your list of target subscription ID(s) to run set up on | No |
-| TargetManagementGroupNames | Your list of target management group name(s) to run set up on | No|
+| `AADIdentityObjectId` | Your identity object.| Yes|
+| `TargetSubscriptionIds`| Your list of target subscription IDs to run the setup on. | No |
+| `TargetManagementGroupNames` | Your list of target management group names to run the setup on. | No|
+
+#### Configure target scopes
-9. Configure target scopes
-You can configure target scopes using the `Set-AzTSMMARemovalUtilitySolutionScopes`
+You can configure target scopes by using `Set-AzTSMMARemovalUtilitySolutionScopes`:
``` PowerShell
$ConfiguredTargetScopes = Set-AzTSMMARemovalUtilitySolutionScopes `
    -SubscriptionId <HostingSubId> `
    -ResourceGroupName <HostingRGName> `
    -ScopesFilePath <ScopesFilePath>
```
-Parameters
-|Param Name|Description|Required|
+The script contains these parameters:
+
+|Parameter name|Description|Required|
|:-|:-|:-:|
-|SubscriptionId| Your subscription ID where setup is installed | Yes |
-|ResourceGroupName| Your resource group name where setup is installed| Yes|
-|ScopesFilePath| File path with target scope configurations. See scope configuration| Yes |
+|`SubscriptionId`| ID of your subscription where the setup is installed. | Yes |
+|`ResourceGroupName`| Name of your resource group where the setup is installed.| Yes|
+|`ScopesFilePath`| File path with target scope configurations.| Yes |
-Scope configuration file is a CSV file with a header row and three columns
+The scope configuration file is a CSV file with a header row and three columns:
| ScopeType | ScopeId | TenantId |
|:-|:-|:-|
-| Subscription | /subscriptions/abb5301a-22a4-41f9-9e5f-99badff261f8 | 72f988bf-86f1-41af-91ab-2d7cd011db47 |
-| Subscription | /subscriptions/71bdd12b-ae1d-499a-a4ea-e32d4c1d9c35 | e60f12c0-e1dc-4be1-8d86-e979a5527830 |
+| Subscription | `/subscriptions/abb5301a-22a4-41f9-9e5f-99badff261f8` | `72f988bf-86f1-41af-91ab-2d7cd011db47` |
+| Subscription | `/subscriptions/71bdd12b-ae1d-499a-a4ea-e32d4c1d9c35` | `e60f12c0-e1dc-4be1-8d86-e979a5527830` |
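One way to produce a file in this format is with plain PowerShell. This is a sketch with illustrative values; the object properties mirror the three CSV columns, and the output path is an assumption:

``` PowerShell
# Build scope rows and write them as Scopes.csv in the layout shown above.
$scopes = @(
    [pscustomobject]@{ ScopeType = 'Subscription'; ScopeId = '/subscriptions/<SubId>'; TenantId = '<TenantId>' }
)
$scopes | Export-Csv -Path '.\Scopes.csv' -NoTypeInformation
```

Pass the resulting path to the `ScopesFilePath` parameter.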
++
-## Run The Tool
+## Run the utility
-### [Discovery](#tab/Discovery)
+### [Discovery](#tab/discovery)
``` PowerShell
Update-AzTSMMARemovalUtilityDiscoveryTrigger `
    -SubscriptionId <HostingSubId> `
    -ResourceGroupName <HostingRGName> `
    -StartScopeResolverAfterMinutes 60 `
    -StartExtensionDiscoveryAfterMinutes 30
```
-Parameters
+The script contains these parameters:
-|Param Name|Description|Required?
+|Parameter name|Description|Required|
|:-|:-|:-:|
-|SubscriptionId| Subscription ID where you installed the Utility | Yes|
-|ResourceGroupName| ResourceGroup name where you installed the Utility | Yes|
-|StartScopeResolverAfterMinutes| Time in minutes to wait before running resolver | Yes (Mutually exclusive with param '-StartScopeResolverImmediatley')|
-|StartScopeResolverImmediatley | Run resolver immediately | Yes (Mutually exclusive with param '-StartScopeResolverAfterMinutes') |
-|StartExtensionDiscoveryAfterMinutes | Time in minutes to wait to run discovery (should be after resolver is done) | Yes (Mutually exclusive with param '-StartExtensionDiscoveryImmediatley')|
-|StartExtensionDiscoveryImmediatley | Run extensions discovery immediately | Yes (Mutually exclusive with param '-StartExtensionDiscoveryAfterMinutes')|
+|`SubscriptionId`| ID of the subscription where you installed the utility. | Yes|
+|`ResourceGroupName`| Name of the resource group where you installed the utility. | Yes|
+|`StartScopeResolverAfterMinutes`| Time, in minutes, to wait before running the resolver. | Yes (mutually exclusive with `-StartScopeResolverImmediately`)|
+|`StartScopeResolverImmediately` | Indicator to run the resolver immediately. | Yes (mutually exclusive with `-StartScopeResolverAfterMinutes`) |
+|`StartExtensionDiscoveryAfterMinutes` | Time, in minutes, to wait to run discovery (should be after the resolver is done). | Yes (mutually exclusive with `-StartExtensionDiscoveryImmediately`)|
+|`StartExtensionDiscoveryImmediately` | Indicator to run extension discovery immediately. | Yes (mutually exclusive with `-StartExtensionDiscoveryAfterMinutes`)|
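For example, here's a hedged sketch of a trigger run that resolves scopes immediately and starts extension discovery 30 minutes later (placeholder values; confirm the exact parameter spellings with `Get-Help Update-AzTSMMARemovalUtilityDiscoveryTrigger`):

``` PowerShell
Update-AzTSMMARemovalUtilityDiscoveryTrigger `
    -SubscriptionId <HostingSubId> `
    -ResourceGroupName <HostingRGName> `
    -StartScopeResolverImmediately `
    -StartExtensionDiscoveryAfterMinutes 30
```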
-### [Removal](#tab/Removal)
+### [Removal](#tab/removal)
By default, the removal phase is disabled. We recommend that you run it after validating the inventory of machines from the discovery step.

``` PowerShell
Update-AzTSMMARemovalUtilityRemovalTrigger `
    -SubscriptionId <HostingSubId> `
    -ResourceGroupName <HostingRGName> `
    -StartAfterMinutes 60 `
    -EnableRemovalPhase `
    -RemovalCondition 'CheckForAMAPresence'
```
-Parameters
+The script contains these parameters:
-| Param Name | Description | Required?
+| Parameter name | Description | Required |
|:-|:-|:-:|
-| SubscriptionId | Subscription ID where you installed the Utility | Yes |
-| ResourceGroupName | ResourceGroup name where you installed the Utility| Yes|
-| StartAfterMinutes | Time in minutes to wait before starting removal | Yes (Mutually exclusive with param '-StartImmediately')|
-| StartImmediately | Run removal phase immediately | Yes (Mutually exclusive with param '-StartAfterMinutes') |
-| EnableRemovalPhase | Enable removal phase | Yes (Mutually exclusive with param '-DisableRemovalPhase')|
-| RemovalCondition | MMA extension should be removed when:</br>ChgeckForAMAPresence AMA extension is present </br> SkipAMAPresenceCheck in all cases whether AMA extension is present or not) | No |
-| DisableRemovalPhase | Disable removal phase | Yes (Mutually exclusive with param '-EnableRemovalPhase')|
-
-**Know issues**
-- Removal of MMA agent in Virtual Machine Scale Set(VMSS) where orchestration mode is 'Uniform' depend on its upgrade policy. We recommend that you manually upgrade the instance if the policy is set to 'Manual.' -- If you get the error message, "The deployment MMARemovalenvironmentsetup-20233029T103026 failed with error(s). Showing 1 out of 1 error(s). Status Message: (Code:BadRequest) - We observed intermittent issue with App service deployment." Rerun the installation command with same parameter values. Command should proceed without any error in next attempt. -- Extension removal progress tile on Monitoring dashboards shows some failures - Progress tile groups failures by error code, some known error code, reason and next steps to resolve are listed: -
-| Error Code | Description/Reason | Next steps
-|:-|:-|:-|
-| AuthorizationFailed | Remediation Identity doesn't have permission to perform 'Extension delete' operation on VM(s), VMSS, Azure Arc Servers.| Grant 'VM Contributor' role to Remediation Identity on VM(s) and Grant 'Azure Arc ScVmm VM Contributor' role to Remediation Identity on VMSS and rerun removal phase.|
-| OperationNotAllowed | Resource(s) are in a de-allocated state or a Lock is applied on the resource(s) | Turn on failed resource(s) and/or Remove Lock and rerun removal phase |
-
-The utility collects error details in the Log Analytics workspace that was used during set up. Go to Log Analytics workspace > Select Logs and run following query:
+| `SubscriptionId` | ID of the subscription where you installed the utility. | Yes |
+| `ResourceGroupName` | Name of the resource group where you installed the utility.| Yes|
+| `StartAfterMinutes` | Time, in minutes, to wait before starting removal. | Yes (mutually exclusive with `-StartImmediately`)|
+| `StartImmediately` | Indicator to run the removal phase immediately. | Yes (mutually exclusive with `-StartAfterMinutes`) |
+| `EnableRemovalPhase` | Indicator to enable the removal phase. | Yes (mutually exclusive with `-DisableRemovalPhase`)|
+| `RemovalCondition` | Condition for removing the MMA extension: `CheckForAMAPresence` removes it only when the AMA extension is present; `SkipAMAPresenceCheck` removes it in all cases, whether the AMA extension is present or not. | No |
+| `DisableRemovalPhase` | Indicator of disabling the removal phase. | Yes (mutually exclusive with `-EnableRemovalPhase`)|
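For example, to pause the removal phase again after a validation run, a sketch with placeholder values:

``` PowerShell
Update-AzTSMMARemovalUtilityRemovalTrigger `
    -SubscriptionId <HostingSubId> `
    -ResourceGroupName <HostingRGName> `
    -DisableRemovalPhase
```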
+
+Here are known issues with removal:
+
+- Removal of the MMA in a virtual machine scale set where the orchestration mode is `Uniform` depends on its upgrade policy. We recommend that you manually upgrade the instance if the policy is set to `Manual`.
+- If you get the following error message, rerun the installation command with the same parameter values:
+
+ `The deployment MMARemovalenvironmentsetup-20233029T103026 failed with error(s). Showing 1 out of 1 error(s). Status Message: (Code:BadRequest) - We observed intermittent issue with App service deployment.`
+
+ The command should proceed without any error in the next attempt.
+- If the progress tile for extension removal shows failures on monitoring dashboards, use the following information to resolve them:
+
+ | Error code | Description/reason | Next steps|
+ |:-|:-|:-|
+ | `AuthorizationFailed` | The remediation identity doesn't have permission to perform an extension deletion operation on VMs, virtual machine scale sets, or Azure Arc servers.| Grant the **VM Contributor** role to the remediation identity on VMs. Grant the **Azure Arc ScVmm VM Contributor** role to the remediation identity on virtual machine scale sets. Then rerun the removal phase.|
+ | `OperationNotAllowed` | Resources are in a deallocated state, or a lock is applied on the resources. | Turn on failed resources and/or remove the lock, and then rerun the removal phase. |
+
+The utility collects error details in the Log Analytics workspace that you used during setup. Go to the Log Analytics workspace, select **Logs**, and then run the following query:
``` KQL
let timeago = timespan(7d);
InventoryProcessingStatus_CL
| where TimeGenerated > ago(timeago)
| project ResourceId, ProcessingStatus_s, ProcessErrorDetails_s
```
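A variant of this query (assuming the same `InventoryProcessingStatus_CL` custom table and columns) that groups results by status can help spot the dominant error:

``` KQL
let timeago = timespan(7d);
InventoryProcessingStatus_CL
| where TimeGenerated > ago(timeago)
| summarize AffectedResources = count() by ProcessingStatus_s
| order by AffectedResources desc
```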
-## [CleanUp](#tab/CleanUp)
+### [Cleanup](#tab/cleanup)
-The utility creates resources that you should clean up once you have remove MMA from your infrastructure. Execute the following steps to clean up.
- 1. Go to the folder containing the deployment package and load the cleanup script
+The MMA Discovery and Removal Utility creates resources that you should clean up after you remove the MMA from your infrastructure. Complete the following steps to clean up:
- ``` PowerShell
- CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
- . ".\MMARemovalUtilityCleanUpScript.ps1"
-```
+1. Go to the folder that contains the deployment package and load the cleanup script:
-2. Run the cleanup script
+ ``` PowerShell
+ CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
+ . ".\MMARemovalUtilityCleanUpScript.ps1"
+ ```
-``` PowerShell
-Remove-AzTSMMARemovalUtilitySolutionResources `
- -SubscriptionId <HostingSubId> `
- -ResourceGroupName <HostingRGName> `
- [-DeleteResourceGroup `]
- -KeepInventoryAndProcessLogs
-```
+2. Run the cleanup script:
+
+ ``` PowerShell
+ Remove-AzTSMMARemovalUtilitySolutionResources `
+ -SubscriptionId <HostingSubId> `
+ -ResourceGroupName <HostingRGName> `
+ [-DeleteResourceGroup `]
+ -KeepInventoryAndProcessLogs
+ ```
-Parameters
+The script contains these parameters:
-|Param Name|Description|Required|
+|Parameter name|Description|Required|
|:-|:-|:-:|
-|SubscriptionId| Subscription ID that the Utility is deleting| Yes|
-|ResourceGroupName| ResourceGroup name, which is deleting| Yes|
-|DeleteResourceGroup| Boolean flag to delete entire resource group| Yes|
-|KeepInventoryAndProcessLogs| Boolean flag to exclude log analytics workspace and application insights. Can't be used with DeleteResourceGroup.| No|
+|`SubscriptionId`| ID of the subscription from which you're deleting the utility's resources.| Yes|
+|`ResourceGroupName`| Name of the resource group that contains the utility's resources to delete.| Yes|
+|`DeleteResourceGroup`| Boolean flag to delete an entire resource group.| Yes|
+|`KeepInventoryAndProcessLogs`| Boolean flag to exclude the Log Analytics workspace and Application Insights. You can't use it with `DeleteResourceGroup`.| No|
++
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
This section lists all supported platforms and frameworks.
* [Azure Virtual Machines and Azure Virtual Machine Scale Sets](./azure-vm-vmss-apps.md)
* [Azure App Service](./azure-web-apps.md)
* [Azure Functions](../../azure-functions/functions-monitoring.md)
-* [Azure Spring Apps](../../spring-apps/how-to-application-insights.md)
+* [Azure Spring Apps](../../spring-apps/enterprise/how-to-application-insights.md)
* [Azure Cloud Services](./azure-web-apps-net-core.md), including both web and worker roles

#### Logging frameworks
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
For more information, see [Monitoring Azure Functions with Azure Monitor Applica
## Azure Spring Apps
-For more information, see [Use Application Insights Java In-Process Agent in Azure Spring Apps](../../spring-apps/how-to-application-insights.md).
+For more information, see [Use Application Insights Java In-Process Agent in Azure Spring Apps](../../spring-apps/enterprise/how-to-application-insights.md).
## Containers
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
Autoscale supports the following services.
| Azure Stream Analytics | [Autoscale streaming units (preview)](../../stream-analytics/stream-analytics-autoscale.md) |
| Azure SignalR Service (Premium tier) | [Automatically scale units of an Azure SignalR service](../../azure-signalr/signalr-howto-scale-autoscale.md) |
| Azure Machine Learning workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
-| Azure Spring Apps | [Set up autoscale for applications](../../spring-apps/how-to-setup-autoscale.md) |
+| Azure Spring Apps | [Set up autoscale for applications](../../spring-apps/enterprise/how-to-setup-autoscale.md) |
| Azure Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) |
| Azure Service Bus | [Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md) |
| Azure Logic Apps - Integration service environment (ISE) | [Add ISE capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
azure-monitor Container Insights Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-custom-metrics.md
Container insights collects [custom metrics](../essentials/metrics-custom-overvi
- Pin performance charts in Azure portal dashboards.
- Take advantage of [metric alerts](../alerts/alerts-types.md#metric-alerts).
-> [!NOTE]
-> This article describes collection of custom metrics from Kubernetes clusters. You can also collect Prometheus metrics as described in [Collect Prometheus metrics with Container insights](container-insights-prometheus.md).
+> [!IMPORTANT]
+> These metrics will no longer be collected starting May 31, 2024 as described in [Container insights recommended alerts (custom metrics) (preview) retirement moving up to 31 May 2024](https://azure.microsoft.com/updates/container-insights-recommended-alerts-custom-metrics-preview-retirement-moving-up-to-31-may-2024). See [Enable Prometheus and Grafana](kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) to enable collection of Prometheus metrics.
## Use custom metrics

Custom metrics collected by Container insights can be accessed with the same methods as custom metrics collected from other data sources, including [metrics explorer](../essentials/metrics-getting-started.md) and [metrics alerts](../alerts/alerts-types.md#metric-alerts).
azure-monitor Container Insights Data Collection Dcr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-data-collection-dcr.md
Title: Configure Container insights data collection using data collection rule
description: Describes how you can configure cost optimization and other data collection for Container insights using a data collection rule.
+ Last updated 12/19/2023
resources
## Next steps
-- See [Configure data collection in Container insights using ConfigMap](container-insights-data-collection-configmap.md) to configure data collection using ConfigMap instead of the DCR.
+- See [Configure data collection in Container insights using ConfigMap](container-insights-data-collection-configmap.md) to configure data collection using ConfigMap instead of the DCR.
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
The **event anomaly** analyzer groups similar events together for easier analysi
### Container optimizer

The **container optimizer** analyzer shows containers with excessive cpu and memory limits and requests. Each tile can represent multiple containers with the same spec. For example, if a deployment creates 100 identical pods each with a container C1 and C2, then there will be a single tile for all C1 containers and a single tile for all C2 containers. Containers with set limits and requests are color-coded in a gradient from green to red.
+> [!IMPORTANT]
+> This view doesn't include containers in the **kube-system** namespace and doesn't support Windows Server nodes.
+>
+ The number on each tile represents how far the container limits/requests are from the optimal/suggested value. The closer the number is to 0 the better it is. Each tile has a color to indicate the following:
- green: well set limits and requests
azure-monitor Kubernetes Monitoring Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-disable.md
Title: Disable monitoring of your Kubernetes cluster
description: Describes how to remove Container insights and scraping of Prometheus metrics from your Kubernetes cluster. Last updated 01/23/2024-+ ms.devlang: azurecli
azure-monitor Kubernetes Monitoring Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md
Title: Enable monitoring for Azure Kubernetes Service (AKS) cluster
description: Learn how to enable Container insights and Managed Prometheus on an Azure Kubernetes Service (AKS) cluster. Last updated 11/14/2023-+
azure-monitor Prometheus Metrics Scrape Configuration Minimal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md
Following targets are **enabled/ON** by default - meaning you don't have to prov
- `nodeexporter` (`job=node`)
- `kubelet` (`job=kubelet`)
- `kube-state-metrics` (`job=kube-state-metrics`)
+- `controlplane-apiserver` (`job=controlplane-apiserver`)
+- `controlplane-etcd` (`job=controlplane-etcd`)
Following targets are available to scrape, but scraping isn't enabled (**disabled/OFF**) by default - meaning you don't have to provide any scrape job configuration for scraping these targets but they're disabled/OFF by default and you need to turn ON/enable scraping for these targets using [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under `default-scrape-settings-enabled` section

- `core-dns` (`job=kube-dns`)
- `kube-proxy` (`job=kube-proxy`)
- `api-server` (`job=kube-apiserver`)
+- `controlplane-cluster-autoscaler` (`job=controlplane-cluster-autoscaler`)
+- `controlplane-kube-scheduler` (`job=controlplane-kube-scheduler`)
+- `controlplane-kube-controller-manager` (`job=controlplane-kube-controller-manager`)
> [!NOTE]
> The default scrape frequency for all default targets and scrapes is `30 seconds`. You can override it per target using the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under `default-targets-scrape-interval-settings` section.
+> The control plane targets have a fixed scrape interval of `30 seconds` and cannot be overwritten.
> You can read more about four different configmaps used by metrics addon [here](prometheus-metrics-scrape-configuration.md)

## Configuration setting
The following metrics are allow-listed with `minimalingestionprofile=true` for d
- `node_time_seconds`
- `node_uname_info`
+**controlplane-apiserver**<br>
+- `apiserver_request_total`
+- `apiserver_cache_list_fetched_objects_total`
+- `apiserver_cache_list_returned_objects_total`
+- `apiserver_flowcontrol_demand_seats_average`
+- `apiserver_flowcontrol_current_limit_seats`
+- `apiserver_request_sli_duration_seconds_bucket`
+- `apiserver_request_sli_duration_seconds_count`
+- `apiserver_request_sli_duration_seconds_sum`
+- `process_start_time_seconds`
+- `apiserver_request_duration_seconds_bucket`
+- `apiserver_request_duration_seconds_count`
+- `apiserver_request_duration_seconds_sum`
+- `apiserver_storage_list_fetched_objects_total`
+- `apiserver_storage_list_returned_objects_total`
+- `apiserver_current_inflight_requests`
+
+**controlplane-etcd**<br>
+- `etcd_server_has_leader`
+- `rest_client_requests_total`
+- `etcd_mvcc_db_total_size_in_bytes`
+- `etcd_mvcc_db_total_size_in_use_in_bytes`
+- `etcd_server_slow_read_indexes_total`
+- `etcd_server_slow_apply_total`
+- `etcd_network_client_grpc_sent_bytes_total`
+- `etcd_server_heartbeat_send_failures_total`
+ ### Minimal ingestion for default OFF targets The following are metrics that are allow-listed with `minimalingestionprofile=true` for default OFF targets. These metrics are not collected by default as these targets are not scraped by default (due to being OFF by default). You can turn ON scraping for these targets using `default-scrape-settings-enabled.<target-name>=true`' using [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under `default-scrape-settings-enabled` section.
The following are metrics that are allow-listed with `minimalingestionprofile=tr
- `process_cpu_seconds_total`
- `go_goroutines`
+**controlplane-cluster-autoscaler**<br>
+- `rest_client_requests_total`
+- `cluster_autoscaler_last_activity`
+- `cluster_autoscaler_cluster_safe_to_autoscale`
+- `cluster_autoscaler_scale_down_in_cooldown`
+- `cluster_autoscaler_scaled_up_nodes_total`
+- `cluster_autoscaler_unneeded_nodes_count`
+- `cluster_autoscaler_unschedulable_pods_count`
+- `cluster_autoscaler_nodes_count`
+- `cloudprovider_azure_api_request_errors`
+- `cloudprovider_azure_api_request_duration_seconds_bucket`
+- `cloudprovider_azure_api_request_duration_seconds_count`
+
+**controlplane-kube-scheduler**<br>
+- `scheduler_pending_pods`
+- `scheduler_unschedulable_pods`
+- `scheduler_pod_scheduling_attempts`
+- `scheduler_queue_incoming_pods_total`
+- `scheduler_preemption_attempts_total`
+- `scheduler_preemption_victims`
+- `scheduler_scheduling_attempt_duration_seconds`
+- `scheduler_schedule_attempts_total`
+- `scheduler_pod_scheduling_duration_seconds`
+
+**controlplane-kube-controller-manager**<br>
+- `rest_client_request_duration_seconds`
+- `rest_client_requests_total`
+- `workqueue_depth`
+
## Next steps
- [Learn more about customizing Prometheus metric scraping in Container insights](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Scrape Default https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-default.md
Following targets are **enabled/ON** by default - meaning you don't have to prov
- `nodeexporter` (`job=node`)
- `kubelet` (`job=kubelet`)
- `kube-state-metrics` (`job=kube-state-metrics`)
+- `controlplane-apiserver` (`job=controlplane-apiserver`)
+- `controlplane-etcd` (`job=controlplane-etcd`)
## Metrics collected from default targets
The following metrics are collected by default from each default target. All oth
- `kube_resource_labels` (ex - kube_pod_labels, kube_deployment_labels) - `kube_resource_annotations` (ex - kube_pod_annotations, kube_deployment_annotations)
+ **controlplane-apiserver (job=controlplane-apiserver)**<br>
+ - `apiserver_request_total`
+ - `apiserver_cache_list_fetched_objects_total`
+ - `apiserver_cache_list_returned_objects_total`
+ - `apiserver_flowcontrol_demand_seats_average`
+ - `apiserver_flowcontrol_current_limit_seats`
+ - `apiserver_request_sli_duration_seconds_bucket`
+ - `apiserver_request_sli_duration_seconds_count`
+ - `apiserver_request_sli_duration_seconds_sum`
+ - `process_start_time_seconds`
+ - `apiserver_request_duration_seconds_bucket`
+ - `apiserver_request_duration_seconds_count`
+ - `apiserver_request_duration_seconds_sum`
+ - `apiserver_storage_list_fetched_objects_total`
+ - `apiserver_storage_list_returned_objects_total`
+ - `apiserver_current_inflight_requests`
+
+ **controlplane-etcd (job=controlplane-etcd)**<br>
+ - `etcd_server_has_leader`
+ - `rest_client_requests_total`
+ - `etcd_mvcc_db_total_size_in_bytes`
+ - `etcd_mvcc_db_total_size_in_use_in_bytes`
+ - `etcd_server_slow_read_indexes_total`
+ - `etcd_server_slow_apply_total`
+ - `etcd_network_client_grpc_sent_bytes_total`
+ - `etcd_server_heartbeat_send_failures_total`
+ ## Default targets scraped for Windows The following Windows targets are configured for scraping, but scraping is **disabled/OFF** by default. You don't have to provide any scrape job configuration for these targets, but you do need to turn scraping ON for them in the `default-scrape-settings-enabled` section of [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap)
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
If the data export rule includes an unsupported table, the configuration will su
| AACAudit | | | AACHttpRequest | | | AADB2CRequestLogs | |
+| AADCustomSecurityAttributeAuditLogs | |
| AADDomainServicesAccountLogon | | | AADDomainServicesAccountManagement | | | AADDomainServicesDirectoryServiceAccess | |
If the data export rule includes an unsupported table, the configuration will su
| ACSBillingUsage | | | ACSCallAutomationIncomingOperations | | | ACSCallAutomationMediaSummary | |
+| ACSCallClientMediaStatsTimeSeries | |
+| ACSCallClientOperations | |
+| ACSCallClosedCaptionsSummary | |
| ACSCallDiagnostics | | | ACSCallRecordingIncomingOperations | | | ACSCallRecordingSummary | |
If the data export rule includes an unsupported table, the configuration will su
| ACSEmailSendMailOperational | | | ACSEmailStatusUpdateOperational | | | ACSEmailUserEngagementOperational | |
+| ACSJobRouterIncomingOperations | |
| ACSNetworkTraversalDiagnostics | | | ACSNetworkTraversalIncomingOperations | | | ACSRoomsIncomingOperations | |
If the data export rule includes an unsupported table, the configuration will su
| AegDataPlaneRequests | | | AegDeliveryFailureLogs | | | AegPublishFailureLogs | |
+| AEWAssignmentBlobLogs | |
| AEWAuditLogs | | | AEWComputePipelinesLogs | |
+| AFSAuditLogs | |
+| AGCAccessLogs | |
| AgriFoodApplicationAuditLogs | | | AgriFoodFarmManagementLogs | | | AgriFoodFarmOperationLogs | |
If the data export rule includes an unsupported table, the configuration will su
| AgriFoodSensorManagementLogs | | | AgriFoodWeatherLogs | | | AGSGrafanaLoginEvents | |
+| AGWAccessLogs | |
+| AGWFirewallLogs | |
+| AGWPerformanceLogs | |
| AHDSDicomAuditLogs | | | AHDSDicomDiagnosticLogs | | | AHDSMedTechDiagnosticLogs | |
If the data export rule includes an unsupported table, the configuration will su
| AMSStreamingEndpointRequests | | | ANFFileAccess | | | Anomalies | |
+| AOIDatabaseQuery | |
+| AOIDigestion | |
+| AOIStorage | |
| ApiManagementGatewayLogs | | | AppAvailabilityResults | | | AppBrowserTimings | |
If the data export rule includes an unsupported table, the configuration will su
| AppServiceAntivirusScanAuditLogs | | | AppServiceAppLogs | | | AppServiceAuditLogs | |
+| AppServiceAuthenticationLogs | |
| AppServiceConsoleLogs | | | AppServiceEnvironmentPlatformLogs | | | AppServiceFileAuditLogs | |
If the data export rule includes an unsupported table, the configuration will su
| AppServiceServerlessSecurityPluginData | | | AppSystemEvents | | | AppTraces | |
+| ArcK8sAudit | |
+| ArcK8sAuditAdmin | |
+| ArcK8sControlPlane | |
| ASCAuditLogs | | | ASCDeviceEvents | | | ASimAuditEventLogs | | | ASimAuthenticationEventLogs | |
+| ASimDhcpEventLogs | |
| ASimDnsActivityLogs | |
-| ASimNetworkSessionLogs | |
+| ASimFileEventLogs | |
| ASimNetworkSessionLogs | | | ASimProcessEventLogs | |
+| ASimRegistryEventLogs | |
+| ASimUserManagementActivityLogs | |
| ASimWebSessionLogs | | | ASRJobs | | | ASRReplicatedItems | |
If the data export rule includes an unsupported table, the configuration will su
| AuditLogs | | | AutoscaleEvaluationsLog | | | AutoscaleScaleActionsLog | |
+| AVNMConnectivityConfigurationChange | |
+| AVNMIPAMPoolAllocationChange | |
| AVNMNetworkGroupMembershipChange | |
+| AVNMRuleCollectionChange | |
| AVSSyslog | | | AWSCloudTrail | | | AWSCloudWatch | |
If the data export rule includes an unsupported table, the configuration will su
| AZMSOperationalLogs | | | AZMSRunTimeAuditLogs | | | AZMSVnetConnectionEvents | |
+| AzureActivity | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. |
| AzureAssessmentRecommendation | | | AzureAttestationDiagnostics | |
+| AzureBackupOperations | |
| AzureDevOpsAuditing | |
+| AzureDiagnostics | |
| AzureLoadTestingOperation | |
+| AzureMetricsV2 | |
| BehaviorAnalytics | | | CassandraAudit | | | CassandraLogs | |
If the data export rule includes an unsupported table, the configuration will su
| ConfigurationData | Partial support. Some of the data is ingested through internal services that aren't supported in export. Currently, this portion is missing in export. | | ContainerAppConsoleLogs | | | ContainerAppSystemLogs | |
+| ContainerEvent | |
| ContainerImageInventory | |
+| ContainerInstanceLog | |
| ContainerInventory | | | ContainerLog | | | ContainerLogV2 | |
If the data export rule includes an unsupported table, the configuration will su
| DatabricksUnityCatalog | | | DatabricksWebTerminal | | | DatabricksWorkspace | |
+| DatabricksWorkspaceLogs | |
| DataTransferOperations | |
+| DataverseActivity | |
+| DCRLogErrors | |
+| DCRLogTroubleshooting | |
+| DevCenterBillingEventLogs | |
| DevCenterDiagnosticLogs | |
+| DevCenterResourceOperationLogs | |
| DeviceEvents | | | DeviceFileCertificateInfo | | | DeviceFileEvents | |
If the data export rule includes an unsupported table, the configuration will su
| DeviceTvmSoftwareVulnerabilitiesKB | | | DnsEvents | | | DnsInventory | |
+| DNSQueryLogs | |
| DSMAzureBlobStorageLogs | | | DSMDataClassificationLogs | | | DSMDataLabelingLogs | |
-| DynamicEventCollection | |
| Dynamics365Activity | | | DynamicSummary | |
+| EGNFailedMqttConnections | |
+| EGNFailedMqttPublishedMessages | |
+| EGNFailedMqttSubscriptions | |
+| EGNMqttDisconnections | |
+| EGNSuccessfulMqttConnections | |
| EmailAttachmentInfo | | | EmailEvents | | | EmailPostDeliveryEvents | | | EmailUrlInfo | | | EnrichedMicrosoft365AuditLogs | |
+| ETWEvent | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. |
| Event | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. | | ExchangeAssessmentRecommendation | | | ExchangeOnlineAssessmentRecommendation | | | FailedIngestion | | | FunctionAppLogs | | | GCPAuditLogs | |
+| GoogleCloudSCC | |
| HDInsightAmbariClusterAlerts | | | HDInsightAmbariSystemMetrics | | | HDInsightGatewayAuditLogs | |
If the data export rule includes an unsupported table, the configuration will su
| KubePVInventory | | | KubeServices | | | LAQueryLogs | |
+| LASummaryLogs | |
+| LinuxAuditLog | |
| LogicAppWorkflowRuntime | | | McasShadowItReporting | | | MCCEventLogs | |
If the data export rule includes an unsupported table, the configuration will su
| MicrosoftAzureBastionAuditLogs | | | MicrosoftDataShareReceivedSnapshotLog | | | MicrosoftDataShareSentSnapshotLog | |
+| MicrosoftDataShareShareLog | |
| MicrosoftGraphActivityLogs | | | MicrosoftHealthcareApisAuditLogs | | | MicrosoftPurviewInformationProtection | |
+| MNFDeviceUpdates | |
+| MNFSystemStateMessageUpdates | |
+| NCBMBreakGlassAuditLogs | |
+| NCBMSecurityDefenderLogs | |
+| NCBMSecurityLogs | |
+| NCBMSystemLogs | |
+| NCCKubernetesLogs | |
+| NCCVMOrchestrationLogs | |
+| NCSStorageAlerts | |
+| NCSStorageLogs | |
| NetworkAccessTraffic | |
+| NetworkMonitoring | |
+| NGXOperationLogs | |
| NSPAccessLogs | | | NTAIpDetails | | | NTANetAnalytics | |
If the data export rule includes an unsupported table, the configuration will su
| PowerBIDatasetsTenant | | | PowerBIDatasetsWorkspace | | | PowerBIReportUsageWorkspace | |
+| PowerPlatformAdminActivity | |
| PowerPlatformConnectorActivity | | | PowerPlatformDlpActivity | | | ProjectActivity | |
If the data export rule includes an unsupported table, the configuration will su
| PurviewScanStatusLogs | | | PurviewSecurityLogs | | | REDConnectionEvents | |
+| RemoteNetworkHealthLogs | |
| ResourceManagementPublicAccessLogs | | | SCCMAssessmentRecommendation | | | SCOMAssessmentRecommendation | | | SecureScoreControls | | | SecureScores | | | SecurityAlert | |
+| SecurityAttackPathData | |
| SecurityBaseline | | | SecurityBaselineSummary | | | SecurityDetection | |
If the data export rule includes an unsupported table, the configuration will su
| SecurityRegulatoryCompliance | | | SentinelAudit | | | SentinelHealth | |
+| ServiceFabricOperationalEvent | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. |
+| ServiceFabricReliableActorEvent | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. |
+| ServiceFabricReliableServiceEvent | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. |
+| SfBAssessmentRecommendation | |
| SharePointOnlineAssessmentRecommendation | | | SignalRServiceDiagnosticLogs | | | SigninLogs | |
If the data export rule includes an unsupported table, the configuration will su
| Usage | | | UserAccessAnalytics | | | UserPeerAnalytics | |
+| VCoreMongoRequests | |
| VIAudit | | | VIIndexing | |
+| VMConnection | Partial support. Some of the data is ingested through internal services that aren't supported in export. Currently, this portion is missing in export. |
| W3CIISLog | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. | | WaaSDeploymentStatus | | | WaaSInsiderStatus | |
If the data export rule includes an unsupported table, the configuration will su
| WebPubSubConnectivity | | | WebPubSubHttpRequest | | | WebPubSubMessaging | |
+| Windows365AuditLogs | |
| WindowsClientAssessmentRecommendation | | | WindowsEvent | | | WindowsFirewall | |
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-monitor Workbooks Interactive Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-interactive-reports.md
Title: Create interactive reports with Azure Monitor Workbooks description: This article explains how to create interactive reports in Azure Workbooks. + Last updated 01/08/2024
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
-+ Last updated 10/02/2023
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Standard storage with cool access is supported for the following regions:
* East Asia * East US 2 * France Central
+* Germany West Central
* North Central US * North Europe * Southeast Asia * Switzerland North * Switzerland West
+* Sweden Central
* UAE North
+* UK South
* US Gov Arizona * US Gov Texas * US Gov Virginia
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-resource-manager Bicep Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-string.md
Title: Bicep functions - string
description: Describes the functions to use in a Bicep file to work with strings. Previously updated : 07/07/2023 Last updated : 01/31/2024 # String functions for Bicep
The output from the preceding example with the default values is:
`first(arg1)`
-Returns the first character of the string, or first element of the array.
+Returns the first character of the string, or the first element of the array. If given an empty string, the function returns an empty string. If given an empty array, it returns `null`.
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
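A short sketch of this behavior (the output names below are illustrative, not from the article):

```bicep
// Illustrative outputs showing first() edge cases (names are hypothetical)
output firstChar string = first('one')   // 'o'
output emptyCase string = first('')      // '' (empty string in, empty string out)
output firstItem int = first([1, 2, 3])  // 1
// first([]) evaluates to null for an empty array
```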
azure-resource-manager Msbuild Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/msbuild-bicep-file.md
Title: Use MSBuild to convert Bicep to JSON description: Use MSBuild to convert a Bicep file to Azure Resource Manager template (ARM template) JSON. Previously updated : 09/26/2022 Last updated : 01/31/2024
# Quickstart: Use MSBuild to convert Bicep to JSON
-This article describes how to use MSBuild to convert a Bicep file to Azure Resource Manager template (ARM template) JSON. The examples use MSBuild from the command line with C# project files that convert Bicep to JSON. The project files are examples that can be used in an MSBuild continuous integration (CI) pipeline.
+Learn how to use [MSBuild](/visualstudio/msbuild/msbuild) to convert Bicep files to Azure Resource Manager JSON templates (ARM templates). With NuGet package versions 0.23.x or later, MSBuild can also convert [Bicep parameter files](./parameter-files.md?tabs=Bicep) to [Azure Resource Manager parameter files](../templates/parameter-files.md). The examples use MSBuild from the command line with C# project files, which you can adapt for an MSBuild continuous integration (CI) pipeline.
## Prerequisites
-You'll need the latest versions of the following software:
+You need the latest versions of the following software:
-- [Visual Studio](/visualstudio/install/install-visual-studio). The free community version will install .NET 6.0, .NET Core 3.1, .NET SDK, MSBuild, .NET Framework 4.8, NuGet package manager, and C# compiler. From the installer, select **Workloads** > **.NET desktop development**.-- [Visual Studio Code](https://code.visualstudio.com/) with the extensions for [Bicep](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) and [Azure Resource Manager (ARM) Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
+- [Visual Studio](/visualstudio/install/install-visual-studio) or [Visual Studio Code](./install.md#visual-studio-code-and-bicep-extension). The free Visual Studio Community edition installs .NET 6.0, .NET Core 3.1, .NET SDK, MSBuild, .NET Framework 4.8, NuGet package manager, and the C# compiler. From the installer, select **Workloads** > **.NET desktop development**. With Visual Studio Code, you also need the extensions for [Bicep](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) and [Azure Resource Manager (ARM) Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
- [PowerShell](/powershell/scripting/install/installing-powershell) or a command-line shell for your operating system.
-## MSBuild tasks and CLI packages
+## MSBuild tasks and Bicep packages
-If your existing continuous integration (CI) pipeline relies on [MSBuild](/visualstudio/msbuild/msbuild), you can use MSBuild tasks and CLI packages to convert Bicep files into ARM template JSON.
-
-The functionality relies on the following NuGet packages. The latest NuGet package versions match the latest Bicep CLI version.
+From your continuous integration (CI) pipeline, you can use MSBuild tasks and CLI packages to convert Bicep files and Bicep parameter files into JSON. The functionality relies on the following NuGet packages:
| Package Name | Description | | - |- |
-| [Azure.Bicep.MSBuild](https://www.nuget.org/packages/Azure.Bicep.MSBuild) | Cross-platform MSBuild task that invokes the Bicep CLI and compiles Bicep files into ARM template JSON. |
+| [Azure.Bicep.MSBuild](https://www.nuget.org/packages/Azure.Bicep.MSBuild) | Cross-platform MSBuild task that invokes the Bicep CLI and compiles Bicep files into ARM JSON templates. |
| [Azure.Bicep.CommandLine.win-x64](https://www.nuget.org/packages/Azure.Bicep.CommandLine.win-x64) | Bicep CLI for Windows. | | [Azure.Bicep.CommandLine.linux-x64](https://www.nuget.org/packages/Azure.Bicep.CommandLine.linux-x64) | Bicep CLI for Linux. | | [Azure.Bicep.CommandLine.osx-x64](https://www.nuget.org/packages/Azure.Bicep.CommandLine.osx-x64) | Bicep CLI for macOS. |
-### Azure.Bicep.MSBuild package
-
-When referenced in a project file's `PackageReference` the `Azure.Bicep.MSBuild` package imports the `Bicep` task that's used to invoke the Bicep CLI. The package converts its output into MSBuild errors and the `BicepCompile` target that's used to simplify the `Bicep` task's usage. By default the `BicepCompile` runs after the `Build` target and compiles all `@(Bicep)` items and places the output in `$(OutputPath)` with the same file name and the _.json_ extension.
-
-The following example compiles _one.bicep_ and _two.bicep_ files in the same directory as the project file and places the compiled _one.json_ and _two.json_ in the `$(OutputPath)` directory.
+You can find the latest version of each package on its NuGet page.
-```xml
-<ItemGroup>
- <Bicep Include="one.bicep" />
- <Bicep Include="two.bicep" />
-</ItemGroup>
-```
-You can override the output path per file using the `OutputFile` metadata on `Bicep` items. The following example will recursively find all _main.bicep_ files and place the compiled _.json_ files in `$(OutputPath)` under a subdirectory with the same name in `$(OutputPath)`:
+The latest NuGet package versions match the latest [Bicep CLI](./bicep-cli.md) version.
-```xml
-<ItemGroup>
- <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
-</ItemGroup>
-```
+- **Azure.Bicep.MSBuild**
-More customizations can be performed by setting one of the following properties in your project:
+ When included in a project file as a `PackageReference`, the `Azure.Bicep.MSBuild` package imports the `Bicep` task used to invoke the Bicep CLI.
+
+ ```xml
+ <ItemGroup>
+ <PackageReference Include="Azure.Bicep.MSBuild" Version="0.24.24" />
+ ...
+ </ItemGroup>
-| Property Name | Default Value | Description |
-| - |- | - |
-| `BicepCompileAfterTargets` | `Build` | Used as `AfterTargets` value for the `BicepCompile` target. Change the value to override the scheduling of the `BicepCompile` target in your project. |
-| `BicepCompileDependsOn` | None | Used as `DependsOnTargets` value for the `BicepCompile` target. This property can be set to targets that you want `BicepCompile` target to depend on. |
-| `BicepCompileBeforeTargets` | None | Used as `BeforeTargets` value for the `BicepCompile` target. |
-| `BicepOutputPath` | `$(OutputPath)` | Set this property to override the default output path for the compiled ARM template. `OutputFile` metadata on `Bicep` items takes precedence over this value. |
+ ```
+
+ The package transforms the Bicep CLI output into MSBuild errors and imports the `BicepCompile` target to streamline use of the `Bicep` task. By default, `BicepCompile` runs after the `Build` target, compiling all `@(Bicep)` and `@(BicepParam)` items. It then places the output in `$(OutputPath)` with the same file name and a _.json_ extension.
-The `Azure.Bicep.MSBuild` requires the `BicepPath` property to be set either in order to function. You may set it by referencing the appropriate `Azure.Bicep.CommandLine.*` package for your operating system or manually by installing the Bicep CLI and setting the `BicepPath` environment variable or MSBuild property.
+ The following example shows the project file settings for compiling the _main.bicep_ and _main.bicepparam_ files in the same directory as the project file, placing the compiled _main.json_ and _main.parameters.json_ in the `$(OutputPath)` directory.
-### Azure.Bicep.CommandLine packages
+ ```xml
+ <ItemGroup>
+ <Bicep Include="main.bicep" />
+ <BicepParam Include="main.bicepparam" />
+ </ItemGroup>
+ ```
-The `Azure.Bicep.CommandLine.*` packages are available for Windows, Linux, and macOS. When referenced in a project file via a `PackageReference`, the `Azure.Bicep.CommandLine.*` packages set the `BicepPath` property to the full path of the Bicep executable for the platform. The reference to this package may be omitted if Bicep CLI is installed through other means and the `BicepPath` environment variable or MSBuild property are set accordingly.
+ You can override the output path per file using the `OutputFile` metadata on `Bicep` items. The following example recursively finds all _main.bicep_ files and places the compiled _.json_ files in `$(OutputPath)` under a subdirectory with the same name in `$(OutputPath)`:
-### SDK-based examples
+ ```xml
+ <ItemGroup>
+ <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
+ <BicepParam Include="**\main.bicepparam" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).parameters.json" />
+ </ItemGroup>
+ ```
-The following examples contain a default Console App SDK-based C# project file that was modified to convert Bicep files into ARM templates. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+ More customizations can be made by setting one of the following properties in the `PropertyGroup` of your project:
-The .NET Core 3.1 and .NET 6 examples are similar. But .NET 6 uses a different format for the _Program.cs_ file. For more information, see [.NET 6 C# console app template generates top-level statements](/dotnet/core/tutorials/top-level-templates).
+ | Property Name | Default Value | Description |
+ | - |- | - |
+ | `BicepCompileAfterTargets` | `Build` | Used as `AfterTargets` value for the `BicepCompile` target. Change the value to override the scheduling of the `BicepCompile` target in your project. |
+ | `BicepCompileDependsOn` | None | Used as `DependsOnTargets` value for the `BicepCompile` target. This property can be set to targets that you want `BicepCompile` target to depend on. |
+ | `BicepCompileBeforeTargets` | None | Used as `BeforeTargets` value for the `BicepCompile` target. |
+ | `BicepOutputPath` | `$(OutputPath)` | Set this property to override the default output path for the compiled ARM template. `OutputFile` metadata on `Bicep` items takes precedence over this value. |
-### .NET 6
+ For `Azure.Bicep.MSBuild` to operate, the `BicepPath` property must be set. See the next bullet for how to configure `BicepPath`.
-In this example, the `RootNamespace` property contains a placeholder value. When you create a project file, the value matches your project's name.
+- **Azure.Bicep.CommandLine**
-```xml
-<Project Sdk="Microsoft.NET.Sdk">
- <PropertyGroup>
- <OutputType>Exe</OutputType>
- <TargetFramework>net6.0</TargetFramework>
- <RootNamespace>net6-sdk-project-name</RootNamespace>
- <ImplicitUsings>enable</ImplicitUsings>
- <Nullable>enable</Nullable>
- </PropertyGroup>
+ The `Azure.Bicep.CommandLine.*` packages are available for Windows, Linux, and macOS. The following example references the package for Windows.
+ ```xml
<ItemGroup> <PackageReference Include="Azure.Bicep.CommandLine.win-x64" Version="__LATEST_VERSION__" />
- <PackageReference Include="Azure.Bicep.MSBuild" Version="__LATEST_VERSION__" />
- </ItemGroup>
+ ...
+ </ItemGroup>
+ ```
- <ItemGroup>
- <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
- </ItemGroup>
-</Project>
-```
+ When referenced in a project file, the `Azure.Bicep.CommandLine.*` packages automatically set the `BicepPath` property to the full path of the Bicep executable for the platform. You can omit the reference to this package if the Bicep CLI is installed through other means. In that case, instead of referencing an `Azure.Bicep.CommandLine` package, either configure an environment variable called `BicepPath` or add `BicepPath` to the `PropertyGroup`, for example on Windows:
+
+ ```xml
+ <PropertyGroup>
+ <BicepPath>c:\users\john\.Azure\bin\bicep.exe</BicepPath>
+ ...
+ </PropertyGroup>
+ ```
-### .NET Core 3.1
+ On Linux:
-```xml
-<Project Sdk="Microsoft.NET.Sdk">
+ ```xml
<PropertyGroup>
- <OutputType>Exe</OutputType>
- <TargetFramework>netcoreapp3.1</TargetFramework>
+ <BicepPath>/usr/local/bin/bicep</BicepPath>
+ ...
</PropertyGroup>
+ ```
- <ItemGroup>
- <PackageReference Include="Azure.Bicep.CommandLine.win-x64" Version="__LATEST_VERSION__" />
- <PackageReference Include="Azure.Bicep.MSBuild" Version="__LATEST_VERSION__" />
- </ItemGroup>
+### Project file examples
- <ItemGroup>
- <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
- </ItemGroup>
-</Project>
-```
+The following examples show how to configure C# console application project files to convert Bicep files and Bicep parameter files to JSON. In each example, replace `__LATEST_VERSION__` with the latest version of the [Bicep NuGet packages](https://www.nuget.org/packages/Azure.Bicep.Core/). See [MSBuild tasks and Bicep packages](#msbuild-tasks-and-bicep-packages) to find the latest version.
-### NoTargets SDK
+#### SDK-based example
-The following example contains a project that converts Bicep files into ARM templates using [Microsoft.Build.NoTargets](https://www.nuget.org/packages/Microsoft.Build.NoTargets). This SDK allows creation of standalone projects that compile only Bicep files. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+The .NET Core 3.1 and .NET 6 examples are similar. But .NET 6 uses a different format for the _Program.cs_ file. For more information, see [.NET 6 C# console app template generates top-level statements](/dotnet/core/tutorials/top-level-templates).
-For [Microsoft.Build.NoTargets](/dotnet/core/project-sdk/overview#project-files), specify a version like `Microsoft.Build.NoTargets/3.5.6`.
+<a id="net-6"></a>
+- **.NET 6**
+
+ ```xml
+ <Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <OutputType>Exe</OutputType>
+ <TargetFramework>net6.0</TargetFramework>
+ <RootNamespace>net6-sdk-project-name</RootNamespace>
+ <ImplicitUsings>enable</ImplicitUsings>
+ <Nullable>enable</Nullable>
+ </PropertyGroup>
+
+ <ItemGroup>
+ <PackageReference Include="Azure.Bicep.CommandLine.win-x64" Version="__LATEST_VERSION__" />
+ <PackageReference Include="Azure.Bicep.MSBuild" Version="__LATEST_VERSION__" />
+ </ItemGroup>
+
+ <ItemGroup>
+ <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
+ <BicepParam Include="**\main.bicepparam" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).parameters.json" />
+ </ItemGroup>
+ </Project>
+ ```
+
+ The `RootNamespace` property contains a placeholder value. When you create a project file, the value matches your project's name.
+
+<a id="net-core-31"></a>
+- **.NET Core 3.1**
+
+ ```xml
+ <Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <OutputType>Exe</OutputType>
+ <TargetFramework>netcoreapp3.1</TargetFramework>
+ </PropertyGroup>
+
+ <ItemGroup>
+ <PackageReference Include="Azure.Bicep.CommandLine.win-x64" Version="__LATEST_VERSION__" />
+ <PackageReference Include="Azure.Bicep.MSBuild" Version="__LATEST_VERSION__" />
+ </ItemGroup>
+
+ <ItemGroup>
+ <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
+ <BicepParam Include="**\main.bicepparam" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).parameters.json" />
+ </ItemGroup>
+ </Project>
+ ```
+
+<a id="notargets-sdk"></a>
+#### NoTargets SDK example
+
+The [Microsoft.Build.NoTargets](https://github.com/microsoft/MSBuildSdks/blob/main/src/NoTargets/README.md) MSBuild project SDK lets project tree owners define projects that don't compile an assembly. This makes it possible to create standalone projects that compile only Bicep files.
```xml
-<Project Sdk="Microsoft.Build.NoTargets/__LATEST_VERSION__">
+<Project Sdk="Microsoft.Build.NoTargets/__LATEST_MICROSOFT.BUILD.NOTARGETS.VERSION__">
<PropertyGroup> <TargetFramework>net48</TargetFramework> </PropertyGroup>
For [Microsoft.Build.NoTargets](/dotnet/core/project-sdk/overview#project-files)
<ItemGroup> <Bicep Include="main.bicep"/>
+ <BicepParam Include="main.bicepparam"/>
</ItemGroup> </Project> ```
-### Classic framework
+The latest `Microsoft.Build.NoTargets` version can be found at [https://www.nuget.org/packages/Microsoft.Build.NoTargets](https://www.nuget.org/packages/Microsoft.Build.NoTargets). For [Microsoft.Build.NoTargets](/dotnet/core/project-sdk/overview#project-files), specify a version like `Microsoft.Build.NoTargets/3.7.56`.
+
+```xml
+<Project Sdk="Microsoft.Build.NoTargets/3.7.56">
+ ...
+</Project>
+```
-The following example converts Bicep to JSON inside a classic project file that's not SDK-based. Only use the classic example if the previous examples don't work for you. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+<a id="classic-framework"></a>
+#### Classic framework example
-In this example, the `ProjectGuid`, `RootNamespace` and `AssemblyName` properties contain placeholder values. When you create a project file, a unique GUID is created, and the name values match your project's name.
+Use the classic example only if the previous examples don't work for you. In this example, the `ProjectGuid`, `RootNamespace` and `AssemblyName` properties contain placeholder values. When you create a project file, a unique GUID is created, and the name values match your project's name.
```xml <?xml version="1.0" encoding="utf-8"?>
In this example, the `ProjectGuid`, `RootNamespace` and `AssemblyName` propertie
<ItemGroup> <None Include="App.config" /> <Bicep Include="main.bicep" />
+ <BicepParam Include="main.bicepparam" />
</ItemGroup> <ItemGroup> <PackageReference Include="Azure.Bicep.CommandLine.win-x64">
In this example, the `ProjectGuid`, `RootNamespace` and `AssemblyName` propertie
## Convert Bicep to JSON
-The following examples show how MSBuild converts a Bicep file to JSON. Follow the instructions to create one of the project files for .NET, .NET Core 3.1, or Classic framework. Then continue to create the Bicep file and run MSBuild.
+These examples demonstrate how to convert a Bicep file and a Bicep parameter file to JSON by using MSBuild. Start by creating a project file for .NET, .NET Core 3.1, or the Classic framework. Then create the Bicep file and the Bicep parameter file before running MSBuild.
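For reference, a minimal pair of files like the ones created in this quickstart could look as follows. This is only a sketch; the parameter and output names are illustrative assumptions, not prescribed by the article.

```bicep
// main.bicep — hypothetical minimal template with one parameter
param location string = 'eastus'

output deployRegion string = location
```

```bicep
// main.bicepparam — supplies a value for main.bicep's location parameter
using './main.bicep'

param location = 'westus2'
```

Building a project that includes these files as `Bicep` and `BicepParam` items produces _main.json_ and _main.parameters.json_ in the output directory.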
+
+### Create project
# [.NET](#tab/dotnet) Build a project in .NET with the dotnet CLI.
-1. Open Visual Studio code and select **Terminal** > **New Terminal** to start a PowerShell session.
-1. Create a directory named _bicep-msbuild-demo_ and go to the directory. This example uses _C:\bicep-msbuild-demo_.
+1. Open Visual Studio Code and select **Terminal** > **New Terminal** to start a PowerShell session.
+1. Create a directory named _msBuildDemo_ and go to the directory. This example uses _C:\msBuildDemo_.
```powershell
- New-Item -Name .\bicep-msbuild-demo -ItemType Directory
- Set-Location -Path .\bicep-msbuild-demo
+ Set-Location -Path C:\
+ New-Item -Name .\msBuildDemo -ItemType Directory
+ Set-Location -Path .\msBuildDemo
``` 1. Run the `dotnet` command to create a new console with the .NET 6 framework.
Build a project in .NET with the dotnet CLI.
dotnet new console --framework net6.0 ```
- The project file uses the same name as your directory, _bicep-msbuild-demo.csproj_. For more information about how to create a console application from Visual Studio Code, see the [tutorial](/dotnet/core/tutorials/with-visual-studio-code).
+ The command creates a project file using the same name as your directory, _msBuildDemo.csproj_. For more information about how to create a console application from Visual Studio Code, see the [tutorial](/dotnet/core/tutorials/with-visual-studio-code).
-1. Replace the contents of _bicep-msbuild-demo.csproj_ with the [.NET 6](#net-6) or [NoTargets SDK](#notargets-sdk) examples.
-1. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+1. Open _msBuildDemo.csproj_ in an editor, replace the content with the [.NET 6](#net-6) or [NoTargets SDK](#notargets-sdk) example, and replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
1. Save the file. # [.NET Core 3.1](#tab/netcore31) Build a project in .NET Core 3.1 using the dotnet CLI.
-1. Open Visual Studio code and select **Terminal** > **New Terminal** to start a PowerShell session.
-1. Create a directory named _bicep-msbuild-demo_ and go to the directory. This example uses _C:\bicep-msbuild-demo_.
+1. Open Visual Studio Code and select **Terminal** > **New Terminal** to start a PowerShell session.
+1. Create a directory named _msBuildDemo_ and go to the directory. This example uses _C:\msBuildDemo_.
```powershell
- New-Item -Name .\bicep-msbuild-demo -ItemType Directory
- Set-Location -Path .\bicep-msbuild-demo
+ Set-Location -Path C:\
+ New-Item -Name .\msBuildDemo -ItemType Directory
+ Set-Location -Path .\msBuildDemo
```
-1. Run the `dotnet` command to create a new console with the .NET 6 framework.
+1. Run the `dotnet` command to create a new console with the .NET Core 3.1 framework.
```powershell dotnet new console --framework netcoreapp3.1 ```
- The project file is named the same as your directory, _bicep-msbuild-demo.csproj_. For more information about how to create a console application from Visual Studio Code, see the [tutorial](/dotnet/core/tutorials/with-visual-studio-code).
+ The project file is named the same as your directory, _msBuildDemo.csproj_. For more information about how to create a console application from Visual Studio Code, see the [tutorial](/dotnet/core/tutorials/with-visual-studio-code).
-1. Replace the contents of _bicep-msbuild-demo.csproj_ with the [.NET Core 3.1](#net-core-31) or [NoTargets SDK](#notargets-sdk) examples.
+1. Replace the contents of _msBuildDemo.csproj_ with the [.NET Core 3.1](#net-core-31) or [NoTargets SDK](#notargets-sdk) example.
1. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages. 1. Save the file.
To create the project file and dependencies, use Visual Studio.
1. Open Visual Studio. 1. Select **Create a new project**. 1. For the C# language, select **Console App (.NET Framework)** and select **Next**.
-1. Enter a project name. For this example, use _bicep-msbuild-demo_ for the project.
+1. Enter a project name. For this example, use _msBuildDemo_ for the project.
1. Select **Place solution and project in same directory**. 1. Select **.NET Framework 4.8**. 1. Select **Create**.
-If you know how to unload a project and reload a project, you can edit _bicep-msbuild-demo.csproj_ in Visual Studio.
+If you know how to unload a project and reload a project, you can edit _msBuildDemo.csproj_ in Visual Studio. Otherwise, edit the project file in Visual Studio Code.
-Otherwise, edit the project file in Visual Studio Code.
-
-1. Open Visual Studio Code and go to the _bicep-msbuild-demo_ directory.
-1. Replace _bicep-msbuild-demo.csproj_ with the [Classic framework](#classic-framework) code sample.
+1. Open Visual Studio Code and go to the _msBuildDemo_ directory.
+1. Replace _msBuildDemo.csproj_ with the [Classic framework](#classic-framework) code example.
1. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages. 1. Save the file.
Otherwise, edit the project file in Visual Studio Code.
### Create Bicep file
-You'll need a Bicep file that will be converted to JSON.
-
-1. Use Visual Studio Code and create a new file.
-1. Copy the following sample and save it as _main.bicep_ in the _C:\bicep-msbuild-demo_ directory.
-
-```bicep
-@allowed([
- 'Premium_LRS'
- 'Premium_ZRS'
- 'Standard_GRS'
- 'Standard_GZRS'
- 'Standard_LRS'
- 'Standard_RAGRS'
- 'Standard_RAGZRS'
- 'Standard_ZRS'
-])
-@description('Storage account type.')
-param storageAccountType string = 'Standard_LRS'
-
-@description('Location for all resources.')
-param location string = resourceGroup().location
-
-var storageAccountName = 'storage${uniqueString(resourceGroup().id)}'
-
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-05-01' = {
- name: storageAccountName
- location: location
- sku: {
- name: storageAccountType
- }
- kind: 'StorageV2'
-}
+You need a Bicep file and a Bicep parameter file to convert to JSON.
+
+1. Create a _main.bicep_ file in the same folder as the project file (for example, the _C:\msBuildDemo_ directory) with the following content:
+
+ ```bicep
+ @allowed([
+ 'Premium_LRS'
+ 'Premium_ZRS'
+ 'Standard_GRS'
+ 'Standard_GZRS'
+ 'Standard_LRS'
+ 'Standard_RAGRS'
+ 'Standard_RAGZRS'
+ 'Standard_ZRS'
+ ])
+ @description('Storage account type.')
+ param storageAccountType string = 'Standard_LRS'
+
+ @description('Location for all resources.')
+ param location string = resourceGroup().location
+
+   @description('Prefix for the storage account name.')
+   param prefix string
+
+   var storageAccountName = '${prefix}${uniqueString(resourceGroup().id)}'
+
+ resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
+ name: storageAccountName
+ location: location
+ sku: {
+ name: storageAccountType
+ }
+ kind: 'StorageV2'
+ }
+
+ output storageAccountNameOutput string = storageAccount.name
+ ```
+
+1. Create a _main.bicepparam_ file in the _C:\msBuildDemo_ directory with the following content:
+
+ ```bicep
+ using './main.bicep'
+
+ param prefix = '{prefix}'
+ ```
+
+ Replace `{prefix}` with a string value used as a prefix for the storage account name.
-output storageAccountNameOutput string = storageAccount.name
-```
### Run MSBuild
-Run MSBuild to convert the Bicep file to JSON.
+Run MSBuild to convert the Bicep file and the Bicep parameter file to JSON.
1. Open a Visual Studio Code terminal session.
-1. In the PowerShell session, go to the _C:\bicep-msbuild-demo_ directory.
+1. In the PowerShell session, go to the folder that contains the project file. For example, the _C:\msBuildDemo_ directory.
1. Run MSBuild. ```powershell
- MSBuild.exe -restore .\bicep-msbuild-demo.csproj
+ MSBuild.exe -restore .\msBuildDemo.csproj
``` The `restore` parameter creates dependencies needed to compile the Bicep file during the initial build. The parameter is optional after the initial build.
-1. Go to the output directory and open the _main.json_ file that should look like the sample.
+   To use the .NET CLI instead:
+
+   ```powershell
+   dotnet build .\msBuildDemo.csproj
+   ```
+
+   or, to restore dependencies only:
+
+   ```powershell
+   dotnet restore .\msBuildDemo.csproj
+   ```
+
+1. Go to the output directory and open the _main.json_ file, which should look like the following example.
MSBuild creates an output directory based on the SDK or framework version:
Run MSBuild to convert the Bicep file to JSON.
} ```
+1. The _main.parameters.json_ file should look like the following example:
+
+   ```json
+   {
+     "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+     "contentVersion": "1.0.0.0",
+     "parameters": {
+       "prefix": {
+         "value": "mystore"
+       }
+     }
+   }
+   ```
+ If you make changes or want to rerun the build, delete the output directory so new files can be created. ## Clean up resources
-When you're finished with the files, delete the directory. For this example, delete _C:\bicep-msbuild-demo_.
+When you're finished with the files, delete the directory. For this example, delete _C:\msBuildDemo_.
```powershell
-Remove-Item -Path "C:\bicep-msbuild-demo" -Recurse
+Remove-Item -Path "C:\msBuildDemo" -Recurse
``` ## Next steps
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers for compute services are:
| Resource provider namespace | Azure service | | | - |
-| Microsoft.AppPlatform | [Azure Spring Apps](../../spring-apps/overview.md) |
+| Microsoft.AppPlatform | [Azure Spring Apps](../../spring-apps/enterprise/overview.md) |
| Microsoft.AVS | [Azure VMware Solution](../../azure-vmware/index.yml) | | Microsoft.Batch | [Batch](../../batch/index.yml) | | Microsoft.ClassicCompute | Classic deployment model virtual machine |
The resource providers for compute services are:
| Microsoft.HanaOnAzure | [SAP HANA on Azure Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md) | | Microsoft.LabServices | [Azure Lab Services](../../lab-services/index.yml) | | Microsoft.Maintenance | [Azure Maintenance](../../virtual-machines/maintenance-configurations.md) |
-| Microsoft.Microservices4Spring | [Azure Spring Apps](../../spring-apps/overview.md) |
+| Microsoft.Microservices4Spring | [Azure Spring Apps](../../spring-apps/enterprise/overview.md) |
| Microsoft.Quantum | [Azure Quantum](https://azure.microsoft.com/services/quantum/) | | Microsoft.SerialConsole - [registered by default](#registration) | [Azure Serial Console for Windows](/troubleshoot/azure/virtual-machines/serial-console-windows) | | Microsoft.ServiceFabric | [Service Fabric](../../service-fabric/index.yml) |
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following limits apply to [Azure role-based access control (Azure RBAC)](../
## Azure Spring Apps limits
-To learn more about the limits for Azure Spring Apps, see [Quotas and service plans for Azure Spring Apps](../../spring-apps/quotas.md).
+To learn more about the limits for Azure Spring Apps, see [Quotas and service plans for Azure Spring Apps](../../spring-apps/enterprise/quotas.md).
## Azure Storage limits
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Before starting your move operation, review the [checklist](./move-resource-grou
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | storageaccounts | **Yes** | **Yes** | **Yes**<br/><br/> [Move an Azure Storage account to another region](../../storage/common/storage-account-move.md) |
+> | storageaccounts | **Yes** | **Yes** | [Move an Azure Storage account to another region](../../storage/common/storage-account-move.md) |
## Microsoft.StorageCache
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-resource-manager Template Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-string.md
Title: Template functions - string
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with strings. Previously updated : 05/22/2023 Last updated : 01/31/2024 # String functions for ARM templates
The output from the preceding example with the default values is:
`first(arg1)`
-Returns the first character of the string, or first element of the array.
+Returns the first character of the string, or first element of the array. If given an empty string, the function returns an empty string. If given an empty array, the function returns `null`.
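+For example, the following `outputs` section (an illustrative sketch, not part of the original article) shows the function's behavior:
+
+```json
+"outputs": {
+  "arrayOutput": {
+    "type": "int",
+    "value": "[first(createArray(1, 2, 3))]"
+  },
+  "stringOutput": {
+    "type": "string",
+    "value": "[first('One Two Three')]"
+  }
+}
+```
+
+`arrayOutput` evaluates to `1`, and `stringOutput` evaluates to `O`.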
In Bicep, use the [first](../bicep/bicep-functions-string.md#first) function.
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md
This tutorial continues on the chat room application introduced in [Create a chat room with SignalR Service](signalr-quickstart-dotnet-core.md). Complete that quickstart first to set up your chat room.
-In this tutorial, you can discover the process of creating your own authentication method and integrate it with the Microsoft Azure SignalR Service.
+In this tutorial, learn how to create and integrate your authentication method using Microsoft Azure SignalR Service.
-The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach lacks effectiveness in real-world, as it fails to prevent malicious users who might assume false identities from gaining access to sensitive data.
+The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach is ineffective in the real world because malicious users can use fake identities to access sensitive data.
-[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you can use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. After authenticating a GitHub account, the account information will be added as a cookie to be used by the web client to authenticate.
+[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you can use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. After GitHub account authentication, the account information will be added as a cookie to be used by the web client to authenticate.
For more information on the OAuth authentication APIs provided through GitHub, see [Basics of Authentication](https://developer.github.com/v3/guides/basics-of-authentication/).
To complete this tutorial, you must have the following prerequisites:
- An account created on [GitHub](https://github.com/) - [Git](https://git-scm.com/) - [.NET Core SDK](https://dotnet.microsoft.com/download)-- [Azure Cloud Shell](../cloud-shell/quickstart.md) configured for the bash environment.-- Download or clone the [AzureSignalR-sample](https://github.com/aspnet/AzureSignalR-samples) GitHub repository.
+- [Azure Cloud Shell](../cloud-shell/quickstart.md) configured for the bash environment
+- Download or clone the [AzureSignalR-sample](https://github.com/aspnet/AzureSignalR-samples) GitHub repository
## Create an OAuth app
In this section, you implement a `Login` API that authenticates clients using th
### Update the Hub class
-By default when a web client attempts to connect to SignalR Service, the connection is granted based on an access token that is provided internally. This access token isn't associated with an authenticated identity.
+By default, the web client connects to SignalR Service with an access token that's generated internally. This access token isn't associated with an authenticated identity.
Basically, it's anonymous access. In this section, you turn on real authentication by adding the `Authorize` attribute to the hub class, and updating the hub methods to read the username from the authenticated user's claim.
In this section, you turn on real authentication by adding the `Authorize` attri
![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png)
- You're prompted to authorize the chat app's access to your GitHub account. Select the **Authorize** button.
+   You're prompted to authorize the chat app's access to your GitHub account. Select the **Authorize** button.
![Authorize OAuth App](media/signalr-concept-authenticate-oauth/signalr-authorize-oauth-app.png)
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
description: A tutorial to walk through how to use Azure Web PubSub service and
+ Last updated 01/12/2024
azure-web-pubsub Socket Io Howto Integrate Apim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socket-io-howto-integrate-apim.md
keywords: Socket.IO, Socket.IO on Azure, webapp Socket.IO, Socket.IO integration
-+ Last updated 1/11/2024
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
bastion Quickstart Developer Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-developer-sku.md
description: Learn how to deploy Bastion using the Developer SKU.
Previously updated : 01/11/2024 Last updated : 01/31/2024
In this quickstart, you'll learn how to deploy Azure Bastion using the Developer SKU. After Bastion is deployed, you can connect to virtual machines (VM) in the virtual network via Bastion using the private IP address of the VM. The VMs you connect to don't need a public IP address, client software, agent, or a special configuration. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+The following diagram shows the architecture for Azure Bastion and the Developer SKU.
++ > [!IMPORTANT] > During Preview, Bastion Developer SKU is free of charge. Pricing details will be released at GA for a usage-based pricing model.
batch Batch Pool Cloud Service To Virtual Machine Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-cloud-service-to-virtual-machine-configuration.md
Title: Migrate Batch pool configuration from Cloud Services to Virtual Machines description: Learn how to update your pool configuration to the latest and recommended configuration Previously updated : 09/03/2021 Last updated : 01/30/2024 # Migrate Batch pool configuration from Cloud Services to Virtual Machine
Cloud Services Configuration pools don't support some of the current Batch featu
If your Batch solutions currently use 'cloudServiceConfiguration' pools, we recommend changing to 'virtualMachineConfiguration' as soon as possible. This will enable you to benefit from all Batch capabilities, such as an expanded [selection of VM series](batch-pool-vm-sizes.md), Linux VMs, [containers](batch-docker-container-workloads.md), [Azure Resource Manager virtual networks](batch-virtual-network.md), and [node disk encryption](disk-encryption.md).
+> [!IMPORTANT]
+> Azure [Batch account certificates](credential-access-key-vault.md) are deprecated and will be retired after the
+> same February 29, 2024 date as `cloudServiceConfiguration` pools. If you are using Batch account certificates,
+> [migrate your Batch account certificates to Azure Key Vault](batch-certificate-migration-guide.md) at the same
+> time as migrating your pool configuration.
+ ## Create a pool using Virtual Machine Configuration You can't switch an existing active pool that uses 'cloudServiceConfiguration' to use 'virtualMachineConfiguration'. Instead, you'll need to create new pools. Once you've created your new 'virtualMachineConfiguration' pools and replicated all of your jobs and tasks, you can delete the old 'cloudServiceConfiguration'pools that you're no longer using.
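+As an illustrative sketch (the image reference and sizes are placeholder values, not from this article), a 'virtualMachineConfiguration' pool in the Batch REST API looks roughly like this:
+
+```json
+{
+  "id": "migrated-pool",
+  "vmSize": "standard_d2s_v3",
+  "virtualMachineConfiguration": {
+    "imageReference": {
+      "publisher": "canonical",
+      "offer": "0001-com-ubuntu-server-jammy",
+      "sku": "22_04-lts",
+      "version": "latest"
+    },
+    "nodeAgentSKUId": "batch.node.ubuntu 22.04"
+  },
+  "targetDedicatedNodes": 2
+}
+```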
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
The following are known limitations in Chaos Studio.
- Regional endpoints to allowlist are listed in [Permissions and security in Azure Chaos Studio](chaos-studio-permissions-security.md#network-security). - If you're sending telemetry data to Application Insights, the IPs in [IP addresses used by Azure Monitor](../azure-monitor/ip-addresses.md) are also required. -- **Supported VM operating systems** - If you run an experiment that makes use of the Chaos Studio agent, the virtual machine must run one of the following operating systems:-
- - Windows Server 2019, Windows Server 2016, and Windows Server 2012 R2
- - Red Hat Enterprise Linux 8, Red Hat Enterprise Linux 8.2, openSUSE Leap 15.2, CentOS 8, Debian 10 Buster (with unzip installation required), Oracle Linux 8.3, and Ubuntu Server 18.04 LTS
-- **Hardened Linux untested** - The Chaos Studio agent isn't currently tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux).-- **Supported browsers** - The Chaos Studio portal experience has only been tested on the following browsers:
- * **Windows:** Microsoft Edge, Google Chrome, and Firefox
- * **MacOS:** Safari, Google Chrome, and Firefox
+- **Version support** - Review the [Azure Chaos Studio version compatibility](chaos-studio-versions.md) page for more information on operating system, browser, and integration version compatibility.
- **Terraform** - Chaos Studio doesn't support Terraform at this time. - **PowerShell modules** - Chaos Studio doesn't have dedicated PowerShell modules at this time. For PowerShell, use our REST API - **Azure CLI** - Chaos Studio doesn't have dedicated AzCLI modules at this time. Use our REST API from AzCLI
chaos-studio Chaos Studio Quickstart Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-azure-portal.md
Get started with Azure Chaos Studio by using a virtual machine (VM) shutdown ser
## Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- A Linux VM. If you don't have a VM, [follow these steps to create one](../virtual-machines/linux/quick-create-portal.md).
+- A Linux VM running an operating system in the [Azure Chaos Studio version compatibility](chaos-studio-versions.md) list. If you don't have a VM, [follow these steps to create one](../virtual-machines/linux/quick-create-portal.md).
## Register the Chaos Studio resource provider If it's your first time using Chaos Studio, you must first register the Chaos Studio resource provider before you onboard the resources and create an experiment. You must do these steps for each subscription where you use Chaos Studio:
Create an Azure resource and ensure that it's one of the supported [fault provid
1. Search for **Virtual Machine Contributor** and select the role. Select **Next**. ![Screenshot that shows choosing the role for the VM.](images/quickstart-virtual-machine-contributor.png)+
+1. Select the **Managed identity** option.
+
1. Choose **Select members** and search for your experiment name. Select your experiment and choose **Select**. ![Screenshot that shows selecting the experiment.](images/quickstart-select-experiment-role-assignment.png)
chaos-studio Chaos Studio Tutorial Agent Based Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-cli.md
You can use these same steps to set up and run an experiment for any agent-based
## Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]-- A virtual machine. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md).
+- A virtual machine running an operating system in the [version compatibility](chaos-studio-versions.md) list. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md).
- A network setup that permits you to [SSH into your VM](../virtual-machines/ssh-keys-portal.md). - A user-assigned managed identity. If you don't have a user-assigned managed identity, you can [create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
chaos-studio Chaos Studio Tutorial Agent Based Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-portal.md
You can use these same steps to set up and run an experiment for any agent-based
## Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]-- A Linux VM. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md).
+- A Linux VM running an operating system in the [version compatibility](chaos-studio-versions.md) list. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md).
- A network setup that permits you to [SSH into your VM](../virtual-machines/ssh-keys-portal.md). - A user-assigned managed identity *that was assigned to the target VM or virtual machine scale set*. If you don't have a user-assigned managed identity, you can [create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
chaos-studio Chaos Studio Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-versions.md
+
+ Title: Azure Chaos Studio compatibility
+description: Understand the compatibility of Azure Chaos Studio with operating systems and tools.
+++ Last updated : 01/26/2024++++
+# Azure Chaos Studio version compatibility
+
+The following reference shows relevant version support and compatibility for features within Chaos Studio.
+
+## Operating systems supported by the agent
+
+The Chaos Studio agent is tested for compatibility with the following operating systems on Azure virtual machines. This testing involves deploying an Azure virtual machine with the specified SKU, installing the agent as a virtual machine extension, and validating the output of the available agent-based faults.
+
+| Operating system | Chaos agent compatibility | Notes |
+|:---:|:---:|:---:|
+| Windows Server 2019 | ✓ | |
+| Windows Server 2016 | ✓ | |
+| Windows Server 2012 R2 | ✓ | |
+| Red Hat Enterprise Linux 8 | ✓ | Currently tested up to 8.9 |
+| openSUSE Leap 15.2 | ✓ | |
+| CentOS 8 | ✓ | |
+| Debian 10 Buster | ✓ | Installation of `unzip` utility required |
+| Oracle Linux 8.3 | ✓ | |
+| Ubuntu Server 18.04 LTS | ✓ | |
+
+The agent isn't currently tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux).
+
+If an operating system isn't currently listed, you may still attempt to install, use, and troubleshoot the virtual machine extension, agent, and agent-based capabilities, but Chaos Studio cannot guarantee behavior or support for an unlisted operating system.
+
+To request validation and support on more operating systems or versions, use the [Chaos Studio Feedback Community](https://aka.ms/ChaosStudioFeedback).
+
+## Chaos Mesh compatibility
+
+Faults within Azure Kubernetes Service resources currently integrate with the open-source project [Chaos Mesh](https://chaos-mesh.org/), which is part of the [Cloud Native Computing Foundation](https://www.cncf.io/projects/chaosmesh/). Review [Create a chaos experiment that uses a Chaos Mesh fault to kill AKS pods with the Azure portal](chaos-studio-tutorial-aks-portal.md) for more details on using Azure Chaos Studio with Chaos Mesh.
+
+Find Chaos Mesh's support policy and release dates here: [Supported Releases](https://chaos-mesh.org/supported-releases/).
+
+Chaos Studio currently tests with the following version combinations.
+
+| Chaos Studio fault version | Kubernetes version | Chaos Mesh version | Notes |
+|:---:|:---:|:---:|:---:|
+| 2.1 | 1.25.11 | 2.5.1 | |
+
+The *Chaos Studio fault version* column refers to the individual fault version for each AKS Chaos Mesh fault used in the experiment JSON, for example `urn:csci:microsoft:azureKubernetesServiceChaosMesh:podChaos/2.1`. If a past version of the corresponding Chaos Studio fault remains available from the Chaos Studio API (for example, `...podChaos/1.0`), it is within support.
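+As an illustrative sketch (the duration, `jsonSpec` content, and selector name are placeholders, not from this article), the versioned fault urn appears in an experiment action like this:
+
+```json
+{
+  "type": "continuous",
+  "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:podChaos/2.1",
+  "duration": "PT10M",
+  "parameters": [
+    {
+      "key": "jsonSpec",
+      "value": "{\"action\":\"pod-failure\",\"mode\":\"all\"}"
+    }
+  ],
+  "selectorId": "Selector1"
+}
+```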
+
+## Browser compatibility
+
+Review the Azure portal documentation on [Supported devices](../azure-portal/azure-portal-supported-browsers-devices.md) for more information on browser support.
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/get-phone-number.md
-zone_pivot_groups: acs-azcli-azp-azpnew-java-net-python-csharp-js
+zone_pivot_groups: acs-azcli-azp-java-net-python-csharp-js
# Quickstart: Get and manage phone numbers
zone_pivot_groups: acs-azcli-azp-azpnew-java-net-python-csharp-js
[!INCLUDE [Azure portal](./includes/phone-numbers-portal.md)] ::: zone-end - ::: zone pivot="programming-language-csharp" [!INCLUDE [Azure portal](./includes/phone-numbers-net.md)] ::: zone-end
communication-services Migrating To Azure Communication Services Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/migrating-to-azure-communication-services-calling.md
Title: Tutorial - Migrating from Twilio video to ACS
-description: In this tutorial, you learn how to migrate your calling product from Twilio to Azure Communication Services.
+description: Learn how to migrate a calling product from Twilio to Azure Communication Services.
# Migration Guide from Twilio Video to Azure Communication Services
-This article provides guidance on how to migrate your existing Twilio Video implementation to the [Azure Communication Services' Calling SDK](../concepts/voice-video-calling/calling-sdk-features.md) for WebJS. Twilio Video and Azure Communication Services' calling SDK for WebJS are both cloud-based platforms that enable developers to add voice and video calling features to their web applications. However, there are some key differences between them that may affect your choice of platform or require some changes to your existing code if you decide to migrate. In this article, we will compare the main features and functionalities of both platforms and provide some guidance on how to migrate your existing Twilio Video implementation to Azure Communication Services' Calling SDK for WebJS.
+This article describes how to migrate an existing Twilio Video implementation to the [Azure Communication Services' Calling SDK](../concepts/voice-video-calling/calling-sdk-features.md) for WebJS. Both Twilio Video and Azure Communication Services' Calling SDK for WebJS are cloud-based platforms that enable developers to add voice and video calling features to their web applications.
+
+However, there are some key differences between them that may affect your choice of platform or require some changes to your existing code if you decide to migrate. In this article, we compare the main features and functions of both platforms and provide some guidance on how to migrate your existing Twilio Video implementation to Azure Communication Services' Calling SDK for WebJS.
## Key features of the Azure Communication Services calling SDK -- Addressing - Azure Communication Services provides [identities](../concepts/identity-model.md) for authentication and addressing communication endpoints. These identities are used within Calling APIs, providing clients with a clear view of who is connected to a call (the roster).-- Encryption - The Calling SDK safeguards traffic by encrypting it and preventing tampering along the way.-- Device Management and Media - The SDK handles the management of audio and video devices, efficiently encodes content for transmission, and supports both screen and application sharing.-- PSTN - The SDK can initiate voice calls with the traditional Public Switched Telephone Network (PSTN), [using phone numbers acquired either in the Azure portal](../quickstarts/telephony/get-phone-number.md) or programmatically.-- Teams Meetings ΓÇô Azure Communication Services is equipped to [join Teams meetings](../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with Teams voice and its video calls.-- Notifications - Azure Communication Services provides APIs for notifying clients of incoming calls, allowing your application to listen to events (for example, incoming calls) even when your application is not running in the foreground.-- User Facing Diagnostics (UFD) - Azure Communication Services utilizes [events](../concepts/voice-video-calling/user-facing-diagnostics.md) designed to provide insights into underlying issues that could affect call quality, allowing developers to subscribe to triggers such as weak network signals or muted microphones for proactive issue awareness.-- Media Statics - Provides comprehensive insights into VoIP and video call [metrics](../concepts/voice-video-calling/media-quality-sdk.md), including call quality information, empowering developers to enhance communication experiences.-- Video Constraints - Azure Communication Services offers APIs that control [video quality among 
other parameters](../quickstarts/voice-video-calling/get-started-video-constraints.md) during video calls. By adjusting parameters like resolution and frame rate, the SDK supports different call situations for varied levels of video quality.
+- **Addressing** - Azure Communication Services provides [identities](../concepts/identity-model.md) for authentication and addressing communication endpoints. These identities are used within Calling APIs, providing clients with a clear view of who is connected to a call (the roster).
+- **Encryption** - The Calling SDK safeguards traffic by encrypting it and preventing tampering along the way.
+- **Device Management and Media enablement** - The SDK manages audio and video devices, efficiently encodes content for transmission, and supports both screen and application sharing.
+- **PSTN calling** - You can use the SDK to initiate voice calling using the traditional Public Switched Telephone Network (PSTN), [using phone numbers acquired either in the Azure portal](../quickstarts/telephony/get-phone-number.md) or programmatically.
+- **Teams Meetings** - Azure Communication Services is equipped to [join Teams meetings](../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with Teams voice and video calls.
+- **Notifications** - Azure Communication Services provides APIs to notify clients of incoming calls. This allows your application to listen for events (such as incoming calls) even when your application isn't running in the foreground.
+- **User Facing Diagnostics** - Azure Communication Services uses [events](../concepts/voice-video-calling/user-facing-diagnostics.md) designed to provide insights into underlying issues that could affect call quality. You can subscribe your application to triggers such as weak network signals or muted microphones for proactive issue awareness.
+- **Media Quality Statistics** - Provides comprehensive insights into VoIP and video call [metrics](../concepts/voice-video-calling/media-quality-sdk.md). Metrics include call quality information, empowering developers to enhance communication experiences.
+- **Video Constraints** - Azure Communication Services offers APIs that control [video quality among other parameters](../quickstarts/voice-video-calling/get-started-video-constraints.md) during video calls. The SDK supports different call situations for varied levels of video quality, so developers can adjust parameters like resolution and frame rate.
-**For a more detailed understanding of the capabilities of the Calling SDK for different platforms, consult** [**this document**](../concepts/voice-video-calling/calling-sdk-features.md#detailed-capabilities)**.**
+For a more detailed understanding of the Calling SDK capabilities for different platforms, see [Detailed capabilities](../concepts/voice-video-calling/calling-sdk-features.md#detailed-capabilities).
If you're embarking on a new project from the ground up, see the [Quickstarts of the Calling SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web). **Prerequisites:**
-1. **Azure Account:** Confirm that you have an active subscription in your Azure account. New users can create a free Azure account [here](https://azure.microsoft.com/free/).
-2. **Node.js 18:** Ensure Node.js 18 is installed on your system; download can be found right [here](https://nodejs.org/en).
-3. **Communication Services Resource:** Set up a [Communication Services Resource](../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp) via your Azure portal and note down your connection string.
-4. **Azure CLI:** You can get the Azure CLI installer from [here](/cli/azure/install-azure-cli-windows?tabs=azure-cli)..
+1. **Azure Account:** Make sure that your Azure account is active. New users can create a free account at [Microsoft Azure](https://azure.microsoft.com/free/).
+2. **Node.js 18:** Ensure Node.js 18 is installed on your system. Download from [Node.js](https://nodejs.org/en).
+3. **Communication Services Resource:** Set up a [Communication Services Resource](../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp) via your Azure portal and note your connection string.
+4. **Azure CLI:** Follow the instructions at [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows?tabs=azure-cli).
5. **User Access Token:** Generate a user access token to instantiate the call client. You can create one using the Azure CLI as follows: ```console az communication identity token issue --scope voip --connection-string "yourConnectionString" ```
-For more information, see the guide on how to [Use Azure CLI to Create and Manage Access Tokens](../quickstarts/identity/access-tokens.md?pivots=platform-azcli).
+For more information, see [Use Azure CLI to Create and Manage Access Tokens](../quickstarts/identity/access-tokens.md?pivots=platform-azcli).
For Video Calling as a Teams user: -- You also can use Teams identity. For instructions on how to generate an access token for a Teams User, [follow this guide](../quickstarts/manage-teams-identity.md?pivots=programming-language-javascript).-- Obtain the Teams thread ID for call operations using the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). Additional information on how to create a chat thread ID can be found [here](/graph/api/chat-post?preserve-view=true&tabs=javascript&view=graph-rest-1.0#example-2-create-a-group-chat).
+- You can also use Teams identity. To generate an access token for a Teams User, see [Manage teams identity](../quickstarts/manage-teams-identity.md?pivots=programming-language-javascript).
+- Obtain the Teams thread ID for call operations using the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). For information about creating a thread ID, see [Create chat - Microsoft Graph v1.0 > Example2: Create a group chat](/graph/api/chat-post?preserve-view=true&tabs=javascript&view=graph-rest-1.0#example-2-create-a-group-chat).
### UI library
-The UI Library simplifies the process of creating modern communication user interfaces using Azure Communication Services. It offers a collection of ready-to-use UI components that you can easily integrate into your application.
+The UI library simplifies the process of creating modern communication user interfaces using Azure Communication Services. It offers a collection of ready-to-use UI components that you can easily integrate into your application.
-This prebuilt set of controls facilitates the creation of aesthetically pleasing designs using [Fluent UI SDK](https://developer.microsoft.com/en-us/fluentui#/) components and the development of audio/video communication experiences. If you wish to explore more about the UI Library, check out the [overview page](../concepts/ui-library/ui-library-overview.md), where you find comprehensive information about both web and mobile platforms.
+This open-source, prebuilt set of controls enables you to create aesthetically pleasing designs using [Fluent UI SDK](https://developer.microsoft.com/en-us/fluentui#/) components and develop high-quality audio/video communication experiences. For more information, check out the [Azure Communication Services UI Library overview](../concepts/ui-library/ui-library-overview.md). The overview includes comprehensive information about both web and mobile platforms.
### Calling support
-The Azure Communication Services Calling SDK supports the following streaming configurations:
+The Azure Communication Services calling SDK supports the following streaming configurations:
| Limit | Web | Windows/Android/iOS | ||-|--|
The Azure Communication Services Calling SDK supports the following streaming co
## Call Types in Azure Communication Services
-Azure Communication Services offers various call types. The type of call you choose impacts your signaling schema, the flow of media traffic, and your pricing model. Further details can be found [here](../concepts/voice-video-calling/about-call-types.md).
+Azure Communication Services offers various call types. The type of call you choose impacts your signaling schema, the flow of media traffic, and your pricing model. For more information, see [Voice and video concepts](../concepts/voice-video-calling/about-call-types.md).
-- Voice Over IP (VoIP) - This type of call involves one user of your application calling another over an internet or data connection. Both signaling and media traffic are routed over the internet.-- Public Switched Telephone Network (PSTN) - When your users interact with a traditional telephone number, calls are facilitated via PSTN voice calling. In order to make and receive PSTN calls, you need to introduce telephony capabilities to your Azure Communication Services resource. Here, signaling and media employ a mix of IP-based and PSTN-based technologies to connect your users.-- One-to-One Call - When one of your users connects with another through our SDKs. The call can be established via either VoIP or PSTN.-- Group Call - Involved when three or more participants connect. Any combination of VoIP and PSTN-connected users can partake in a group call. A one-to-one call can evolve into a group call by adding more participants to the call, and one of these participants can be a bot.-- Rooms Call - A Room acts as a container that manages activity between end-users of Azure Communication Services. It provides application developers with enhanced control over who can join a call, when they can meet, and how they collaborate. For a more comprehensive understanding of Rooms, please refer to the [conceptual documentation](../concepts/rooms/room-concept.md).
+- **Voice Over IP (VoIP)** - When a user of your application calls another over an internet or data connection. Both signaling and media traffic are routed over the internet.
+- **Public Switched Telephone Network (PSTN)** - When your users call a traditional telephone number, calls are facilitated via PSTN voice calling. To make and receive PSTN calls, you need to introduce telephony capabilities to your Azure Communication Services resource. Here, signaling and media employ a mix of IP-based and PSTN-based technologies to connect your users.
+- **One-to-One Calls** - When one of your users connects with another through our SDKs. The call can be established via either VoIP or PSTN.
+- **Group Calls** - Occur when three or more participants connect in a single call. Any combination of VoIP and PSTN-connected users can be on a group call. A one-to-one call can evolve into a group call by adding more participants to the call, and one of these participants can be a bot.
+- **Rooms Call** - A Room acts as a container that manages activity between end-users of Azure Communication Services. It provides application developers with enhanced control over who can join a call, when they can meet, and how they collaborate. For a more comprehensive understanding of Rooms, see the [Rooms overview](../concepts/rooms/room-concept.md).
## Installation
npm install @azure/communication-common npm install @azure/communication-calling
### Remove the Twilio SDK from the project
-You can remove the Twilio SDK from your project by uninstalling the package
+You can remove the Twilio SDK from your project by uninstalling the package.
```console npm uninstall twilio-video ```
-## Object model
+## Object Model
The following classes and interfaces handle some of the main features of the Azure Communication Services Calling SDK: | **Name** | **Description** | |--|-| | CallClient | The main entry point to the Calling SDK. |
-| AzureCommunicationTokenCredential | Implements the CommunicationTokenCredential interface, which is used to instantiate the CallAgent. |
-| CallAgent | Used to start and manage calls. |
-| Device Manager | Used to manage media devices. |
-| Call | Used for representing a Call. |
-| LocalVideoStream | Used for creating a local video stream for a camera device on the local system. |
-| RemoteParticipant | Used for representing a remote participant in the Call. |
-| RemoteVideoStream | Used for representing a remote video stream from a Remote Participant. |
-| LocalAudioStream | Represents a local audio stream for a local microphone device |
-| AudioOptions | Audio options, which are provided when making an outgoing call or joining a group call |
-| AudioIssue | Represents the end of call survey audio issues, example responses would be NoLocalAudio - the other participants were unable to hear me, or LowVolume - the callΓÇÖs audio volume was low |
-
-When using in a Teams implementation there are a few differences:
+| AzureCommunicationTokenCredential | Implements the `CommunicationTokenCredential` interface, which is used to instantiate the CallAgent. |
+| CallAgent | Start and manage calls. |
+| Device Manager | Manage media devices. |
+| Call | Represents a Call. |
+| LocalVideoStream | Create a local video stream for a camera device on the local system. |
+| RemoteParticipant | Represents a remote participant in the Call. |
+| RemoteVideoStream | Represents a remote video stream from a Remote Participant. |
+| LocalAudioStream | Represents a local audio stream for a local microphone device. |
+| AudioOptions | Audio options, provided to a participant when making an outgoing call or joining a group call. |
+| AudioIssue | Represents the end of call survey audio issues. Example responses might be `NoLocalAudio` - the other participants were unable to hear me, or `LowVolume` - the call audio volume was too low. |
+
+When using Azure Communication Services calling in a Teams call, there are a few differences:
- Instead of `CallAgent` - use `TeamsCallAgent` for starting and managing Teams calls. - Instead of `Call` - use `TeamsCall` for representing a Teams Call.
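As an illustration (plain JavaScript; the placeholder values are hypothetical and real IDs come from the identity SDK or Graph), the identifier shapes used to target Azure Communication Services, Teams, and PSTN participants look like this:

```javascript
// Hypothetical placeholder values -- substitute real identifiers at runtime.
const acsUser   = { communicationUserId: '<ACS_USER_ID>' };     // Azure Communication Services user
const teamsUser = { microsoftTeamsUserId: '<TEAMS_USER_ID>' };  // Teams user (use with teamsCallAgent)
const pstnUser  = { phoneNumber: '+14255550123' };              // PSTN participant

console.log(Object.keys(acsUser)[0]);   // 'communicationUserId'
console.log(Object.keys(teamsUser)[0]); // 'microsoftTeamsUserId'
```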
Using the `CallClient`, initialize a `CallAgent` instance. The `createCallAgent`
#### Twilio
-Twilio doesn't have a Device Manager analog, tracks are being created using the systemΓÇÖs default device. For customization, you should obtain the desired source track via:
+Twilio doesn't have a Device Manager analog. Tracks are created using the system's default device. To customize a device, obtain the desired source track via:
```javascript navigator.mediaDevices.getUserMedia() ```
callClient = new CallClient();
const callAgent = await callClient.createCallAgent(tokenCredential, {displayName: 'optional user name'}); ```
-You can use the getDeviceManager method on the CallClient instance to access deviceManager.
+You can use the `getDeviceManager` method on the `CallClient` instance to access `deviceManager`.
```javascript
const deviceManager = await callClient.getDeviceManager();
// Get a list of available video devices for use. const localCameras = await deviceManager.getCameras();
twilioRoom = await twilioVideo.connect('token', { name: 'roomName', audio: false
### Azure Communication Services
-To create and start a call, use one of the APIs on `callAgent` and provide a user that you created through the Communication Services identity SDK.
+To create and start a call, use one of the `callAgent` APIs and provide a user that you created through the Communication Services identity SDK.
-Call creation and start are synchronous. The `call` instance allows you to subscribe to call events - subscribe to `stateChanged` event for value changes.
+Call creation and start are synchronous. The `call` instance enables you to subscribe to call events. Subscribe to the `stateChanged` event for value changes.
```javascript call.on('stateChanged', async () =\> { console.log(\`Call state changed: \${call.state}\`) });
-``````
+```
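The subscribe/unsubscribe pattern used by the `call` object can be sketched in plain JavaScript without the SDK (the `makeEmitter` helper below is a hypothetical stand-in for the SDK's event surface, shown only to illustrate the `on`/`off` contract):

```javascript
// Minimal event-emitter sketch mirroring the SDK's on/off subscription pattern.
function makeEmitter() {
  const handlers = {};
  return {
    on(event, fn) { (handlers[event] = handlers[event] || []).push(fn); },
    off(event, fn) { handlers[event] = (handlers[event] || []).filter(h => h !== fn); },
    emit(event) { (handlers[event] || []).forEach(h => h()); },
  };
}

const call = makeEmitter();
let lastState = null;
call.on('stateChanged', () => { lastState = call.state; });

// Simulate the SDK raising the event after a state change.
call.state = 'Connected';
call.emit('stateChanged');
console.log(lastState); // 'Connected'
```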
### Azure Communication Services 1:1 Call
-To call another Communication Services user, use the `startCall` method on `callAgent` and pass the recipient's CommunicationUserIdentifier that you [created with the Communication Services administration library](../quickstarts/identity/access-tokens.md).
+To call another Azure Communication Services user, use the `startCall` method on `callAgent` and pass the recipient's `CommunicationUserIdentifier` that you [created with the Communication Services administration library](../quickstarts/identity/access-tokens.md).
```javascript const userCallee = { communicationUserId: '\<Azure_Communication_Services_USER_ID\>' }; const oneToOneCall = callAgent.startCall([userCallee]);
const oneToOneCall = callAgent.startCall([userCallee]);
### Azure Communication Services Room Call
-To join a `room` call, you can instantiate a context object with the `roomId` property as the room identifier. To join the call, use the join method and pass the context instance.
+To join a `Room` call, you can instantiate a context object with the `roomId` property as the room identifier. To join the call, use the `join` method and pass the context instance.
```javascript const context = { roomId: '\<RoomId\>' }; const call = callAgent.join(context); ```
-A **room** offers application developers better control over who can join a call, when they meet and how they collaborate. To learn more about **rooms**, you can read the [conceptual documentation](../concepts/rooms/room-concept.md) or follow the [quick start guide](../quickstarts/rooms/join-rooms-call.md).
+A **Room** offers application developers better control over who can join a call, when they meet and how they collaborate. To learn more about **Rooms**, see the [Rooms overview](../concepts/rooms/room-concept.md), or see [Quickstart: Join a room call](../quickstarts/rooms/join-rooms-call.md).
-### Azure Communication Services group Call
+### Azure Communication Services Group Call
-To start a new group call or join an ongoing group call, use the `join` method and pass an object with a groupId property. The `groupId` value has to be a GUID.
+To start a new group call or join an ongoing group call, use the `join` method and pass an object with a `groupId` property. The `groupId` value must be a GUID.
```javascript const context = { groupId: '\<GUID\>'}; const call = callAgent.join(context);
const call = callAgent.join(context);
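Because the `groupId` must be a GUID, it can be worth validating it before calling `join`. A minimal sketch (plain JavaScript helper; this is an assumption for illustration, not part of the SDK):

```javascript
// Validate that a groupId is a GUID before passing it to callAgent.join().
const isGuid = (s) =>
  /^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$/.test(s);

console.log(isGuid('29228d3e-040e-4656-a70e-890ab4e173e5')); // true
console.log(isGuid('not-a-guid'));                           // false
```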
### Azure Communication Services Teams call
-Start a synchronous one-to-one or group call with `startCall` API on `teamsCallAgent`. You can provide `MicrosoftTeamsUserIdentifier` or `PhoneNumberIdentifier` as a parameter to define the target of the call. The method returns the `TeamsCall` instance that allows you to subscribe to call events.
+Start a synchronous one-to-one or group call using the `startCall` API on `teamsCallAgent`. You can provide `MicrosoftTeamsUserIdentifier` or `PhoneNumberIdentifier` as a parameter to define the target of the call. The method returns the `TeamsCall` instance that allows you to subscribe to call events.
```javascript const userCallee = { microsoftTeamsUserId: '\<MICROSOFT_TEAMS_USER_ID\>' }; const oneToOneCall = teamsCallAgent.startCall(userCallee);
const oneToOneCall = teamsCallAgent.startCall(userCallee);
### Twilio
-The Twilio Video SDK the Participant is being created after joining the room, and it doesn't have any information about other rooms.
+When using the Twilio Video SDK, the Participant is created after joining the room and has no information about other rooms.
### Azure Communication Services
callAgent.on('incomingCall', async (call) =\>{
The `incomingCall` event includes an `incomingCall` instance that you can accept or reject.
-When starting/joining/accepting a call with video on, if the specified video camera device is being used by another process or if it's disabled in the system, the call starts with video off, and a `cameraStartFailed:` true call diagnostic will be raised.
+When starting, joining, or accepting a call with *video on*, if the specified video camera device is being used by another process or if it's disabled in the system, the call starts with *video off*, and returns a `cameraStartFailed: true` call diagnostic.
```javascript const incomingCallHandler = async (args: { incomingCall: IncomingCall }) => {
const incomingCallHandler = async (args: { incomingCall: IncomingCall }) => {
// Get incoming call ID var incomingCallId = incomingCall.id
- // Get information about this Call. This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment. To use this api please use 'beta' release of Azure Communication Services Calling Web SDK
+ // Get information about this Call.
var callInfo = incomingCall.info; // Get information about caller
callAgentInstance.on('incomingCall', incomingCallHandler);
```
-After starting a call, joining a call, or accepting a call, you can also use the callAgents' `callsUpdated` event to be notified of the new Call object and start subscribing to it.
+After starting a call, joining a call, or accepting a call, you can also use the `callAgent` instance's `callsUpdated` event to be notified of the new `Call` object and start subscribing to it.
```javascript callAgent.on('callsUpdated', (event) => { event.added.forEach((call) => {
callAgent.on('callsUpdated', (event) => {
}); ```
-For Azure Communication Services Teams implementation, check how to [Receive a Teams Incoming Call](../how-tos/cte-calling-sdk/manage-calls.md#receive-a-teams-incoming-call).
+For Azure Communication Services Teams implementation, see how to [Receive a Teams Incoming Call](../how-tos/cte-calling-sdk/manage-calls.md#receive-a-teams-incoming-call).
## Adding participants to call
call.remoteParticipants; // [remoteParticipant, remoteParticipant....]
**Add participant:**
-To add a participant to a call, you can use `addParticipant`. Provide one of the Identifier types. It synchronously returns the remoteParticipant instance.
+To add a participant to a call, you can use `addParticipant`. Provide one of the Identifier types. It synchronously returns the `remoteParticipant` instance.
The `remoteParticipantsUpdated` event from Call is raised when a participant is successfully added to the call. ```javascript
const remoteParticipant = call.addParticipant(userIdentifier);
**Remove participant:**
-To remove a participant from a call, you can invoke `removeParticipant`. You have to pass one of the Identifier types. This method resolves asynchronously after the participant is removed from the call. The participant is also removed from the `remoteParticipants` collection.
+To remove a participant from a call, use `removeParticipant`. You need to pass one of the Identifier types. This method resolves asynchronously after the participant is removed from the call. The participant is also removed from the `remoteParticipants` collection.
```javascript const userIdentifier = { communicationUserId: '<Azure_Communication_Services_USER_ID>' }; await call.removeParticipant(userIdentifier);
const videoTrack = await twilioVideo.createLocalVideoTrack({ constraints });
const videoTrackPublication = await localParticipant.publishTrack(videoTrack, { options }); ```
-Camera is enabled by default, however it can be disabled and enabled back if necessary:
+The camera is enabled by default. It can be disabled and re-enabled if necessary:
```javascript videoTrack.disable(); ```
-Or
+Or:
```javascript videoTrack.enable(); ```
-Later created video track should be attached locally:
+If you create a video track later, attach it locally:
```javascript const videoElement = videoTrack.attach(); const localVideoContainer = document.getElementById( localVideoContainerId ); localVideoContainer.appendChild(videoElement); ```
-Twilio Tracks rely on default input devices and reflect the changes in defaults. However, to change an input device, the previous Video Track should be unpublished:
+Twilio Tracks rely on default input devices and reflect the changes in defaults. To change an input device, you need to unpublish the previous Video Track:
```javascript localParticipant.unpublishTrack(videoTrack); ```
-And a new Video Track with the correct constraints should be created.
+Then create a new Video Track with the correct constraints.
#### Azure Communication Services
-To start a video while on a call, you have to enumerate cameras using the getCameras method on the `deviceManager` object. Then create a new instance of `LocalVideoStream` with the desired camera and then pass the `LocalVideoStream` object into the `startVideo` method of an existing call object:
+To start a video while on a call, you need to enumerate cameras using the `getCameras` method on the `deviceManager` object. Then create a new instance of `LocalVideoStream` with the desired camera and pass the `LocalVideoStream` object into the `startVideo` method of an existing call object:
```javascript const deviceManager = await callClient.getDeviceManager();
const localVideoStream = new LocalVideoStream(camera);
await call.startVideo(localVideoStream); ```
-After you successfully start sending video, a LocalVideoStream instance of type Video is added to the localVideoStreams collection on a call instance.
+After you successfully start sending video, a `LocalVideoStream` instance of type Video is added to the `localVideoStreams` collection on a call instance.
```javascript const localVideoStream = call.localVideoStreams.find( (stream) =\> { return stream.mediaStreamType === 'Video'} ); ```
-To stop local video while on a call, pass the localVideoStream instance that's being used for video:
+To stop local video while on a call, pass the `localVideoStream` instance that's being used for video:
```javascript await call.stopVideo(localVideoStream); ```
-You can switch to a different camera device while a video is sending by invoking switchSource on a localVideoStream instance:
+You can switch to a different camera device while a video is sending by calling `switchSource` on a `localVideoStream` instance:
```javascript const cameras = await callClient.getDeviceManager().getCameras();
localVideoStream.switchSource(camera);
If the specified video device is being used by another process, or if it's disabled in the system: -- While in a call, if your video is off and you start video using call.startVideo(), this method throws a `SourceUnavailableError` and `cameraStartFailed` will be set to true.-- A call to the `localVideoStream.switchSource()` method causes `cameraStartFailed` to be set to true. Our [Call Diagnostics guide](../concepts/voice-video-calling/call-diagnostics.md) provides additional information on how to diagnose call related issues.
+- While in a call, if your video is off and you start video using `call.startVideo()`, this method throws a `SourceUnavailableError` and `cameraStartFailed` is set to true.
+- A call to the `localVideoStream.switchSource()` method causes `cameraStartFailed` to be set to true. See the [Call Diagnostics guide](../concepts/voice-video-calling/call-diagnostics.md) for more information about how to diagnose call-related issues.
-To verify if the local video is on or off you can use `isLocalVideoStarted` API, which returns true or false:
+To verify whether the local video is *on* or *off*, use the `isLocalVideoStarted` API, which returns true or false:
```javascript call.isLocalVideoStarted; ```
-To listen for changes to the local video, you can subscribe and unsubscribe to the `isLocalVideoStartedChanged` event
+To listen for changes to the local video, you can subscribe and unsubscribe to the `isLocalVideoStartedChanged` event:
```javascript // Subscribe to local video event
call.off('isLocalVideoStartedChanged', () => {
```
-### Rendering a remote user video
+### Rendering a remote user's video
#### Twilio
-As soon as a Remote Participant publishes a Video Track, it needs to be attached. `trackSubscribed` event on Room or Remote Participant allows you to detect when the track can be attached:
+As soon as a Remote Participant publishes a Video Track, it needs to be attached. The `trackSubscribed` event on Room or Remote Participant enables you to detect when the track can be attached:
```javascript twilioRoom.on('participantConnected', (participant) => {
const remoteVideoStream: RemoteVideoStream = call.remoteParticipants[0].videoStr
const streamType: MediaStreamType = remoteVideoStream.mediaStreamType; ```
-To render `RemoteVideoStream`, you have to subscribe to its `isAvailableChanged` event. If the `isAvailable` property changes to true, a remote participant is sending a stream. After that happens, create a new instance of `VideoStreamRenderer`, and then create a new `VideoStreamRendererView` instance by using the asynchronous createView method. You can then attach `view.target` to any UI element.
+To render `RemoteVideoStream`, you need to subscribe to its `isAvailableChanged` event. If the `isAvailable` property changes to true, a remote participant is sending a stream. After that happens, create a new instance of `VideoStreamRenderer`, and then create a new `VideoStreamRendererView` instance by using the asynchronous `createView` method. You can then attach `view.target` to any UI element.
-Whenever availability of a remote stream changes, you can destroy the whole `VideoStreamRenderer` or a specific `VideoStreamRendererView`. If you do decide to keep them it will result in displaying a blank video frame.
+Whenever the availability of a remote stream changes, you can destroy the whole `VideoStreamRenderer` or a specific `VideoStreamRendererView`. If you decide to keep them, a blank video frame is displayed.
```javascript // Reference to the html's div where we would display a grid of all remote video streams from all participants.
subscribeToRemoteVideoStream = async (remoteVideoStream) => {
console.log(`Remote video stream size changed: new height: ${remoteVideoStream.size.height}, new width: ${remoteVideoStream.size.width}`); }); }- ```
-Subscribe to the remote participant's videoStreamsUpdated event to be notified when the remote participant adds new video streams and removes video streams.
+Subscribe to the remote participant's `videoStreamsUpdated` event to be notified when the remote participant adds new video streams and removes video streams.
```javascript remoteParticipant.on('videoStreamsUpdated', e => {
remoteParticipant.on('videoStreamsUpdated', e => {
// Unsubscribe from remote participant's video streams }); });- ``` ### Virtual background #### Twilio
-To use Virtual Background, Twilio helper library should be installed:
+To use Virtual Background, install the Twilio helper library:
```console npm install @twilio/video-processors ```
-New Processor instance should be created and loaded:
+Create and load a new `Processor` instance:
```javascript import { GaussianBlurBackgroundProcessor } from '@twilio/video-processors';
const blurProcessor = new GaussianBlurBackgroundProcessor({ assetsPath: virtualB
await blurProcessor.loadModel(); ```
-As soon as the model is loaded the background can be added to the video track via addProcessor method:
-```javascript
-videoTrack.addProcessor(processor, { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' });
-```
+As soon as the model is loaded, you can add the background to the video track using the `addProcessor` method:
+
+```javascript
+videoTrack.addProcessor(processor, { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' });
+```
#### Azure Communication Services
if (backgroundBlurSupported) {
} ```
-For background replacement with an image you need to provide the URL of the image you want as the background to this effect. Currently supported image formats are: png, jpg, jpeg, tiff, bmp, and current supported aspect ratio is 16:9
+For background replacement with an image you need to provide the URL of the image you want as the background to this effect. Supported image formats are: PNG, JPG, JPEG, TIFF, and BMP. The supported aspect ratio is 16:9.
```javascript const backgroundImage = 'https://linkToImageFile';
if (backgroundReplacementSupported) {
} ```
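As a hypothetical pre-check (not part of the SDK; `isSixteenByNine` is an illustrative helper name), you could validate an image's dimensions against the required 16:9 aspect ratio before configuring the background replacement effect:

```javascript
// Hypothetical helper, not part of the Azure Communication Services SDK:
// checks whether width x height matches the 16:9 aspect ratio the
// background replacement effect expects.
function isSixteenByNine(width, height) {
  // 16:9 holds exactly when width * 9 equals height * 16.
  return width * 9 === height * 16;
}

console.log(isSixteenByNine(1920, 1080)); // true
console.log(isSixteenByNine(1280, 1024)); // false
```

Integer cross-multiplication avoids the floating-point comparison issues you'd get from dividing the dimensions.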
-Changing the image for this effect can be done by passing it via the configured method:
+Change the image for this effect by passing it via the configured method:
```javascript const newBackgroundImage = 'https://linkToNewImageFile';
await backgroundReplacementEffect.configure({
}); ```
-Switching effects can be done using the same method on the video effects feature API:
+To switch effects, use the same method on the video effects feature API:
```javascript // Switch to background blur
await videoEffectsFeatureApi.startEffects(backgroundBlurEffect);
await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect); ```
-At any time if you want to check what effects are active, you can use the `activeEffects` property. The `activeEffects` property returns an array with the names of the currently active effects and returns an empty array if there are no affects active.
+At any time, if you want to check which effects are active, use the `activeEffects` property. The `activeEffects` property returns an array with the names of the currently active effects and returns an empty array if there are no effects active.
```javascript
-// Using the video effects feature API
+// Using the video effects feature api
const currentActiveEffects = videoEffectsFeatureApi.activeEffects; ```
const audioTrack = await twilioVideo.createLocalAudioTrack({ constraints });
const audioTrackPublication = await localParticipant.publishTrack(audioTrack, { options }); ```
-Microphone is enabled by default, however it can be disabled and enabled back if necessary:
+The microphone is enabled by default. You can disable and re-enable it as needed:
```javascript audioTrack.disable(); ```
Or
audioTrack.enable(); ```
-Created Audio Track should be attached by Local Participant the same way as Video Track:
+The Local Participant should attach any created Audio Track the same way as a Video Track:
```javascript const audioElement = audioTrack.attach();
twilioRoom.on('participantConnected', (participant) => {
}); ```
-Or
+Or:
```javascript twilioRoom.on('trackSubscribed', (track, publication, participant) => {
twilioRoom.on('trackSubscribed', (track, publication, participant) => {
```
-It is impossible to mute incoming audio in Twilio Video SDK.
+It isn't possible to mute incoming audio in the Twilio Video SDK.
#### Azure Communication Services
await call.unmuteIncomingAudio();
```
-### Detecting Dominant speaker
+### Detecting dominant speaker
#### Twilio
-To detect the loudest Participant in the Room, Dominant Speaker API can be used. It can be enabled in the connection options when joining the Group Room with at least 2 participants:
+To detect the loudest Participant in the Room, use the Dominant Speaker API. You can enable it in the connection options when joining the Group Room with at least two participants:
```javascript twilioRoom = await twilioVideo.connect('token', { name: 'roomName',
dominantSpeaker: true
}); ```
-When the loudest speaker in the Room will change, the dominantSpeakerChanged event is emitted:
+When the loudest speaker in the Room changes, the `dominantSpeakerChanged` event is emitted:
```javascript twilioRoom.on('dominantSpeakerChanged', (participant) => {
twilioRoom.on('dominantSpeakerChanged', (participant) => {
#### Azure Communication Services
-Dominant speakers for a call are an extended feature of the core Call API and allows you to obtain a list of the active speakers in the call. This is a ranked list, where the first element in the list represents the last active speaker on the call and so on.
+Dominant speakers for a call are an extended feature of the core Call API. It enables you to obtain a list of the active speakers in the call. This is a ranked list, where the first element in the list represents the last active speaker on the call and so on.
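The ranked-list behavior described above can be sketched in plain JavaScript. This is illustrative only, not the Communication Services SDK; `createSpeakerRanking` and `speakerActive` are hypothetical names:

```javascript
// Illustrative sketch (not the ACS SDK): maintain a ranked list like
// dominantSpeakers, where index 0 is the most recent active speaker.
function createSpeakerRanking(maxSize = 4) {
  const ranked = [];
  return {
    // Call when a participant becomes the active speaker.
    speakerActive(participantId) {
      const i = ranked.indexOf(participantId);
      if (i !== -1) ranked.splice(i, 1); // remove the old position
      ranked.unshift(participantId);     // most recent speaker goes first
      if (ranked.length > maxSize) ranked.pop();
    },
    get dominantSpeakers() {
      return [...ranked];
    },
  };
}

const ranking = createSpeakerRanking();
ranking.speakerActive('alice');
ranking.speakerActive('bob');
ranking.speakerActive('alice');
console.log(ranking.dominantSpeakers); // [ 'alice', 'bob' ]
```

In this sketch the most recent active speaker moves to the front of the list, matching the ranking behavior described above.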
In order to obtain the dominant speakers in a call, you first need to obtain the call dominant speakers feature API object: ```javascript
Next you can obtain the list of the dominant speakers by calling `dominantSpeake
let dominantSpeakers: DominantSpeakersInfo = callDominantSpeakersApi.dominantSpeakers; ```
-Also, you can subscribe to the `dominantSpeakersChanged` event to know when the dominant speakers list has changed.
+You can also subscribe to the `dominantSpeakersChanged` event to know when the dominant speakers list changes.
+ ```javascript const dominantSpeakersChangedHandler = () => {
callDominantSpeakersApi.on('dominantSpeakersChanged', dominantSpeakersChangedHan
## Enabling screen sharing ### Twilio
-To share the screen in Twilio Video, source track should be obtained via navigator.mediaDevices
+To share the screen in Twilio Video, obtain the source track via `navigator.mediaDevices`:
Chromium-based browsers: ```javascript
const stream = await navigator.mediaDevices.getUserMedia({ mediaSource: 'screen'
const track = stream.getTracks()[0]; ```
-Obtain the screen share track can then be published and managed the same way as casual Video Track (see the “Video” section).
+After you obtain the screen share track, you can publish and manage it the same way as a regular Video Track (see the “Video” section).
### Azure Communication Services
-To start screen sharing while on a call, you can use asynchronous API `startScreenSharing`:
+To start screen sharing while on a call, you can use the asynchronous API `startScreenSharing`:
```javascript await call.startScreenSharing(); ```
-After successfully starting to sending screen sharing, a `LocalVideoStream` instance of type `ScreenSharing` is created and is added to the `localVideoStreams` collection on the call instance.
+After you successfully start sending screen sharing, a `LocalVideoStream` instance of type `ScreenSharing` is created and added to the `localVideoStreams` collection on the call instance.
```javascript const localVideoStream = call.localVideoStreams.find( (stream) => { return stream.mediaStreamType === 'ScreenSharing'} ); ```
-To stop screen sharing while on a call, you can use asynchronous API `stopScreenSharing`:
+To stop screen sharing while on a call, you can use the asynchronous API `stopScreenSharing`:
```javascript await call.stopScreenSharing(); ```
-To verify if screen sharing is on or off, you can use `isScreenSharingOn` API, which returns true or false:
+To verify whether screen sharing is on or off, you can use the `isScreenSharingOn` API, which returns true or false:
```javascript call.isScreenSharingOn; ```
-To listen for changes to the screen share, you can subscribe and unsubscribe to the `isScreenSharingOnChanged` event
+To listen for changes to the screen share, subscribe and unsubscribe to the `isScreenSharingOnChanged` event:
```javascript // Subscribe to screen share event
call.off('isScreenSharingOnChanged', () => {
### Twilio
-To collect real-time media stats, the getStats method can be used.
+To collect real-time media stats, use the `getStats` method.
```javascript const stats = twilioRoom.getStats(); ``` ### Azure Communication Services
-Media quality statistics is an extended feature of the core Call API. You first need to obtain the mediaStatsFeature API object:
+Media quality statistics is an extended feature of the core Call API. You first need to obtain the `mediaStatsFeature` API object:
```javascript const mediaStatsFeature = call.feature(Features.MediaStats);
const mediaStatsFeature = call.feature(Features.MediaStats);
To receive the media statistics data, you can subscribe to the `sampleReported` event or the `summaryReported` event: -- `sampleReported` event triggers every second. It's suitable as a data source for UI display or your own data pipeline.-- `summmaryReported` event contains the aggregated values of the data over intervals, which is useful when you just need a summary.
+- `sampleReported` event triggers every second. Suitable as a data source for UI display or your own data pipeline.
+- `summaryReported` event contains the aggregated values of the data over intervals. Useful when you just need a summary.
-If you want control over the interval of the summmaryReported event, you need to define `mediaStatsCollectorOptions` of type `MediaStatsCollectorOptions`. Otherwise, the SDK uses default values.
+If you want control over the interval of the `summaryReported` event, you need to define `mediaStatsCollectorOptions` of type `MediaStatsCollectorOptions`. Otherwise, the SDK uses default values.
```javascript const mediaStatsCollectorOptions: SDK.MediaStatsCollectorOptions = { aggregationInterval: 10,
mediaStatsCollector.on('summaryReported', (summary) => {
}); ```
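To make the per-second sample versus interval summary distinction concrete, here's a plain JavaScript sketch (illustrative only, not the SDK; `createSummaryAggregator` is a hypothetical helper) that rolls per-second samples up into interval summaries:

```javascript
// Illustrative sketch (not the ACS SDK): aggregate per-second samples
// into a summary every `aggregationInterval` samples, similar in spirit
// to how summaryReported relates to sampleReported.
function createSummaryAggregator(aggregationInterval, onSummary) {
  let samples = [];
  return {
    // Called once per "second" with a numeric sample, like sampleReported.
    report(sample) {
      samples.push(sample);
      if (samples.length >= aggregationInterval) {
        const sum = samples.reduce((a, b) => a + b, 0);
        onSummary({ count: samples.length, average: sum / samples.length });
        samples = []; // start the next interval
      }
    },
  };
}

const summaries = [];
const aggregator = createSummaryAggregator(3, (s) => summaries.push(s));
[10, 20, 30].forEach((bitrate) => aggregator.report(bitrate));
console.log(summaries); // [ { count: 3, average: 20 } ]
```

The real SDK emits richer statistics objects; the point here is only that the summary event fires once per interval with aggregated values, while the sample event fires every second.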
-In case you don't need to use the media statistics collector, you can call dispose method of `mediaStatsCollector`.
+If you don't need to use the media statistics collector, you can call the `dispose` method of `mediaStatsCollector`.
```javascript mediaStatsCollector.dispose(); ```
-It's not necessary to call the dispose method of `mediaStatsCollector` every time the call ends, as the collectors are reclaimed internally when the call ends.
+You don't need to call the `dispose` method of `mediaStatsCollector` every time a call ends. The collectors are reclaimed internally when the call ends.
-You can learn more about media quality statistics [here](../concepts/voice-video-calling/media-quality-sdk.md?pivots=platform-web).
+For more information, see [Media quality statistics](../concepts/voice-video-calling/media-quality-sdk.md?pivots=platform-web).
## Diagnostics ### Twilio
-To test connectivity, Twilio offers Preflight API - a test call is performed to identify signaling and media connectivity issues.
+To test connectivity, Twilio offers the Preflight API, which performs a test call to identify signaling and media connectivity issues.
-To launch the test, an access token is required:
+An access token is required to launch the test:
```javascript const preflightTest = twilioVideo.runPreflight(token);
preflightTest.on('failed', (error, report) => {
preflightTest.on('completed', (report) => { console.log(`Preflight test report: ${report}`); });- ```
-Another way to identify network issues during the call is Network Quality API, which monitors Participant's network and provides quality metrics. It can be enabled in the connection options when joining the Group Room:
+Another way to identify network issues during the call is by using the Network Quality API, which monitors a Participant's network and provides quality metrics. You can enable it in the connection options when a participant joins the Group Room:
```javascript twilioRoom = await twilioVideo.connect('token', {
twilioRoom = await twilioVideo.connect('token', {
}); ```
-When the network quality for Participant changes, a `networkQualityLevelChanged` event will be emitted:
+When the network quality for a Participant changes, a `networkQualityLevelChanged` event is emitted:
```javascript participant.on(networkQualityLevelChanged, (networkQualityLevel, networkQualityStats) => { // Processing Network Quality stats
participant.on(networkQualityLevelChanged, (networkQualityLevel, networkQualityS
``` ### Azure Communication Services
-Azure Communication Services provides a feature called `"User Facing Diagnostics" (UFD)` that can be used to examine various properties of a call to determine what the issue might be. User Facing Diagnostics are events that are fired off that could indicate due to some underlying issue (poor network, the user has their microphone muted) that a user might have a poor experience.
+Azure Communication Services provides a feature called User Facing Diagnostics (UFD) that you can use to examine various properties of a call and identify the issue. User Facing Diagnostics events indicate an underlying issue (such as a poor network or a muted microphone) that could give a user a poor call experience.
-User-facing diagnostics is an extended feature of the core Call API and allows you to diagnose an active call.
+User-facing diagnostics is an extended feature of the core Call API and enables you to diagnose an active call.
```javascript const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics); ```
-Subscribe to the diagnosticChanged event to monitor when any user-facing diagnostic changes:
+Subscribe to the `diagnosticChanged` event to monitor when any user-facing diagnostic changes:
```javascript /** * Each diagnostic has the following data:
const diagnosticChangedListener = (diagnosticInfo: NetworkDiagnosticChangedEvent
userFacingDiagnostics.network.on('diagnosticChanged', diagnosticChangedListener); userFacingDiagnostics.media.on('diagnosticChanged', diagnosticChangedListener);- ```
-You can learn more about User Facing Diagnostics and the different diagnostic values available in [this article](../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
+To learn more about User Facing Diagnostics and the different diagnostic values available, see [User Facing Diagnostics](../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
-ACS also provides a pre-call diagnostics API. To Access the Pre-Call API, you need to initialize a `callClient`, and provision an Azure Communication Services access token. There you can access the `PreCallDiagnostics` feature and the `startTest` method.
+Azure Communication Services also provides a precall diagnostics API. To access the Pre-Call API, you need to initialize a `callClient` and provision an Azure Communication Services access token. Then you can access the `PreCallDiagnostics` feature and the `startTest` method.
```javascript import { CallClient, Features} from "@azure/communication-calling";
const tokenCredential = new AzureCommunicationTokenCredential("INSERT ACCESS TOK
const preCallDiagnosticsResult = await callClient.feature(Features.PreCallDiagnostics).startTest(tokenCredential); ```
-The Pre-Call API returns a full diagnostic of the device including details like device permissions, availability and compatibility, call quality stats and in-call diagnostics. The results are returned as a PreCallDiagnosticsResult object.
+The Pre-Call API returns a full diagnostic of the device including details like device permissions, availability and compatibility, call quality stats and in-call diagnostics. The results are returned as a `PreCallDiagnosticsResult` object.
```javascript export declare type PreCallDiagnosticsResult = {
export declare type PreCallDiagnosticsResult = {
}; ```
-You can learn more about ensuring precall readiness [here](../concepts/voice-video-calling/pre-call-diagnostics.md).
-
+You can learn more about ensuring precall readiness in [Pre-Call diagnostics](../concepts/voice-video-calling/pre-call-diagnostics.md).
## Event listeners
-### Twilio
+### Twilio
```javascript twilioRoom.on('participantConnected', (participant) => {
container-apps Azure Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-overview.md
Previously updated : 12/05/2023 Last updated : 01/30/2024
ARM64 based clusters aren't supported at this time.
- Fix to image pull secret retrieval issues - Update placement of Envoy to distribute across available nodes where possible - When container apps fail to provision as a result of revision conflicts, set the provisioning state to failed
-
+
+### Container Apps extension v1.30.6 (January 2024)
+
+ - Update KEDA to v2.12
+ - Update Envoy SC image to v1.0.4
+ - Update Dapr image to v1.11.6
+ - Added default response timeout for Envoy routes to 1800 seconds
+ - Changed Fluent bit default log level to warn
+ - Delay deletion of job pods to ensure log emission
+ - Fixed issue for job pod deletion for failed job executions
+ - Ensure jobs in suspended state also have failed pods deleted
+ - Update to not resolve HTTPOptions for TCP applications
+ - Allow applications to listen on HTTP or HTTPS
+ - Add ability to suspend jobs
+ - Fixed issue where KEDA scaler was failing to create job after stopped job execution
+ - Add startingDeadlineSeconds to Container App Job in case of cluster reboot
+ - Removed heavy logging in Envoy access log server
+ - Updated Monitoring Configuration version for Azure Container Apps on Azure Arc enabled Kubernetes
+
## Next steps [Create a Container Apps connected environment (Preview)](azure-arc-enable-cluster.md)
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
You can get started building your first container app [using the quickstarts](ge
[Azure Functions](../azure-functions/functions-overview.md) is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of your functions on events and bind to other data sources. When building FaaS-style functions, Azure Functions is the ideal option. The Azure Functions programming model is available as a base container image, making it portable to other container based compute platforms allowing teams to reuse code as environment requirements change. ### Azure Spring Apps
-[Azure Spring Apps](../spring-apps/overview.md) is a fully managed service for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+[Azure Spring Apps](../spring-apps/enterprise/overview.md) is a fully managed service for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
### Azure Red Hat OpenShift [Azure Red Hat OpenShift](../openshift/intro-openshift.md) is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
The following tables describe how to configure a collection of NSG allow rules.
# [Consumption only environment](#tab/consumption-only)
+> [!NOTE]
+> When using Consumption only environments, all [outbound ports required by Azure Kubernetes Service](/azure/aks/outbound-rules-control-egress#required-outbound-network-rules-and-fqdns-for-aks-clusters) are also required for your container app.
+ | Protocol | Source | Source ports | Destination | Destination ports | Description | |--|--|--|--|--|--| | TCP | Your container app's subnet<sup>1</sup> | \* | Your Container Registry | Your container registry's port | This is required to communicate with your container registry. For example, when using ACR, you need `AzureContainerRegistry` and `AzureActiveDirectory` for the destination, and the port will be your container registry's port unless using private endpoints.<sup>2</sup> |
container-apps Ingress How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-how-to.md
Disable ingress for your container app by omitting the `ingress` configuration p
::: zone-end
-## <a name="use-additional-tcp-ports"></a>Use additional TCP ports (preview)
+## <a name="use-additional-tcp-ports"></a>Use additional TCP ports
You can expose additional TCP ports from your application. To learn more, see the [ingress concept article](ingress-overview.md#additional-tcp-ports).
-> [Note]
-> To use this preview feature, you must have the container apps CLI extension. Run `az extension add -n containerapp` in order to install the latest version of the container apps CLI extension.
+> [!NOTE]
+> To use this feature, you must have the container apps CLI extension. Run `az extension add -n containerapp` in order to install the latest version of the container apps CLI extension.
::: zone pivot="azure-cli"
container-apps Ingress Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-overview.md
With TCP ingress enabled, your container app:
- Is accessible to other container apps in the same environment via its name (defined by the `name` property in the Container Apps resource) and exposed port number. - Is accessible externally via its fully qualified domain name (FQDN) and exposed port number if the ingress is set to "external".
-## <a name="additional-tcp-ports"></a>Additional TCP ports (preview)
+## <a name="additional-tcp-ports"></a>Additional TCP ports
-In addition to the main HTTP/TCP port for your container apps, you might expose additional TCP ports to enable applications that accept TCP connections on multiple ports. This feature is in preview.
+In addition to the main HTTP/TCP port for your container apps, you might expose additional TCP ports to enable applications that accept TCP connections on multiple ports.
> [!NOTE]
-> As the feature is in preview, make sure you are using the latest preview version of the container apps CLI extension.
+> This feature requires using the latest preview version of the container apps CLI extension.
The following apply to additional TCP ports: - Additional TCP ports can only be external if the app itself is set as external and the container app is using a custom VNet.
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 01/22/2024 Last updated : 01/30/2024 # Azure Policy built-in definitions for Azure Container Instances
container-registry Container Registry Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-streaming.md
description: "Artifact streaming is a feature in Azure Container Registry to enh
+ Last updated 12/14/2023- #customer intent: As a developer, I want artifact streaming capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time.
Follow the steps to create artifact streaming in the [Azure portal](https://port
[az-acr-artifact-streaming-operation-cancel]: /cli/azure/acr/artifact-streaming/operation#az-acr-artifact-streaming-operation-cancel [az-acr-artifact-streaming-operation-show]: /cli/azure/acr/artifact-streaming/operation#az-acr-artifact-streaming-operation-show [az-acr-artifact-streaming-update]: /cli/azure/acr/artifact-streaming#az-acr-artifact-streaming-update-
container-registry Container Registry Quickstart Task Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-quickstart-task-cli.md
az group create --name myResourceGroup --location eastus
## Create a container registry
-Create a container registry using the [az acr create][az-acr-create] command. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the following example, *myContainerRegistry008* is used. Update this to a unique value.
+Create a container registry using the [az acr create][az-acr-create] command. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the following example, *mycontainerregistry008* is used. Update this to a unique value.
```azurecli-interactive az acr create --resource-group myResourceGroup \
- --name myContainerRegistry008 --sku Basic
+ --name mycontainerregistry008 --sku Basic
``` This example creates a *Basic* registry, a cost-optimized option for developers learning about Azure Container Registry. For details on available service tiers, see [Container registry service tiers][container-registry-skus].
Run the [az acr build][az-acr-build] command, which builds the image and, after
```azurecli-interactive az acr build --image sample/hello-world:v1 \
- --registry myContainerRegistry008 \
+ --registry mycontainerregistry008 \
--file Dockerfile . ```
Now quickly run the image you built and pushed to your registry. Here you use [a
The following example uses $Registry to specify the endpoint of the registry where you run the command: ```azurecli-interactive
-az acr run --registry myContainerRegistry008 \
+az acr run --registry mycontainerregistry008 \
--cmd '$Registry/sample/hello-world:v1' ```
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
copilot Get Monitoring Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/get-monitoring-information.md
Title: Get information about Azure Monitor logs using Microsoft Copilot for Azure (preview) description: Learn about scenarios where Microsoft Copilot for Azure (preview) can provide information about Azure Monitor metrics and logs. Previously updated : 11/15/2023 Last updated : 01/30/2024
You can ask Microsoft Copilot for Azure (preview) questions about logs collected by [Azure Monitor](/azure/azure-monitor/).
-When asked about logs for a particular resource, Microsoft Copilot for Azure (preview) generates an example KQL expression and allows you to further explore the data in Azure Monitor logs. This capability is available for all customers using Log Analytics, and can be used in the context of a particular Azure Kubernetes Service cluster that is using Azure Monitor logs.
+When asked about logs for a particular resource, Microsoft Copilot for Azure (preview) generates an example KQL expression and allows you to further explore the data in Azure Monitor logs. This capability is available for all customers using Log Analytics, and can be used in the context of a particular Azure Kubernetes Service (AKS) cluster that uses Azure Monitor logs.
-When you ask Microsoft Copilot for Azure (preview) about logs, it automatically pulls context when possible, based on the current conversation or on the page you're viewing in the Azure portal. If the context isn't clear, you'll be prompted to specify the resource for which you want information.
+To get details about your container logs, start on the **Logs** page for your AKS cluster.
[!INCLUDE [scenario-note](includes/scenario-note.md)]
When you ask Microsoft Copilot for Azure (preview) about logs, it automatically
## Sample prompts
-Here are a few examples of the kinds of prompts you can use to get information about Azure Monitor logs. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information.
+Here are a few examples of the kinds of prompts you can use to get information about Azure Monitor logs for an AKS cluster. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information.
- "Are there any errors in container logs?"
- "Show logs for the last day of pod <provide_pod_name> under namespace <provide_namespace>"
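A prompt like the second one above typically maps to a Log Analytics query against the `ContainerLogV2` table. A minimal KQL sketch of what such a generated query might look like (the table and column names assume the Container insights ContainerLogV2 schema; the placeholders match the prompt and are not real values):

```kql
ContainerLogV2
| where TimeGenerated > ago(1d)
| where PodNamespace == "<provide_namespace>" and PodName == "<provide_pod_name>"
| project TimeGenerated, ContainerName, LogSource, LogMessage
| order by TimeGenerated desc
```

Copilot lets you open the generated expression directly in the Logs pane, where you can refine it further.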
Here are a few examples of the kinds of prompts you can use to get information a
## Next steps

- Explore [capabilities](capabilities.md) of Microsoft Copilot for Azure (preview).
-- Learn more about [Azure Monitor](/azure/azure-monitor/).
+- Learn more about [Azure Monitor](/azure/azure-monitor/) and [how to use it with AKS clusters](/azure/aks/monitor-aks).
- [Request access](https://aka.ms/MSCopilotforAzurePreview) to Microsoft Copilot for Azure (preview).
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
az cosmosdb sql container merge \
For **shared throughput databases**, start the merge by using `az cosmosdb sql database merge`.
-```azurecli
-az cosmosdb sql database merge \
- --account-name '<cosmos-account-name>'
- --name '<cosmos-database-name>'
- --resource-group '<resource-group-name>'
+```azurecli-interactive
+az cosmosdb sql database merge \
+  --resource-group "<resource-group-name>" \
+  --name "<database-name>" \
+  --account-name "<cosmos-db-account-name>"
```
-```http
-POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/sqlDatabases/{databaseName}/partitionMerge?api-version=2023-11-15-preview
-```
+```azurecli-interactive
+databaseId=$(az cosmosdb sql database show \
+  --resource-group "<resource-group-name>" \
+  --name "<database-name>" \
+  --account-name "<cosmos-db-account-name>" \
+  --query "id" \
+  --output "tsv"
+)
+
+endpoint="https://management.azure.com$databaseId/partitionMerge?api-version=2023-11-15-preview"
+
+az rest \
+  --method "POST" \
+  --url "$endpoint" \
+  --body "{}"
+```
#### [API for MongoDB](#tab/mongodb/azure-powershell)
az cosmosdb mongodb collection merge \
-For **shared-throughput databases**, start the merge by using [`az cosmosdb mongodb database merge`](/cli/azure/cosmosdb/mongodb/database?view=azure-cli-latest).
+For **shared-throughput databases**, start the merge by using [`az cosmosdb mongodb database merge`](/cli/azure/cosmosdb/mongodb/database).
-```azurecli
+```azurecli-interactive
az cosmosdb mongodb database merge \
    --account-name '<cosmos-account-name>' \
    --name '<cosmos-database-name>'
az cosmosdb mongodb database merge \
```
-```http
+```http-interactive
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/mongodbDatabases/{databaseName}/partitionMerge?api-version=2023-11-15-preview
```
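The `{...}` segments in the request URI must be filled in with real values before the call. As a sketch, the endpoint for a given database can be assembled in a shell and then passed to `az rest`; the subscription, resource group, account, and database names below are illustrative placeholders, not values from this article:

```shell
# Assemble the partitionMerge management endpoint from its parts.
# All identifier values here are illustrative placeholders.
subscriptionId="00000000-0000-0000-0000-000000000000"
resourceGroup="rg-example"
accountName="cosmos-example"
databaseName="db-example"

databaseId="/subscriptions/${subscriptionId}/resourceGroups/${resourceGroup}/providers/Microsoft.DocumentDB/databaseAccounts/${accountName}/mongodbDatabases/${databaseName}"
endpoint="https://management.azure.com${databaseId}/partitionMerge?api-version=2023-11-15-preview"

echo "$endpoint"
# The request itself would then be:
#   az rest --method "POST" --url "$endpoint" --body "{}"
```

Running the `az rest` call against this endpoint requires an authenticated Azure CLI session with permissions on the account.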
cosmos-db Change Feed Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-modes.md
Latest version mode is a persistent record of changes made to items from creates
## All versions and deletes change feed mode (preview)
-All versions and deletes mode (preview) is a persistent record of all changes to items from create, update, and delete operations. You get a record of each change to items in the order that it occurred, including intermediate changes to an item between change feed reads. For example, if an item is created and then updated before you read the change feed, both the create and the update versions of the item appear in the change feed. To read from the change feed in all versions and deletes mode, you must have [continuous backups](../continuous-backup-restore-introduction.md) configured for your Azure Cosmos DB account. Turning on continuous backups creates the all versions and deletes change feed. You can only read changes that occurred within the continuous backup period when using this change feed mode. This mode is only compatible with Azure Cosmos DB for NoSQL accounts. Learn more about how to [sign up for the preview](#get-started).
+All versions and deletes mode (preview) is a persistent record of all changes to items from create, update, and delete operations. You get a record of each change to items in the order that it occurred, including intermediate changes to an item between change feed reads. For example, if an item is created and then updated before you read the change feed, both the create and the update versions of the item appear in the change feed. To read from the change feed in all versions and deletes mode, you must have [continuous backups](../continuous-backup-restore-introduction.md) configured for your Azure Cosmos DB account. Turning on continuous backups creates the all versions and deletes change feed. You can only read changes that occurred within the continuous backup period when using this change feed mode. This mode is only compatible with Azure Cosmos DB for NoSQL accounts. Learn more about how to [sign up for the preview](?tabs=all-versions-and-deletes#get-started).
## Change feed use cases
cosmos-db Client Metrics Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/client-metrics-java.md
+ Last updated 12/14/2023
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
ms.devlang: csharp+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-go.md
ms.devlang: golang+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md
ms.devlang: java+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md
ms.devlang: javascript+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
ms.devlang: python+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Tutorial Spark Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-spark-connector.md
+ Last updated 01/17/2024 zone_pivot_groups: programming-languages-spark-all-minus-sql-r-csharp
cosmos-db Tutorial Springboot Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-springboot-azure-kubernetes-service.md
# Tutorial - Spring Boot Application with Azure Cosmos DB for NoSQL and Azure Kubernetes Service

[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
+> [!NOTE]
+> For Spring Boot applications, we recommend using Azure Spring Apps. However, you can still use Azure Kubernetes Service as a destination.
+ In this tutorial, you will set up and deploy a Spring Boot application that exposes REST APIs to perform CRUD operations on data in Azure Cosmos DB (API for NoSQL account). You will package the application as a Docker image, push it to Azure Container Registry, deploy it to Azure Kubernetes Service, and test the application.

## Pre-requisites
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
cosmos-db Reference Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-terraform.md
+ Last updated 01/02/2024
Terraform provides documentation for all supported Azure Cosmos DB for PostgreSQ
## Next steps * See [the latest documentation for Terraform's Azure provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs).
-* Learn to [use Azure CLI authentication in Terraform](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/azure_cli).
+* Learn to [use Azure CLI authentication in Terraform](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/azure_cli).
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Follow these steps to get started:
# Installs Node and the npm packages saved in your package.json file in the build
- - task: NodeTool@0
+ - task: UseNode@1
inputs:
- versionSpec: '14.x'
+ version: '18.x'
    displayName: 'Install Node.js'

  - task: Npm@1
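Put together, the updated install steps might look like the following YAML sketch; the `Npm@1` inputs shown here are illustrative assumptions, not part of the change above:

```yaml
steps:
  # Installs Node and the npm packages saved in your package.json file in the build
  - task: UseNode@1
    inputs:
      version: '18.x'
    displayName: 'Install Node.js'

  # Installs the npm packages (these inputs are illustrative)
  - task: Npm@1
    inputs:
      command: 'install'
      workingDir: '$(Build.Repository.LocalPath)'
    displayName: 'Install npm package'
```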
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-overview.md
For other scenarios than binary file copy, copy activity rerun starts from the b
While copying data from source to sink, in scenarios like data lake migration, you can also choose to preserve the metadata and ACLs along with data using copy activity. See [Preserve metadata](copy-activity-preserve-metadata.md) for details.
+## Add metadata tags to a file-based sink
+When the sink is Azure Storage based (Azure Data Lake Storage or Azure Blob Storage), you can opt to add metadata to the files. The metadata appears as part of the file properties as key-value pairs.
+For all types of file-based sinks, you can add metadata involving dynamic content using pipeline parameters, system variables, functions, and variables.
+In addition, for a binary file-based sink, you have the option to add the last modified datetime of the source file using the keyword $$LASTMODIFIED, as well as custom values, as metadata to the sink file.
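In copy activity JSON, these metadata entries sit in the sink's write settings as name/value pairs. A minimal sketch follows (the property layout follows the copy activity sink format; the metadata names and the `sourceName` pipeline parameter are illustrative assumptions):

```json
"sink": {
    "type": "BinarySink",
    "storeSettings": {
        "type": "AzureBlobStorageWriteSettings",
        "metadata": [
            {
                "name": "lastModified",
                "value": "$$LASTMODIFIED"
            },
            {
                "name": "sourceSystem",
                "value": "@pipeline().parameters.sourceName"
            }
        ]
    }
}
```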
+ ## Schema and data type mapping

See [Schema and data type mapping](copy-activity-schema-and-type-mapping.md) for information about how the Copy activity maps your source data to your sink.
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md
Previously updated : 12/13/2023 Last updated : 01/29/2024
All the linked service types are supported for parameterization.
- Generic HTTP
- Generic REST
- Google AdWords
+- Google BigQuery
- Informix
+- MariaDB
- Microsoft Access
- MySQL
- OData
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 01/22/2024 Last updated : 01/30/2024 # Azure Policy built-in definitions for Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
Unsupported resources include:
* Azure API Management in deployment modes other than the supported modes.
* PaaS services (multi-tenant) including Azure App Service Environment for Power Apps.
* Protected resources that include public IPs created from public IP address prefix.
+* NAT Gateway.
[!INCLUDE [ddos-waf-recommendation](../../includes/ddos-waf-recommendation.md)]
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 01/22/2024 Last updated : 01/30/2024
defender-for-cloud Concept Gcp Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-gcp-connector.md
Title: Defender for Cloud's GCP connector description: Learn how the GCP connector works on Microsoft Defender for Cloud.- Last updated 06/29/2023
The GCP connector allows for continuous monitoring of Google Cloud resources for
The authentication process between Microsoft Defender for Cloud and GCP is a federated authentication process.
-When you onboard to Defender for Cloud, the GCloud template is used to create the following resources as part of the authentication process:
+When you onboard to Defender for Cloud, the GCloud template is used to create the following resources as part of the authentication process:
- Workload identity pool and providers
From here, you can decide which resources you want to protect based on the secur
### Configure access
-Once you've selected the plans, you want to enable and the resources you want to protect you have to configure access between Defender for Cloud and your GCP project.
+Once you've selected the plans you want to enable and the resources you want to protect, you have to configure access between Defender for Cloud and your GCP project.
In this step, you can find the GCloud script that needs to be run on the GCP project that is going to be onboarded. The GCloud script is generated based on the plans you selected to onboard.
From here, you can decide which resources you want to protect based on the secur
### Configure access
-Once you've selected the plans, you want to enable and the resources you want to protect you have to configure access between Defender for Cloud and your GCP project.
+Once you've selected the plans you want to enable and the resources you want to protect, you have to configure access between Defender for Cloud and your GCP project.
When you onboard an organization, there's a section that includes management project details. Similar to other GCP projects, the organization is also considered a project and is utilized by Defender for Cloud to create all of the required resources needed to connect the organization to Defender for Cloud. In the management project details section, you have the choice of:

- Dedicating a management project for Defender for Cloud to include in the GCloud script.
- Providing the details of an already existing project to be used as the management project with Defender for Cloud.
-You need to decide what is your best option for your organization's architecture. We recommend creating a dedicated project for Defender for Cloud.
+You need to decide what is your best option for your organization's architecture. We recommend creating a dedicated project for Defender for Cloud.
The GCloud script is generated based on the plans you selected to onboard. The script creates all of the required resources on your GCP environment so that Defender for Cloud can operate and provide the following security benefits:

- Workload identity pool
- Workload identity provider for each plan
- Custom role to grant Defender for Cloud access to discover and get the project under the onboarded organization
- A service account for each plan
- A service account for the autoprovisioning service
- Organization level policy bindings for each service account
- API enablement(s) at the management project level.
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
When you enable the agentless discovery for Kubernetes extension, the following
Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature).

- **Discover**: Using the system assigned identity, Defender for Cloud performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS.
-- **Bind**: Upon discovery of an AKS cluster, Defender for Cloud performs an AKS bind operation between the created identity and the Kubernetes role *Microsoft.Security/pricings/microsoft-defender-operator*. The role is visible via API and gives Defender for Cloud data plane read permission inside the cluster.
+- **Bind**: Upon discovery of an AKS cluster, Defender for Cloud performs an AKS bind operation by creating a `ClusterRoleBinding` between the created identity and the Kubernetes `ClusterRole` *aks:trustedaccessrole:defender-containers:microsoft-defender-operator*. The `ClusterRole` is visible via API and gives Defender for Cloud data plane read permission inside the cluster.
## [**On-premises / IaaS (Arc)**](#tab/defender-for-container-arch-arc)
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
To complete this quickstart, you need:
|--|--|
| Release state: | General Availability. |
| Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing). |
-| Required permissions: | **Account Administrator** with permissions to sign in to the Azure portal. <br> **Contributor** to create a connector on the Azure subscription. <br> **Project Collection Administrator** on the Azure DevOps Organization. <br> **Basic or Basic + Test Plans Access Level** in Azure DevOps. <br> **Third-party application access via OAuth**, which must be set to `On` on the Azure DevOps Organization. [Learn more about OAuth and how to enable it in your organizations](/azure/devops/organizations/accounts/change-application-access-policies).|
+| Required permissions: | **Account Administrator** with permissions to sign in to the Azure portal. <br> **Contributor** to create a connector on the Azure subscription. <br> **Project Collection Administrator** on the Azure DevOps Organization. <br> **Basic or Basic + Test Plans Access Level** on the Azure DevOps Organization. <br> _Please ensure you have BOTH Project Collection Administrator permissions and Basic Access Level for all Azure DevOps organizations you wish to onboard. Stakeholder Access Level is not sufficient._ <br> **Third-party application access via OAuth**, which must be set to `On` on the Azure DevOps Organization. [Learn more about OAuth and how to enable it in your organizations](/azure/devops/organizations/accounts/change-application-access-policies).|
| Regions and availability: | Refer to the [support and prerequisites](devops-support.md) section for region support and feature availability. |
| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
defender-for-cloud Subassessment Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/subassessment-rest-api.md
Azure Resource Graph (ARG) provides a REST API that can be used to programmatically access vulnerability assessment results for both Azure registry and runtime vulnerabilities recommendations. Learn more about [ARG references and query examples](/azure/governance/resource-graph/overview).
-Azure and AWS container registry vulnerabilities sub-assessments are published to ARG as part of the security resources. Learn more about [security sub-assessments](/azure/governance/resource-graph/samples/samples-by-category?tabs=azure-cli#list-container-registry-vulnerability-assessment-results).
+Azure, AWS, and GCP container registry vulnerabilities sub-assessments are published to ARG as part of the security resources. Learn more about [security sub-assessments](/azure/governance/resource-graph/samples/samples-by-category?tabs=azure-cli#list-container-registry-vulnerability-assessment-results).
## ARG query examples

To pull specific sub assessments, you need the assessment key.
-* For Azure container vulnerability assessment powered by MDVM the key is `c0b7cfc6-3172-465a-b378-53c7ff2cc0d5`.
-* For AWS container vulnerability assessment powered by MDVM the key is `c27441ae-775c-45be-8ffa-655de37362ce`.
+* For Azure container vulnerability assessment powered by MDVM, the key is `c0b7cfc6-3172-465a-b378-53c7ff2cc0d5`.
+* For AWS container vulnerability assessment powered by MDVM, the key is `c27441ae-775c-45be-8ffa-655de37362ce`.
+* For GCP container vulnerability assessment powered by MDVM, the key is `5cc3a2c1-8397-456f-8792-fe9d0d4c9145`.
The following is a generic security sub assessment query that can be used as a starting point for building your own queries. This query pulls the first sub assessment generated in the last hour.

```kql
securityresources
] ```
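To scope the same pattern to GCP container registry findings, the assessment key listed above can be used as a filter. A sketch of one way to do this (the `securityresources` table and `type` filter follow the generic query; filtering with `contains` on the key in the resource ID is a simplification):

```kql
securityresources
| where type =~ "microsoft.security/assessments/subassessments"
| where id contains "5cc3a2c1-8397-456f-8792-fe9d0d4c9145"
| project id, name, description = properties.description, status = properties.status
| take 10
```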
+### Query result - GCP sub-assessment
+```json
+[
+ {
+ "id": "/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/microsoft.security/securityconnectors/{SecurityConnectorName}/securityentitydata/gar-gcp-repository-{RepositoryName}-{Region}/providers/Microsoft.Security/assessments/5cc3a2c1-8397-456f-8792-fe9d0d4c9145/subassessments/{SubAssessmentId}",
+ "name": "{SubAssessmentId}",
+ "type": "microsoft.security/assessments/subassessments",
+ "tenantId": "{TenantId}",
+ "kind": "",
+ "location": "global",
+ "resourceGroup": "{ResourceGroup}",
+ "subscriptionId": "{SubscriptionId}",
+ "managedBy": "",
+ "sku": null,
+ "plan": null,
+ "properties": {
+ "description": "This vulnerability affects the following vendors: Alpine, Debian, Libtiff, Suse, Ubuntu. To view more details about this vulnerability please visit the vendor website.",
+ "resourceDetails": {
+ "id": "us-central1-docker.pkg.dev/detection-stg-manual-tests-2/hital/nginx@sha256:09e210fe1e7f54647344d278a8d0dee8a4f59f275b72280e8b5a7c18c560057f",
+ "source": "Gcp",
+ "resourceType": "repository",
+ "nativeCloudUniqueIdentifier": "projects/detection-stg-manual-tests-2/locations/us-central1/repositories/hital/dockerImages/nginx@sha256:09e210fe1e7f54647344d278a8d0dee8a4f59f275b72280e8b5a7c18c560057f",
+ "resourceProvider": "gar",
+ "resourceName": "detection-stg-manual-tests-2/hital/nginx",
+ "hierarchyId": "788875449976",
+ "connectorId": "40139bd8-5bae-e3e0-c640-2a45cdcd2d0c",
+ "region": "us-central1"
+ },
+ "displayName": "CVE-2017-11613",
+ "additionalData": {
+ "assessedResourceType": "GcpContainerRegistryVulnerability",
+ "vulnerabilityDetails": {
+ "severity": "Low",
+ "lastModifiedDate": "2023-12-09T00:00:00.0000000Z",
+ "exploitabilityAssessment": {
+ "exploitStepsPublished": false,
+ "exploitStepsVerified": false,
+ "exploitUris": [],
+ "isInExploitKit": false,
+ "types": [
+ "PrivilegeEscalation"
+ ]
+ },
+ "publishedDate": "2017-07-26T00:00:00.0000000Z",
+ "workarounds": [],
+ "references": [
+ {
+ "title": "CVE-2017-11613",
+ "link": "https://nvd.nist.gov/vuln/detail/CVE-2017-11613"
+ },
+ {
+ "title": "129463",
+ "link": "https://exchange.xforce.ibmcloud.com/vulnerabilities/129463"
+ },
+ {
+ "title": "CVE-2017-11613_oval:com.ubuntu.trusty:def:36061000000",
+ "link": "https://security-metadata.canonical.com/oval/com.ubuntu.trusty.usn.oval.xml.bz2"
+ },
+ {
+ "title": "CVE-2017-11613_oval:org.debian:def:85994619016140765823174295608399452222",
+ "link": "https://www.debian.org/security/oval/oval-definitions-stretch.xml"
+ },
+ {
+ "title": "oval:org.opensuse.security:def:201711613",
+ "link": "https://ftp.suse.com/pub/projects/security/oval/suse.linux.enterprise.server.15.xml.gz"
+ },
+ {
+ "title": "CVE-2017-11613-cpe:2.3:a:alpine:tiff:*:*:*:*:*:alpine_3.9:*:*-3.9",
+ "link": "https://security.alpinelinux.org/vuln/CVE-2017-11613"
+ }
+ ],
+ "weaknesses": {
+ "cwe": [
+ {
+ "id": "CWE-20"
+ }
+ ]
+ },
+ "cvss": {
+ "2.0": null,
+ "3.0": {
+ "cvssVectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:L/E:U/RL:U/RC:R",
+ "base": 3.3
+ }
+ },
+ "cveId": "CVE-2017-11613",
+ "cpe": {
+ "version": "*",
+ "language": "*",
+ "vendor": "debian",
+ "softwareEdition": "*",
+ "targetSoftware": "debian_9",
+ "targetHardware": "*",
+ "product": "tiff",
+ "edition": "*",
+ "update": "*",
+ "other": "*",
+ "part": "Applications",
+ "uri": "cpe:2.3:a:debian:tiff:*:*:*:*:*:debian_9:*:*"
+ }
+ },
+ "cvssV30Score": 3.3,
+ "artifactDetails": {
+ "lastPushedToRegistryUTC": "2023-12-11T08:33:13.0000000Z",
+ "repositoryName": "detection-stg-manual-tests-2/hital/nginx",
+ "registryHost": "us-central1-docker.pkg.dev",
+ "artifactType": "ContainerImage",
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "digest": "sha256:09e210fe1e7f54647344d278a8d0dee8a4f59f275b72280e8b5a7c18c560057f",
+ "tags": [
+ "1.12"
+ ]
+ },
+ "softwareDetails": {
+ "version": "4.0.8-2+deb9u2",
+ "language": "",
+ "fixedVersion": "4.0.8-2+deb9u4",
+ "vendor": "debian",
+ "category": "OS",
+ "osDetails": {
+ "osPlatform": "linux",
+ "osVersion": "debian_9"
+ },
+ "packageName": "tiff",
+ "fixReference": {
+ "description": "DSA-4349-1: tiff security update 2018 November 30",
+ "id": "DSA-4349-1",
+ "releaseDate": "2018-11-30T22:41:54.0000000Z",
+ "url": "https://security-tracker.debian.org/tracker/DSA-4349-1"
+ },
+ "fixStatus": "FixAvailable",
+ "evidence": [
+ "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^tiff:.* -e .*:tiff: | cut -f 1 -d ':' | xargs dpkg-query -s",
+ "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^tiff:.* -e .*:tiff: | cut -f 1 -d ':' | xargs dpkg-query -s"
+ ]
+ }
+ },
+ "timeGenerated": "2023-12-11T10:25:43.8751687Z",
+ "remediation": "Create new image with updated package tiff with version 4.0.8-2+deb9u4 or higher.",
+ "id": "CVE-2017-11613",
+ "status": {
+ "severity": "Low",
+ "code": "Unhealthy"
+ }
+ },
+ "tags": null,
+ "identity": null,
+ "zones": null,
+ "extendedLocation": null,
+ "assessmentKey": "5cc3a2c1-8397-456f-8792-fe9d0d4c9145",
+ "timeGenerated": "2023-12-11T10:25:43.8751687Z"
+ }
+]
+```
+ ## Definitions | Name | Description |
Other context fields for Azure container registry vulnerability assessment
| **Name** | **Type** | **Description** |
| -- | -- | -- |
-| assessedResourceType | string: <br> AzureContainerRegistryVulnerability<br> AwsContainerRegistryVulnerability | Subassessment resource type |
+| assessedResourceType | string: <br> AzureContainerRegistryVulnerability<br> AwsContainerRegistryVulnerability <br> GcpContainerRegistryVulnerability | Subassessment resource type |
| cvssV30Score | Numeric | CVSS V3 Score |
| vulnerabilityDetails | VulnerabilityDetails | |
| artifactDetails | ArtifactDetails | |
Details of the Azure resource that was assessed
| ID | string | Azure resource ID of the assessed resource |
| source | string: Azure | The platform where the assessed resource resides |
-### ResourceDetails - AWS
+### ResourceDetails - AWS / GCP
-Details of the AWS resource that was assessed
+Details of the AWS/GCP resource that was assessed
| **Name** | **Type** | **Description** |
| | | |
| id | string | Azure resource ID of the assessed resource |
-| source | string: Aws | The platform where the assessed resource resides |
+| source | string: Aws/Gcp | The platform where the assessed resource resides |
| connectorId | string | Connector ID |
| region | string | Region |
| nativeCloudUniqueIdentifier | string | Native cloud resource ID of the assessed resource |
-| resourceProvider | string: ecr | The assessed resource provider |
+| resourceProvider | string: ecr/gar/gcr | The assessed resource provider |
| resourceType | string | The assessed resource type |
| resourceName | string | The assessed resource name |
-| hierarchyId | string | Account ID (Aws) |
+| hierarchyId | string | Account ID (Aws) / Project ID (Gcp) |
### SubAssessmentStatus
Programmatic code for the status of the assessment
| **Name** | **Type** | **Description**|
| | | |
| Healthy | string | The resource is healthy |
-| NotApplicable | string | Assessment for this resource did not happen |
+| NotApplicable | string | Assessment for this resource didn't happen |
| Unhealthy | string | The resource has a security issue that needs to be addressed |

### SecuritySubAssessment
Security subassessment on a resource
| properties.id | string | Vulnerability ID |
| properties.impact | string | Description of the impact of this subassessment |
| properties.remediation | string | Information on how to remediate this subassessment |
-| properties.resourceDetails | ResourceDetails: <br> [Azure Resource Details](/azure/defender-for-cloud/subassessment-rest-api#resourcedetailsazure) <br> [AWS Resource Details](/azure/defender-for-cloud/subassessment-rest-api#resourcedetailsaws) | Details of the resource that was assessed |
+| properties.resourceDetails | ResourceDetails: <br> [Azure Resource Details](/azure/defender-for-cloud/subassessment-rest-api#resourcedetailsazure) <br> [AWS/GCP Resource Details](/azure/defender-for-cloud/subassessment-rest-api#resourcedetailsaws--gcp) | Details of the resource that was assessed |
| properties.status | [SubAssessmentStatus](/azure/defender-for-cloud/subassessment-rest-api#subassessmentstatus) | Status of the subassessment |
| properties.timeGenerated | string | The date and time the subassessment was generated |
| type | string | Resource type |
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Previously updated : 06/18/2023 Last updated : 01/24/2024 # Microsoft Defender for Cloud troubleshooting guide
Last updated 06/18/2023
This guide is for IT professionals, information security analysts, and cloud administrators whose organizations need to troubleshoot problems related to Microsoft Defender for Cloud.

> [!TIP]
-> When you're facing a problem or need advice from our support team, the **Diagnose and solve problems** section of the Azure portal is good place to look for solutions.
+> When you're facing a problem or need advice from our support team, the **Diagnose and solve problems** section of the Azure portal is a good place to look for solutions.
>
-> :::image type="content" source="media/release-notes/solve-problems.png" alt-text="Screenshot of the Azure portal that shows the page for diagnosing and solving problems in Defender for Cloud.":::
+> :::image type="content" source="media/release-notes/solve-problems.png" alt-text="Screenshot of the Azure portal that shows the page for diagnosing and solving problems in Defender for Cloud." lightbox="media/release-notes/solve-problems.png":::
## Use the audit log to investigate problems
Just like Azure Monitor, Defender for Cloud uses the Log Analytics agent to coll
Open the services management console (*services.msc*) to make sure that the Log Analytics agent service is running. To see which version of the agent you have, open Task Manager. On the **Processes** tab, locate the Log Analytics agent service, right-click it, and then select **Properties**. On the **Details** tab, look for the file version.

### Check installation scenarios for the Log Analytics agent
If you experience problems with loading the workload protection dashboard, make
If you can't onboard your Azure DevOps organization, try the following troubleshooting tips:
+- Make sure you're using a non-preview version of the [Azure portal](https://portal.azure.com); the authorize step doesn't work in the Azure preview portal.
+ - It's important to know which account you're signed in to when you authorize the access, because that's the account that the system uses for onboarding. Your account can be associated with the same email address but also with different tenants. Make sure that you select the right account/tenant combination. If you need to change the combination:

 1. On your [Azure DevOps profile page](https://app.vssps.visualstudio.com/profile/view), use the dropdown menu to select another account.
- :::image type="content" source="./media/troubleshooting-guide/authorize-select-tenant.png" alt-text="Screenshot of the Azure DevOps profile page that's used to select an account.":::
+ :::image type="content" source="./media/troubleshooting-guide/authorize-select-tenant.png" alt-text="Screenshot of the Azure DevOps profile page that's used to select an account." lightbox="media/troubleshooting-guide/authorize-select-tenant.png":::
1. After you select the correct account/tenant combination, go to **Environment settings** in Defender for Cloud and edit your Azure DevOps connector. Reauthorize the connector to update it with the correct account/tenant combination. You should then see the correct list of organizations on the dropdown menu.
You can also find troubleshooting information for Defender for Cloud at the [Def
If you need more assistance, you can open a new support request on the Azure portal. On the **Help + support** page, select **Create a support request**. ## See also
dev-box How To Configure Stop Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-stop-schedule.md
description: Learn how to configure an auto-stop schedule to automatically shut down dev boxes in a pool at a specified time and save on costs. + Last updated 01/10/2024
dev-box Monitor Dev Box Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/monitor-dev-box-reference.md
+
+ Title: Monitoring Microsoft Dev Box data reference
+
+description: Important reference material needed when you monitor Dev Box. Schema reference for dev center diagnostic logs. Review the included Azure Storage and Azure Monitor Logs properties.
++++++ Last updated : 01/30/2023++
+# Monitoring Microsoft Dev Box data reference
+
+This article provides a reference for log and metric data collected for a Microsoft Dev Box dev center. You can use the collected data to analyze the performance and availability of resources within your dev center. For details about how to collect and analyze monitoring data for your dev center, see [Monitoring Microsoft Dev Box](monitor-dev-box.md).
+
+## Resource logs
+
+The following table lists the properties of resource logs in a Microsoft Dev Box dev center. The resource logs are collected into Azure Monitor Logs or Azure Storage. In Azure Monitor, logs are collected in the **DevCenterDiagnosticLogs** table under the resource provider name of `MICROSOFT.DEVCENTER`.
+
+| Azure Storage field or property | Azure Monitor Logs property | Description |
+| | | |
+| **time** | **TimeGenerated** | The date and time (UTC) when the operation occurred. |
+| **resourceId** | **ResourceId** | The dev center resource for which logs are enabled. |
+| **operationName** | **OperationName** | Name of the operation. If the event represents an Azure role-based access control (RBAC) operation, specify the Azure RBAC operation name (for example, `Microsoft.DevCenter/projects/users/devboxes/write`). This name is typically modeled in the form of an Azure Resource Manager operation, even if it's not a documented Resource Manager operation: (`Microsoft.<providerName>/<resourceType>/<subtype>/<Write/Read/Delete/Action>`). |
+| **identity** | **CallerIdentity** | The OID of the caller of the event. |
+| **TargetResourceId** | **ResourceId** | The subresource that pertains to the request. Depending on the operation performed, this value might point to a `devbox` or `environment`. |
+| **resultSignature** | **ResponseCode** | The HTTP status code returned for the operation. |
+| **resultType** | **OperationResult** | Indicates whether the operation failed or succeeded. |
+| **correlationId** | **CorrelationId** | The unique correlation ID for the operation that can be shared with the app team to support further investigation. |
+
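+As an example, a single Azure Storage record built from the fields above might look like the following sketch. All identifiers and values here are fabricated for illustration; real records can contain additional fields.
+
+```json
+{
+  "time": "2024-01-30T14:25:10.133Z",
+  "resourceId": "/SUBSCRIPTIONS/AAAA0000-0000-0000-0000-000000000000/RESOURCEGROUPS/RG-DEV/PROVIDERS/MICROSOFT.DEVCENTER/DEVCENTERS/CONTOSO-DEVCENTER",
+  "operationName": "Microsoft.DevCenter/projects/users/devboxes/write",
+  "identity": "bbbb1111-1111-1111-1111-111111111111",
+  "TargetResourceId": "/subscriptions/aaaa0000-0000-0000-0000-000000000000/resourceGroups/rg-dev/providers/Microsoft.DevCenter/projects/contoso-project/users/bbbb1111-1111-1111-1111-111111111111/devboxes/my-devbox",
+  "resultSignature": "200",
+  "resultType": "Succeeded",
+  "correlationId": "cccc2222-2222-2222-2222-222222222222"
+}
+```
+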
+For a list of all Azure Monitor log categories and links to associated schemas, see [Common and service-specific schemas for Azure resource logs](../azure-monitor/essentials/resource-logs-schema.md).
+
+## Azure Monitor Logs tables
+
+A dev center uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables that a dev center uses, see the [Azure Monitor Logs table reference organized by resource type](/azure/azure-monitor/reference/tables/tables-resourcetype#dev-centers).
+
+## Related content
+
+- [Monitor Dev Box](monitor-dev-box.md)
+- [Monitor Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md)
dev-box Monitor Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/monitor-dev-box.md
+
+ Title: Monitoring Microsoft Dev Box
+
+description: Start here to learn how to monitor Dev Box. Learn how to use Azure diagnostic logs to see an audit history for your dev center.
++++++ Last updated : 01/30/2023++
+# Monitoring Microsoft Dev Box
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by Microsoft Dev Box. Microsoft Dev Box uses [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+
+## Monitoring data
+
+Microsoft Dev Box collects the same kinds of monitoring data as other Azure resources, as described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+See [Monitoring *Dev Box* data reference](monitor-dev-box-reference.md) for detailed information on the metrics and logs created by Dev Box.
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Dev Box* are listed in [Microsoft Dev Box monitoring data reference](monitor-dev-box-reference.md#resource-logs).
+
+### Configure Azure diagnostic logs for a dev center
+
+With Azure diagnostic logs for DevCenter, you can view audit logs for dataplane operations in your dev center. These logs can be routed to any of the following destinations:
+
+* Azure Storage account
+* Log Analytics workspace
+
+This feature is available on all dev centers.
+
+Diagnostic logs allow you to export basic usage information from your dev center to different kinds of sources so that you can consume it in a customized way. The dataplane audit logs expose information about CRUD operations for dev boxes within your dev center, including, for example, start and stop commands executed on dev boxes. Some sample ways you can choose to export this data:
+
+* Export data to blob storage, and export it to CSV.
+* Export data to Azure Monitor Logs, and view and query the data in your own Log Analytics workspace.
+
+To learn more about the different types of logs available for dev centers, see [Monitoring Microsoft Dev Box data reference](monitor-dev-box-reference.md).
+
+### Enable logging with the Azure portal
+
+Follow these steps to enable logging for your Azure DevCenter resource:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the Azure portal, go to **All resources** > **your-devcenter**.
+
+3. In the **Monitoring** section, select **Diagnostics settings**.
+
+4. On the **Diagnostic settings** page, select **Add diagnostic setting**.
++
+#### Enable logging with Azure Storage
+
+To use a storage account to store the logs, follow these steps:
+
+ >[!NOTE]
+ >A storage account in the same region as your dev center is required to complete these steps. For more information, see [Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json).
+
+1. For **Diagnostic setting name**, enter a name for your diagnostic log settings.
+
+2. Select **Archive to a storage account**, then select **Dataplane audit logs**.
+
+3. For **Retention (days)**, choose the number of retention days. A retention of zero days stores the logs indefinitely.
+
+4. Select the subscription and storage account for the logs.
+
+5. Select **Save**.
+
+#### Send to Log Analytics
+
+To use Log Analytics for the logs, follow these steps:
+
+>[!NOTE]
+>A Log Analytics workspace is required to complete these steps. For more information, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md).
+
+1. For **Diagnostic setting name**, enter a name for your diagnostic log settings.
+
+2. Select **Send to Log Analytics**, then select **Dataplane audit logs**.
+
+3. Select the subscription and Log Analytics workspace for the logs.
+
+4. Select **Save**.
+
+### Enable logging with PowerShell
+
+The following examples show how to enable diagnostic logs via Azure PowerShell cmdlets.
++
+#### Enable diagnostic logs in a storage account
+
+1. Sign in to Azure PowerShell:
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
+
+2. To enable Diagnostic Logs in a storage account, enter these commands. Replace the variables with your values:
+
+ ```azurepowershell-interactive
+ $rg = "<your-resource-group-name>"
+ $devcenterid = "<your-devcenter-ARM-resource-id>"
+ $storageacctid = "<your-storage-account-resource-id>"
+ $diagname = "<your-diagnostic-setting-name>"
+
+ $log = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category DataplaneAuditEvent -RetentionPolicyDay 7 -RetentionPolicyEnabled $true
+
+ New-AzDiagnosticSetting -Name $diagname -ResourceId $devcenterid -StorageAccountId $storageacctid -Log $log
+ ```
+
+#### Enable diagnostics logs for Log Analytics workspace
+
+1. Sign in to Azure PowerShell:
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
+2. To enable Diagnostic Logs for a Log Analytics workspace, enter these commands. Replace the variables with your values:
+
+ ```azurepowershell-interactive
+ $rg = "<your-resource-group-name>"
+ $devcenterid = "<your-devcenter-ARM-resource-id>"
+ $workspaceid = "<your-log-analytics-workspace-resource-id>"
+ $diagname = "<your-diagnostic-setting-name>"
+
+ $log = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category DataplaneAuditEvent -RetentionPolicyDay 7 -RetentionPolicyEnabled $true
+
+ New-AzDiagnosticSetting -Name $diagname -ResourceId $devcenterid -WorkspaceId $workspaceid -Log $log
+ ```
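+
+In either case, you can confirm that the setting was created. The following is a sketch that assumes the Az.Monitor module is installed and that the `$devcenterid` and `$diagname` variables from the preceding commands are still set:
+
+```azurepowershell-interactive
+# List the diagnostic setting that was just created on the dev center
+Get-AzDiagnosticSetting -ResourceId $devcenterid -Name $diagname
+```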
+
+## Analyzing Logs
+This section describes existing tables for DevCenter diagnostic logs and how to query them.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Common and service-specific schemas for Azure resource logs](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema).
+
+DevCenter stores data in the following tables.
+
+| Table | Description |
+|:|:|
+| DevCenterDiagnosticLogs | Table used to store dataplane request/response information on dev boxes or environments within the dev center. |
+| DevCenterResourceOperationLogs | Operation logs pertaining to DevCenter resources, including information around resource health status changes. |
+| DevCenterBillingEventLogs | Billing events related to DevCenter resources. This log contains information about the quantity and unit charged per meter. |
+
+## Sample Kusto Queries
+After enabling diagnostic settings on your dev center, you can view audit logs for the tables within a Log Analytics workspace.
+
+Here are some queries that you can enter into Log search to help you monitor your dev boxes.
+
+To query for all dataplane logs from DevCenter:
+
+```kusto
+DevCenterDiagnosticLogs
+```
+
+To query for a filtered list of dataplane logs, specific to a single dev box:
+
+```kusto
+DevCenterDiagnosticLogs
+| where TargetResourceId contains "<devbox-name>"
+```
+
+To generate a chart for dataplane logs, grouped by operation result status:
+
+```kusto
+DevCenterDiagnosticLogs
+| summarize count() by OperationResult
+| render piechart
+```
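+
+To chart dataplane activity per day over the past week (this query assumes only the built-in columns listed earlier in this article):
+
+```kusto
+DevCenterDiagnosticLogs
+| where TimeGenerated > ago(7d)
+| summarize count() by bin(TimeGenerated, 1d), OperationResult
+| render timechart
+```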
+
+These examples are just a small sample of the rich queries that can be performed in Monitor using the Kusto Query Language. For more information, see [samples for Kusto queries](/azure/data-explorer/kusto/query/samples?pivots=azuremonitor).
+
+## Related content
+
+- [Monitor Dev Box](monitor-dev-box.md)
+- [Azure Diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md)
+- [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md)
+- [Azure Log Analytics REST API](/rest/api/loganalytics)
dev-box Tutorial Configure Multiple Monitors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-configure-multiple-monitors.md
+
+ Title: 'Tutorial: Configure multiple monitors for your dev box'
+
+description: In this tutorial, you configure an RDP client to use multiple monitors when connecting to a dev box.
++++ Last updated : 01/30/2023+++
+# Tutorial: Use multiple monitors on a dev box
+
+In this tutorial, you configure a remote desktop client to use two or more monitors when you connect to your dev box.
+
+Using multiple monitors gives you more screen real estate to work with. You can spread your work across multiple screens, or use one screen for your development environment and another for documentation, email, or messaging.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure the remote desktop client for multiple monitors.
+
+## Prerequisites
+
+To complete this tutorial, you must [install the Remote desktop app](tutorial-connect-to-dev-box-with-remote-desktop-app.md#download-the-remote-desktop-client-for-windows) on your local machine.
+
+## Configure Remote Desktop to use multiple monitors
+
+When you connect to your cloud-hosted developer machine in Microsoft Dev Box by using a remote desktop app, you can take advantage of a multi-monitor setup. Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both support up to 16 monitors.
+
+Use the following steps to configure Remote Desktop to use multiple monitors.
+
+# [Windows](#tab/windows)
+
+1. Open the Remote Desktop app.
+
+ :::image type="content" source="./media/tutorial-configure-multiple-monitors/remote-desktop-app.png" alt-text="Screenshot of the Windows 11 start menu with Remote desktop showing and open highlighted.":::
+
+1. Right-click the dev box you want to configure, and then select **Settings**.
+
+1. On the settings pane, turn off **Use default settings**.
+
+ :::image type="content" source="media/tutorial-configure-multiple-monitors/turn-off-default-settings.png" alt-text="Screenshot showing the Use default settings slider.":::
+
+1. In **Display Settings**, in the **Display configuration** list, select the displays to use and configure the options:
+
+ | Value | Description | Options |
+ ||||
+ | All displays | Remote desktop uses all available displays. | - Use only a single display when in windowed mode. <br> - Fit the remote session to the window. |
+ | Single display | Remote desktop uses a single display. | - Start the session in full screen mode. <br> - Fit the remote session to the window. <br> - Update the resolution when a window is resized. |
+ | Select displays | Remote Desktop uses only the monitors you select. | - Maximize the session to the current displays. <br> - Use only a single display when in windowed mode. <br> - Fit the remote connection session to the window. |
+
+ :::image type="content" source="media/tutorial-configure-multiple-monitors/remote-desktop-select-display.png" alt-text="Screenshot showing the Remote Desktop display settings, highlighting the option to select the number of displays.":::
+
+1. Close the settings pane, and then select your dev box to begin the Remote Desktop session.
+
+# [Non-Windows](#tab/non-Windows)
+
+1. Open Remote Desktop.
+
+1. Select **PCs**.
+
+1. On the Connections menu, select **Edit PC**.
+
+1. Select **Display**.
+
+1. On the Display tab, select **Use all monitors**, and then select **Save**.
+
+ :::image type="content" source="media/tutorial-configure-multiple-monitors/remote-desktop-for-mac.png" alt-text="Screenshot showing the Edit PC dialog box with the display configuration options.":::
+
+1. Select your dev box to begin the Remote Desktop session.
+
+
+
+## Clean up resources
+
+Dev boxes incur costs whenever they're running. When you finish using your dev box, shut down or stop it to avoid incurring unnecessary costs.
+
+You can stop a dev box from the developer portal:
+
+1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
+
+1. For the dev box that you want to stop, select More options (**...**), and then select **Stop**.
+
+ :::image type="content" source="./media/tutorial-configure-multiple-monitors/stop-dev-box.png" alt-text="Screenshot of the menu command to stop a dev box.":::
+
+The dev box might take a few moments to stop.
+
+## Related content
+
+- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)
+- Learn how to [connect to a dev box through the browser](./quickstart-create-dev-box.md#connect-to-a-dev-box)
dev-box Tutorial Connect To Dev Box With Remote Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md
Title: 'Tutorial: Use a Remote Desktop client to connect to a dev box'
-description: In this tutorial, you download and use a remote desktop client to connect to a dev box in Microsoft Dev Box. Configure the RDP client for a multi-monitor setup.
+description: In this tutorial, you download and use a remote desktop client to connect to a dev box in Microsoft Dev Box.
Previously updated : 12/15/2023 Last updated : 01/30/2024 # Tutorial: Use a remote desktop client to connect to a dev box
-In this tutorial, you download and use a remote desktop client application to connect to a dev box in Microsoft Dev Box. Learn how to configure the application to take advantage of a multi-monitor setup.
+In this tutorial, you download and use a remote desktop client application to connect to a dev box.
Remote Desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a Remote Desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android.
-Alternately, you can also connect to your dev box through the browser from the Microsoft Dev Box developer portal.
+Alternatively, you can connect to your dev box through the browser from the Microsoft Dev Box developer portal.
In this tutorial, you learn how to: > [!div class="checklist"] > * Download a remote desktop client.
+> * Connect to a dev box by using a subscription URL.
> * Connect to an existing dev box.
-> * Configure the remote desktop client for multiple monitors.
## Prerequisites
-To complete this tutorial, you must first:
--- [Configure Microsoft Dev Box](./quickstart-configure-dev-box-service.md).-- [Create a dev box](./quickstart-create-dev-box.md#create-a-dev-box) on the [developer portal](https://aka.ms/devbox-portal).
+To complete this tutorial, you must have access to a dev box through the developer portal.
## Download the remote desktop client and connect to your dev box
To download and set up the Remote Desktop client for Windows:
:::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/connect-remote-desktop-client.png" alt-text="Screenshot that shows how to select your platform configuration for the Windows Remote Desktop client.":::
-1. After you select your platform configuration, click the platform configuration to start the download process for the Remote Desktop client.
+1. Select the platform configuration again to start the download process for the Remote Desktop client.
- :::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/download-windows-desktop.png" alt-text="Screenshot that shows how to click the platform configuration again to download the Windows Remote Desktop client.":::
+ :::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/download-windows-desktop.png" alt-text="Screenshot that shows how to select the platform configuration again to download the Windows Remote Desktop client.":::
1. After the Remote Desktop MSI file downloads to your computer, open the file and follow the prompts to install the Remote Desktop app.
To use a non-Windows Remote Desktop client to connect to your dev box:
1. Your dev box appears in the Remote Desktop client's **Workspaces** area. Double-click the dev box to connect. :::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/non-windows-rdp-connect-dev-box.png" alt-text="Screenshot of a dev box in a non-Windows Remote Desktop client Workspace." lightbox="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/non-windows-rdp-connect-dev-box.png":::-
-## Configure Remote Desktop to use multiple monitors
-
-When you connect to your cloud-hosted developer machine in Microsoft Dev Box, you can take advantage of a multi-monitor setup. Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both support up to 16 monitors.
-
-Use the following steps to configure Remote Desktop to use multiple monitors.
-
-# [Windows](#tab/windows)
-
-1. Open Remote Desktop.
-
-1. Right-click the dev box you want to configure, and then select **Settings**.
-
-1. On the settings pane, turn off **Use default settings**.
-
- :::image type="content" source="media/tutorial-connect-to-dev-box-with-remote-desktop-app/turn-off-default-settings.png" alt-text="Screenshot showing the Use default settings slider.":::
-
-1. In **Display Settings**, in the **Display configuration** list, select the displays to use and configure the options:
-
- | Value | Description | Options |
- ||||
- | All displays | Remote desktop uses all available displays. | - Use only a single display when in windowed mode. <br> - Fit the remote session to the window. |
- | Single display | Remote desktop uses a single display. | - Start the session in full screen mode. <br> - Fit the remote session to the window. <br> - Update the resolution on when a window is resized. |
- | Select displays | Remote Desktop uses only the monitors you select. | - Maximize the session to the current displays. <br> - Use only a single display when in windowed mode. <br> - Fit the remote connection session to the window. |
-
- :::image type="content" source="media/tutorial-connect-to-dev-box-with-remote-desktop-app/remote-desktop-select-display.png" alt-text="Screenshot showing the Remote Desktop display settings, highlighting the option to select the number of displays.":::
-
-1. Close the settings pane, and then select your dev box to begin the Remote Desktop session.
-
-# [Non-Windows](#tab/non-Windows)
-
-1. Open Remote Desktop.
-
-1. Select **PCs**.
-
-1. On the Connections menu, select **Edit PC**.
-
-1. Select **Display**.
-
-1. On the Display tab, select **Use all monitors**, and then select **Save**.
-
- :::image type="content" source="media/tutorial-connect-to-dev-box-with-remote-desktop-app/remote-desktop-for-mac.png" alt-text="Screenshot showing the Edit PC dialog box with the display configuration options.":::
-
-1. Select your dev box to begin the Remote Desktop session.
-
-
- ## Clean up resources Dev boxes incur costs whenever they're running. When you finish using your dev box, shut down or stop it to avoid incurring unnecessary costs.
The dev box might take a few moments to stop.
## Related content -- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)-- Learn how to [connect to a dev box through the browser](./quickstart-create-dev-box.md#connect-to-a-dev-box)
+- Learn how to [configure multiple monitors](./tutorial-configure-multiple-monitors.md) for your Remote Desktop client.
+- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)
education-hub Create Assignment Allocate Credit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/create-assignment-allocate-credit.md
Title: Create an assignment and allocate credit description: Explains how to create an assignment, allocate credit, and invite students to a course in the Azure Education Hub. -+ Last updated 06/30/2020
education-hub Set Up Course Classroom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/set-up-course-classroom.md
Title: Set up a course and create a classroom description: This quickstart explains how to set up a course and classroom in Azure Education Hub. -+ Last updated 06/30/2020
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
In this article, you learn how to manage users and their memberships in OSDU gro
- Generate the service principal access token that's needed to call the Entitlement APIs. See [How to generate auth token](how-to-generate-auth-token.md). - Keep all the parameter values handy. They're needed to run different user management requests via the Entitlements API.
-## Fetch OID
+## Fetch object-id
-The object ID (OID) is the Microsoft Entra user OID.
+The Azure object ID (OID) is the Microsoft Entra user OID.
1. Find the OID of the users first. If you're managing an application's access, you must find and use the application ID (or client ID) instead of the OID.
-1. Input the OID of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy instance.
+1. Input the OID of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy instance. You can't use the user's email ID as the parameter value; you must use the object ID.
:::image type="content" source="media/how-to-manage-users/azure-active-directory-object-id.png" alt-text="Screenshot that shows finding the object ID from Microsoft Entra ID.":::
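
If you use the Azure CLI, one way to look up a user's OID is sketched below. This is a standard `az ad user show` call; the user principal name is a placeholder that you replace with your own.

```bash
# Look up the object ID (OID) of a user by sign-in name.
# <user-principal-name> is a placeholder, for example someone@contoso.com.
az ad user show --id "<user-principal-name>" --query id --output tsv
```

For an application, use its application (client) ID directly, as noted above, rather than looking up an OID.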
The object ID (OID) is the Microsoft Entra user OID.
If you try to directly use your own access token for adding entitlements, it results in a 401 error. The `client-id` access token must be used to add the first set of users in the system. Those users (with admin access) can then manage more users with their own access token. 1. Use the `client-id` access token to do the following steps by using the commands outlined in the following sections: 1. Add the user to the `users@<data-partition-id>.<domain>` OSDU group.
- 2. Add the user to the `users.datalake.ops@<data-partition-id>.<domain>` OSDU group.
+ 2. Add the user to the `users.datalake.ops@<data-partition-id>.<domain>` OSDU group to give access to all the service groups.
+ 3. Add the user to the `users.data.root@<data-partition-id>.<domain>` OSDU group to give access to all the data groups.
1. The user becomes the admin of the data partition. The admin can then add or remove more users to the required entitlement groups: 1. Get the admin's auth token by using [Generate user access token](how-to-generate-auth-token.md#generate-the-user-auth-token) and by using the same `client-id` and `client-secret` values. 1. Get the OSDU group, such as `service.legal.editor@<data-partition-id>.<domain>`, to which you want to add more users by using the admin's access token. 1. Add more users to that OSDU group by using the admin's access token.
+
+To learn more about the OSDU bootstrap groups, see [Bootstrap groups structure](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/master/docs/bootstrap/bootstrap-groups-structure.md).
## Get the list of all available groups in a data partition
Run the following curl command in Azure Cloud Shell to get all the groups that a
1. The value to be sent for the parameter `email` is the OID of the user and not the user's email address. ```bash
- curl --location --request POST 'https://<adme-url>/api/entitlements/v2/groups/<group-name>@<data-partition-id>.dataservices.energy/members' \
+ curl --location --request POST 'https://<adme-url>/api/entitlements/v2/groups/<group-name>@<data-partition-id>.<domain>/members' \
--header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer <access_token>' \ --header 'Content-Type: application/json' \
Run the following curl command in Azure Cloud Shell to get all the groups that a
1. Run the following curl command in Azure Cloud Shell to get all the groups associated with the user. ```bash
- curl --location --request GET 'https://<adme-url>/api/entitlements/v2/members/<OBJECT_ID>/groups?type=none' \
+ curl --location --request GET 'https://<adme-url>/api/entitlements/v2/members/<object-id>/groups?type=none' \
--header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer <access_token>' ```
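
The call above returns JSON. As a quick local illustration of pulling the group emails out of a saved response, here's a dependency-free sketch; the response shape and values below are fabricated, and a real Entitlements response may include more fields:

```shell
# Fabricated sample of a saved Entitlements "groups" response
cat > groups.json <<'EOF'
{
  "groups": [
    {"name": "users", "email": "users@dp1.dataservices.energy"},
    {"name": "service.legal.editor", "email": "service.legal.editor@dp1.dataservices.energy"}
  ]
}
EOF

# List just the group email addresses, one per line
grep -o '"email": "[^"]*"' groups.json | sed 's/.*": "\(.*\)"/\1/'
```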
Run the following curl command in Azure Cloud Shell to get all the groups that a
1. *Do not* delete the OWNER of a group unless you have another OWNER who can manage users in that group. ```bash
- curl --location --request DELETE 'https://<adme-url>/api/entitlements/v2/members/<OBJECT_ID>' \
+ curl --location --request DELETE 'https://<adme-url>/api/entitlements/v2/members/<object-id>' \
--header 'data-partition-id: <data-partition-id>' \
--header 'Authorization: Bearer <access_token>'
```
event-grid Add Identity Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/add-identity-roles.md
Title: Add managed identity to a role on Azure Event Grid destination
description: This article describes how to add managed identity to Azure roles on destinations such as Azure Service Bus and Azure Event Hubs. Previously updated : 03/25/2021 Last updated : 01/31/2024 # Grant managed identity the access to Event Grid destination
Assign a system-assigned managed identity by using instructions from the followi
- [System topics](enable-identity-system-topics.md) ## Supported destinations and Azure roles
-After you enable identity for your event grid custom topic or domain, Azure automatically creates an identity in Microsoft Entra ID. Add this identity to appropriate Azure roles so that the custom topic or domain can forward events to supported destinations. For example, add the identity to the **Azure Event Hubs Data Sender** role for an Azure Event Hubs namespace so that the event grid custom topic can forward events to event hubs in that namespace.
+After you enable identity for your Event Grid custom topic or domain, Azure automatically creates an identity in Microsoft Entra ID. Add this identity to appropriate Azure roles so that the custom topic or domain can forward events to supported destinations. For example, add the identity to the **Azure Event Hubs Data Sender** role for an Azure Event Hubs namespace so that the Event Grid custom topic can forward events to event hubs in that namespace.
-Currently, Azure event grid supports custom topics or domains configured with a system-assigned managed identity to forward events to the following destinations. This table also gives you the roles that the identity should be in so that the custom topic can forward the events.
+Currently, Azure Event Grid supports custom topics or domains configured with a system-assigned managed identity to forward events to the following destinations. This table also gives you the roles that the identity should be in so that the custom topic can forward the events.
| Destination | Azure role | | -- | |
Currently, Azure event grid supports custom topics or domains configured with a
## Use the Azure portal You can use the Azure portal to assign the custom topic or domain identity to an appropriate role so that the custom topic or domain can forward events to the destination.
-The following example adds a managed identity for an event grid custom topic named **msitesttopic** to the **Azure Service Bus Data Sender** role for a Service Bus namespace that contains a queue or topic resource. When you add to the role at the namespace level, the event grid custom topic can forward events to all entities within the namespace.
+The following example adds a managed identity for an Event Grid custom topic named **msitesttopic** to the **Azure Service Bus Data Sender** role for a Service Bus namespace that contains a queue or topic resource. When you add to the role at the namespace level, the Event Grid custom topic can forward events to all entities within the namespace.
1. Go to your **Service Bus namespace** in the [Azure portal](https://portal.azure.com). 1. Select **Access Control** in the left pane.
The following example adds a managed identity for an event grid custom topic nam
The steps are similar for adding an identity to other roles mentioned in the table. ## Use the Azure CLI
-The example in this section shows you how to use the Azure CLI to add an identity to an Azure role. The sample commands are for event grid custom topics. The commands for event grid domains are similar.
+The example in this section shows you how to use the Azure CLI to add an identity to an Azure role. The sample commands are for Event Grid custom topics. The commands for Event Grid domains are similar.
### Get the principal ID for the custom topic's system identity First, get the principal ID of the custom topic's system-managed identity and assign the identity to appropriate roles.
az role assignment create --role "$role" --assignee "$topic_pid" --scope "$event
``` ### Create a role assignment for a Service Bus topic at various scopes
-The following CLI example shows how to add an event grid custom topic's identity to the **Azure Service Bus Data Sender** role at the namespace level or at the Service Bus topic level. If you create the role assignment at the namespace level, the event grid topic can forward events to all entities (Service Bus queues or topics) within that namespace. If you create a role assignment at the Service Bus queue or topic level, the event grid custom topic can forward events only to that specific Service Bus queue or topic.
+The following CLI example shows how to add an Event Grid custom topic's identity to the **Azure Service Bus Data Sender** role at the namespace level or at the Service Bus topic level. If you create the role assignment at the namespace level, the Event Grid topic can forward events to all entities (Service Bus queues or topics) within that namespace. If you create a role assignment at the Service Bus queue or topic level, the Event Grid custom topic can forward events only to that specific Service Bus queue or topic.
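As a sketch of the scope choice described above, the following composes the two possible `--scope` values and prints the resulting role-assignment commands for review. The `az` commands are real Azure CLI; the subscription, resource group, and namespace names are placeholders, and `echo` keeps this a dry run:

```shell
#!/usr/bin/env bash
# Placeholder names (<...>) are illustrative; substitute your own resources.
sub="<subscription-id>"
rg="<resource-group>"
ns="<service-bus-namespace>"
sb_topic="<service-bus-topic>"
role="Azure Service Bus Data Sender"
topic_pid="<principal-id>"   # from: az eventgrid topic show ... --query identity.principalId

# Namespace scope: the Event Grid topic can send to every queue/topic in the namespace.
ns_scope="/subscriptions/${sub}/resourceGroups/${rg}/providers/Microsoft.ServiceBus/namespaces/${ns}"

# Entity scope: the Event Grid topic can send only to this one Service Bus topic.
entity_scope="${ns_scope}/topics/${sb_topic}"

# echo prints the commands instead of running them, so you can inspect them first.
echo az role assignment create --role "$role" --assignee "$topic_pid" --scope "$ns_scope"
echo az role assignment create --role "$role" --assignee "$topic_pid" --scope "$entity_scope"
```

Pick the scope that matches how broadly the topic should be able to deliver events, then remove the `echo` to apply the assignment.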
```azurecli-interactive role="Azure Service Bus Data Sender"
event-grid Create Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-custom-topic.md
Title: Create an Azure Event Grid topic or a domain description: This article shows how to create an Event Grid topic or domain. Previously updated : 07/21/2022 Last updated : 01/31/2024
If you're new to Azure Event Grid, read through [Event Grid overview](overview.m
An Event Grid topic provides a user-defined endpoint that you post your events to. 1. Sign in to [Azure portal](https://portal.azure.com/).
-2. In the search bar at the top, type **Event Grid Topics**, and then select **Event Grid Topics** from the drop-down list. If you are create a domain, search for **Event Grid Domains**.
+2. In the search bar at the top, type **Event Grid Topics**, and then select **Event Grid Topics** from the drop-down list. To create a domain, search for **Event Grid Domains**.
- :::image type="content" source="./media/custom-event-quickstart-portal/select-topics.png" alt-text="Screenshot showing the Azure port search bar to search for Event Grid topics.":::
+ :::image type="content" source="./media/custom-event-quickstart-portal/select-topics.png" lightbox="./media/custom-event-quickstart-portal/select-topics.png" alt-text="Screenshot showing the Azure portal search bar to search for Event Grid topics.":::
3. On the **Event Grid Topics** or **Event Grid Domains** page, select **+ Create** on the toolbar. :::image type="content" source="./media/custom-event-quickstart-portal/create-topic-button.png" alt-text="Screenshot showing the Create Topic button on Event Grid topics page.":::
On the **Basics** page of **Create Topic** or **Create Event Grid Domain** wizar
1. Select your Azure **subscription**. 2. Select an existing resource group or select **Create new**, and enter a **name** for the **resource group**.
-3. Provide a unique **name** for the custom topic or domain. The name must be unique because it's represented by a DNS entry. Don't use the name shown in the image. Instead, create your own name - it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-".
+3. Provide a unique **name** for the custom topic or domain. The name must be unique because it's represented by a Domain Name System (DNS) entry. Don't use the name shown in the image. Instead, create your own name - it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-".
4. Select a **location** for the Event Grid topic or domain. 1. Select **Next: Networking** at the bottom of the page to switch to the **Networking** page.
On the **Basics** page of **Create Topic** or **Create Event Grid Domain** wizar
## Networking page On the **Networking** page of the **Create Topic** or **Create Event Grid Domain** wizard, follow these steps:
-1. If you want to allow clients to connect to the topic or domain endpoint via a public IP address, keep the **Public access** option selected.
+1. If you want to allow clients to connect to the topic or domain endpoint via a public IP address, keep the **Public access** option selected. You can restrict access to specific IP addresses or an IP address range.
:::image