Updates from: 02/01/2024 02:16:20
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
The default name of the **Change email** button in *selfAsserted.html* is **chan
[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)] +
+- B2C users need to have an authentication method specified for self-service password reset. Select the B2C user, then in the left menu under **Manage**, select **Authentication methods** and ensure **Authentication contact info** is set. B2C users created via a sign-up flow have this set by default. Users created via the Azure portal or the Graph API need to have this set for SSPR to work.
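+
+  A minimal sketch of setting this via Microsoft Graph (hypothetical user ID, token, and address; assumes an app with the `UserAuthenticationMethod.ReadWrite.All` permission):
+
+  ```bash
+  # Hypothetical example: register an email address as an authentication method for a B2C user
+  # so that self-service password reset can verify the account.
+  curl -X POST "https://graph.microsoft.com/v1.0/users/{user-id}/authentication/emailMethods" \
+    -H "Authorization: Bearer {access-token}" \
+    -H "Content-Type: application/json" \
+    -d '{ "emailAddress": "user@contoso.com" }'
+  ```
+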
++ ## Self-service password reset (recommended) The new password reset experience is now part of the sign-up or sign-in policy. When the user selects the **Forgot your password?** link, they are immediately sent to the Forgot Password experience. Your application no longer needs to handle the [AADB2C90118 error code](#password-reset-policy-legacy), and you don't need a separate policy for password reset.
ai-services Blob Storage Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/blob-storage-search.md
Title: Configure your blob storage container for image retrieval and video search
+ Title: Configure your blob storage container for image retrieval
description: Configure your Azure storage account to get started with the **Search photos image retrieval** experience in Vision Studio. #
-# Configure your blob storage for image retrieval and video search in Vision Studio
+# Configure your blob storage for image retrieval in Vision Studio
-To get started with the **Search photos image retrieval** scenario in Vision Studio, you need to select or create a new Azure storage account. Your storage account can be in any region, but creating it in the same region as your Vision resource is more efficient and reduces cost.
+To get started with the **Search photos with image retrieval** scenario in Vision Studio, you need to select or create a new Azure storage account. Your storage account can be in any region, but creating it in the same region as your Vision resource is more efficient and reduces cost.
> [!IMPORTANT]
-> You need to create your storage account on the same Azure subscription as the Vision resource you're using in the **Search photos image retrieval** scenario as shown below.
-
+> You need to create your storage account on the same Azure subscription as the Vision resource you're using in the **Search photos with image retrieval** scenario.
+>
+> :::image type="content" source="../media/storage-instructions/subscription.png" alt-text="Screenshot of resource selection.":::
## Create a new storage account
In the Allowed Methods field, select the `GET` checkbox to allow an authenticate
:::image type="content" source="../media/storage-instructions/cors-rule.png" alt-text="Screenshot of completed CORS screen.":::
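
If you prefer to script the CORS configuration instead of using the portal, a rough Azure CLI sketch follows; the storage account name and allowed origin are placeholders, so substitute the values shown in Vision Studio:

```bash
# Hypothetical example: add a CORS rule that allows GET requests on the Blob service.
az storage cors add \
  --services b \
  --methods GET \
  --origins "<vision-studio-origin>" \
  --allowed-headers "*" \
  --exposed-headers "*" \
  --max-age 200 \
  --account-name <storage-account>
```
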
-This allows Vision Studio to access images and videos in your blob storage container to extract insights on your data.
-
-## Upload images and videos in Vision Studio
-
-In the **Try with your own video** or **Try with your own image** section in Vision Studio, select the storage account that you configured with the CORS rule. Select the container in which your images or videos are stored. If you don't have a container, you can create one and upload the images or videos from your local device. If you have updated the CORS rules on the storage account, refresh the Blob container or Video files on container sections.
-
+This allows Vision Studio to access images in your blob storage container to extract insights on your data.
+## Upload images in Vision Studio
+In the **Search photos with image retrieval** section in Vision Studio, select the storage account that you configured with the CORS rule. Select the container in which your images are stored. If you don't have a container, you can create one and upload the images from your local device. If you updated the CORS rules on the storage account, refresh the Blob container section.
ai-services Call Analyze Image 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-analyze-image-40.md
Last updated 08/01/2023-+ zone_pivot_groups: programming-languages-computer-vision-40
ai-services Image Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/image-retrieval.md
Previously updated : 01/19/2024 Last updated : 01/30/2024
The API call returns a **vector** JSON object, which defines the text string's c
Cosine similarity is a method for measuring the similarity of two vectors. In an image retrieval scenario, you'll compare the search query vector with each image's vector. Images that are above a certain threshold of similarity can then be returned as search results.
-The following example C# code calculates the cosine similarity between two vectors. It's up to you to decide what similarity threshold to use for returning images as search results.
+The following example code calculates the cosine similarity between two vectors. It's up to you to decide what similarity threshold to use for returning images as search results.
+
+#### [C#](#tab/csharp)
```csharp
public static float GetCosineSimilarity(float[] vector1, float[] vector2)
{
    // Cosine similarity = dot product divided by the product of the vector magnitudes.
    float dotProduct = 0, magnitude1 = 0, magnitude2 = 0;
    for (int i = 0; i < vector1.Length; i++)
    {
        dotProduct += vector1[i] * vector2[i];
        magnitude1 += vector1[i] * vector1[i];
        magnitude2 += vector2[i] * vector2[i];
    }
    return dotProduct / (MathF.Sqrt(magnitude1) * MathF.Sqrt(magnitude2));
}
```
+#### [Python](#tab/python)
+
+```python
+import numpy as np
+
+def cosine_similarity(vector1, vector2):
+ return np.dot(vector1, vector2) / (np.linalg.norm(vector1) * np.linalg.norm(vector2))
+```
+++ ## Next steps [Image retrieval concepts](../concept-image-retrieval.md)
ai-services Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/model-customization.md
Last updated 02/06/2023 -+ # Create a custom Image Analysis model (preview)
ai-services Image Analysis Client Library 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40.md
Last updated 01/24/2023 -+ zone_pivot_groups: programming-languages-computer-vision-40 keywords: Azure AI Vision, Azure AI Vision service
ai-services Install Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/install-sdk.md
Last updated 08/01/2023 -+ zone_pivot_groups: programming-languages-vision-40-sdk
ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/native-document-support/managed-identities.md
+
+ Title: Managed identities for storage blobs
+description: Create managed identities for containers and blobs with Azure portal.
+++++ Last updated : 01/31/2024+++
+# Managed identities for Language resources
+
+Managed identities for Azure resources are service principals that create a Microsoft Entra identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to storage data and replace the requirement for you to include shared access signature tokens (SAS) with your [source and target container URLs](use-native-documents.md#create-azure-blob-storage-containers).
+
+ :::image type="content" source="media/managed-identity-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
+
+* You can use managed identities to grant access to any resource that supports Microsoft Entra authentication, including your own applications.
+
+* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (`Azure RBAC`)](/azure/role-based-access-control/overview).
+
+* There's no added cost to use managed identities in Azure.
+
+> [!IMPORTANT]
+>
+> * When using managed identities, don't include a SAS token URL with your HTTP requests; your requests will fail. Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your [source and target container URLs](use-native-documents.md#create-azure-blob-storage-containers).
+>
+> * To use managed identities for Language operations, you must [create your Language resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in a specific geographic Azure region such as **East US**. If your Language resource region is set to **Global**, then you can't use managed identity authentication. You can, however, still use [Shared Access Signature tokens (SAS)](shared-access-signatures.md).
+>
+
+## Prerequisites
+
+To get started, you need the following resources:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+
+* A [**single-service Azure AI Language**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) resource created in a regional location.
+
+* A basic understanding of [**Azure role-based access control (`Azure RBAC`)**](/azure/role-based-access-control/role-assignments-portal) using the Azure portal.
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) in the same region as your Language resource. You also need to create containers to store and organize your blob data within your storage account.
+
+* **If your storage account is behind a firewall, you must enable the following configuration**:
+ 1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+ 1. Select your Storage account.
+ 1. In the **Security + networking** group in the left pane, select **Networking**.
+ 1. In the **Firewalls and virtual networks** tab, select **Enabled from selected virtual networks and IP addresses**.
+
+ :::image type="content" source="media/firewalls-and-virtual-networks.png" alt-text="Screenshot that shows the elected networks radio button selected.":::
+
+ 1. Deselect all check boxes.
+ 1. Make sure **Microsoft network routing** is selected.
+ 1. Under the **Resource instances** section, select **Microsoft.CognitiveServices/accounts** as the resource type and select your Language resource as the instance name.
+ 1. Make certain that the **Allow Azure services on the trusted services list to access this storage account** box is checked. For more information about managing exceptions, _see_ [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security?tabs=azure-portal#manage-exceptions).
+
+ :::image type="content" source="media/allow-trusted-services-checkbox-portal-view.png" alt-text="Screenshot that shows the allow trusted services checkbox in the Azure portal.":::
+
+ 1. Select **Save**.
+
+ > [!NOTE]
+ > It may take up to 5 minutes for the network changes to propagate.
+
+    Although network access is now permitted, your Language resource is still unable to access the data in your Storage account. You need to [create a managed identity](#managed-identity-assignments) for your Language resource and [assign it a specific access role](#enable-a-system-assigned-managed-identity).
+
+## Managed identity assignments
+
+There are two types of managed identities: **system-assigned** and **user-assigned**. Currently, native document support works with **system-assigned managed identity** only:
+
+* A system-assigned managed identity is **enabled** directly on a service instance. It isn't enabled by default; you must go to your resource and update the identity setting.
+
+* The system-assigned managed identity is tied to your resource throughout its lifecycle. If you delete your resource, the managed identity is deleted as well.
+
+In the following steps, we enable a system-assigned managed identity and grant your Language resource limited access to your Azure Blob Storage account.
+
+## Enable a system-assigned managed identity
+
+You must grant the Language resource access to your storage account before it can create, read, or delete blobs. Once you enable a system-assigned managed identity for your Language resource, you can use Azure role-based access control (`Azure RBAC`) to give Language features access to your Azure storage containers.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select your Language resource.
+1. In the **Resource Management** group in the left pane, select **Identity**. If your resource was created in the global region, the **Identity** tab isn't visible. You can still use [Shared Access Signature tokens (SAS)](shared-access-signatures.md) for authentication.
+1. Within the **System assigned** tab, turn on the **Status** toggle.
+
+ :::image type="content" source="media/resource-management-identity-tab.png" alt-text="Screenshot that shows the resource management identity tab in the Azure portal.":::
+
+ > [!IMPORTANT]
+ > User assigned managed identities don't meet the requirements for the batch processing storage account scenario. Be sure to enable system assigned managed identity.
+
+1. Select **Save**.
+
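+You can also script this step. The following Azure CLI sketch (resource and group names are placeholders) should enable the system-assigned identity on a Language resource:
+
+```bash
+# Hypothetical example: turn on the system-assigned managed identity for a Language resource.
+az cognitiveservices account identity assign \
+  --name <language-resource> \
+  --resource-group <resource-group>
+```
+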
+## Grant storage account access for your Language resource
+
+> [!IMPORTANT]
+> To assign a system-assigned managed identity role, you need **Microsoft.Authorization/roleAssignments/write** permissions, such as [**Owner**](/azure/role-based-access-control/built-in-roles#owner) or [**User Access Administrator**](/azure/role-based-access-control/built-in-roles#user-access-administrator) at the storage scope for the storage resource.
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
+1. Select your Language resource.
+1. In the **Resource Management** group in the left pane, select **Identity**.
+1. Under **Permissions** select **Azure role assignments**:
+
+ :::image type="content" source="media/enable-system-assigned-managed-identity-portal.png" alt-text="Screenshot that shows the enable system-assigned managed identity in Azure portal.":::
+
+1. On the **Azure role assignments** page, choose your subscription from the drop-down menu, then select **+ Add role assignment**.
+
+ :::image type="content" source="media/azure-role-assignments-page-portal.png" alt-text="Screenshot that shows the Azure role assignments page in the Azure portal.":::
+
+1. Next, assign a **Storage Blob Data Contributor** role to your Language service resource. The **Storage Blob Data Contributor** role gives Language (represented by the system-assigned managed identity) read, write, and delete access to the blob container and data. In the **Add role assignment** pop-up window, complete the fields as follows and select **Save**:
+
+ | Field | Value|
+ ||--|
+ |**Scope**| **_Storage_**.|
+ |**Subscription**| **_The subscription associated with your storage resource_**.|
+ |**Resource**| **_The name of your storage resource_**.|
+ |**Role** | **_Storage Blob Data Contributor_**.|
+
+ :::image type="content" source="media/add-role-assignment-window.png" alt-text="Screenshot that shows the role assignments page in the Azure portal.":::
+
+1. After the _Added Role assignment_ confirmation message appears, refresh the page to see the added role assignment.
+
+ :::image type="content" source="media/add-role-assignment-confirmation.png" alt-text="Screenshot that shows the added role assignment confirmation pop-up message.":::
+
+1. If you don't see the new role assignment right away, wait and try refreshing the page again. When you assign or remove role assignments, it can take up to 30 minutes for changes to take effect.
+
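+If you'd rather assign the role from the command line, here's a rough Azure CLI sketch; the resource, group, and storage account names are placeholders:
+
+```bash
+# Look up the principal ID of the Language resource's system-assigned identity.
+principalId=$(az cognitiveservices account show \
+  --name <language-resource> \
+  --resource-group <resource-group> \
+  --query identity.principalId --output tsv)
+
+# Look up the storage account's resource ID to use as the assignment scope.
+storageId=$(az storage account show \
+  --name <storage-account> \
+  --resource-group <resource-group> \
+  --query id --output tsv)
+
+# Grant the identity the Storage Blob Data Contributor role on the storage account.
+az role assignment create \
+  --assignee "$principalId" \
+  --role "Storage Blob Data Contributor" \
+  --scope "$storageId"
+```
+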
+## HTTP requests
+
+* A native document Language service operation request is submitted to your Language service endpoint via a POST request.
+
+* With managed identity and `Azure RBAC`, you no longer need to include SAS URLs.
+
+* If successful, the POST method returns a `202 Accepted` response code and the service creates a request.
+
+* The processed documents appear in your target container.
+
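+As a sketch, a request body can then reference plain blob URLs with no SAS query strings; the endpoint, key, account, container, and file names below are placeholders:
+
+```bash
+# Hypothetical request: the blob URLs carry no SAS tokens because the system-assigned
+# managed identity authorizes access to the storage account.
+curl -X POST "{your-language-resource-endpoint}/language/analyze-documents/jobs?api-version=2023-11-15-preview" \
+  --header "Content-Type: application/json" \
+  --header "Ocp-Apim-Subscription-Key: {your-key}" \
+  --data '{
+    "analysisInput": {
+      "documents": [
+        {
+          "id": "doc_0",
+          "language": "en",
+          "source": { "location": "https://myaccount.blob.core.windows.net/sample-input/input.pdf" },
+          "target": { "location": "https://myaccount.blob.core.windows.net/sample-output" }
+        }
+      ]
+    },
+    "tasks": [ { "kind": "PiiEntityRecognition" } ]
+  }'
+```
+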
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with native document support](use-native-documents.md#include-native-documents-with-an-http-request)
ai-services Shared Access Signatures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/native-document-support/shared-access-signatures.md
+
+ Title: Shared access signature (SAS) tokens for storage blobs
+description: Create shared access signature tokens (SAS) for containers and blobs with Azure portal.
+++++ Last updated : 01/31/2024++
+# SAS tokens for your storage containers
+
+Learn how to create user delegation shared access signature (SAS) tokens using the Azure portal. User delegation SAS tokens are secured with Microsoft Entra credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
++
+>[!TIP]
+>
+> [Role-based access control (managed identities)](../concepts/role-based-access-control.md) provide an alternate method for granting access to your storage data without the need to include SAS tokens with your HTTP requests.
+>
+> * You can use managed identities to grant access to any resource that supports Microsoft Entra authentication, including your own applications.
+> * Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your source and target URLs.
+> * There's no added cost to use managed identities in Azure.
+
+At a high level, here's how SAS tokens work:
+
+* Your application submits the SAS token to Azure Storage as part of a REST API request.
+
+* If the storage service verifies that the SAS is valid, the request is authorized.
+
+* If the SAS token is deemed invalid, the request is declined, and the error code 403 (Forbidden) is returned.
+
+Azure Blob Storage offers three resource types:
+
+* **Storage** accounts provide a unique namespace in Azure for your data.
+* **Data storage containers** are located in storage accounts and organize sets of blobs (files, text, or images).
+* **Blobs** are located in containers and store text and binary data such as files, text, and images.
+
+> [!IMPORTANT]
+>
+> * SAS tokens are used to grant permissions to storage resources, and should be protected in the same manner as an account key.
+>
+> * Operations that use SAS tokens should be performed only over an HTTPS connection, and SAS URIs should only be distributed on a secure connection such as HTTPS.
+
+## Prerequisites
+
+To get started, you need the following resources:
+
+* An active [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+
+* An [Azure AI Language](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) resource.
+
+* A **standard performance** [Azure Blob Storage account](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to create containers to store and organize your files within your storage account. If you don't know how to create an Azure storage account with a storage container, follow these quickstarts:
+
+ * [Create a storage account](../../../storage/common/storage-account-create.md). When you create your storage account, select **Standard** performance in the **Instance details** > **Performance** field.
+ * [Create a container](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When you create your container, set **Public access level** to **Container** (anonymous read access for containers and files) in the **New Container** window.
+
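+If you'd rather create these prerequisites from the command line, a minimal Azure CLI sketch (names and region are placeholders) might look like this:
+
+```bash
+# Hypothetical example: a Standard-performance storage account and a container.
+# Add --public-access container to the second command if your account allows anonymous read access.
+az storage account create \
+  --name <storageaccount> \
+  --resource-group <resource-group> \
+  --location eastus \
+  --sku Standard_LRS
+
+az storage container create \
+  --name <container> \
+  --account-name <storageaccount>
+```
+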
+## Create SAS tokens in the Azure portal
+
+<!-- markdownlint-disable MD024 -->
+
+Go to the [Azure portal](https://portal.azure.com/#home), navigate to your container or a specific file as shown in the following workflow, and continue with these steps:
+
+Workflow: **Your storage account** → **containers** → **your container** → **your file**
+
+1. Right-click the container or file and select **Generate SAS** from the drop-down menu.
+
+1. Select **Signing method** → **User delegation key**.
+
+1. Define **Permissions** by checking and/or clearing the appropriate check box:
+
+ * Your **source** file must designate **read** and **list** access.
+
+ * Your **target** file must designate **write** and **list** access.
+
+1. Specify the signed key **Start** and **Expiry** times.
+
+ * When you create a shared access signature (SAS), the default duration is 48 hours. After 48 hours, you'll need to create a new token.
+    * Consider setting a duration that covers the period during which you're using your storage account for Language Service operations.
+ * The value of the expiry time is determined by whether you're using an **Account key** or **User delegation key** **Signing method**:
+        * **Account key**: No imposed maximum time limit; however, best practice recommends that you configure an expiration policy to limit the interval and minimize the risk of compromise. For more information, *see* [Configure an expiration policy for shared access signatures](/azure/storage/common/sas-expiration-policy).
+        * **User delegation key**: The value for the expiry time is a maximum of seven days from the creation of the SAS token. The SAS is invalid after the user delegation key expires, so a SAS with an expiry time of greater than seven days is still only valid for seven days. For more information, *see* [Use Microsoft Entra credentials to secure a SAS](/azure/storage/blobs/storage-blob-user-delegation-sas-create-cli#use-azure-ad-credentials-to-secure-a-sas).
+
+1. The **Allowed IP addresses** field is optional and specifies an IP address or a range of IP addresses from which to accept requests. If the request IP address doesn't match the IP address or address range specified on the SAS token, authorization fails. The IP address or a range of IP addresses must be public IPs, not private. For more information, *see* [**Specify an IP address or IP range**](/rest/api/storageservices/create-account-sas#specify-an-ip-address-or-ip-range).
+
+1. The **Allowed protocols** field is optional and specifies the protocol permitted for a request made with the SAS. The default value is HTTPS.
+
+1. Review then select **Generate SAS token and URL**.
+
+1. The **Blob SAS token** query string and **Blob SAS URL** are displayed in the lower area of the window.
+
+1. **Copy and paste the Blob SAS token and URL values in a secure location. They're displayed only once and can't be retrieved after the window is closed.**
+
+1. To [construct a SAS URL](#use-your-sas-url-to-grant-access), append the SAS token (URI) to the URL for a storage service.
+
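+A user delegation SAS can also be generated from the command line. The following Azure CLI sketch uses placeholder account, container, and expiry values; use `--permissions rl` for a source container and `wl` for a target container:
+
+```bash
+# Hypothetical example: generate a user delegation SAS token for a source container.
+az storage container generate-sas \
+  --account-name <storage-account> \
+  --name <source-container> \
+  --permissions rl \
+  --expiry <yyyy-mm-ddTHH:MMZ> \
+  --auth-mode login \
+  --as-user
+```
+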
+### Use your SAS URL to grant access
+
+The SAS URL includes a special set of [query parameters](/rest/api/storageservices/create-user-delegation-sas#assign-permissions-with-rbac). Those parameters indicate how the client accesses the resources.
+
+You can include your SAS URL with REST API requests in two ways:
+
+* Use the **SAS URL** as your sourceURL and targetURL values.
+
+* Append the **SAS query string** to your existing sourceURL and targetURL values.
+
+Here's a sample REST API request:
+
+```json
+{
+ "analysisInput": {
+ "documents": [
+ {
+ "id": "doc_0",
+ "language": "en",
+ "source": {
+          "location": "https://myaccount.blob.core.windows.net/sample-input/input.pdf?{SAS-Token}"
+ },
+ "target": {
+ "location": "https://myaccount.blob.core.windows.net/sample-output?{SAS-Token}"
+ }
+ }
+ ]
+ }
+}
+```
+
+That's it! You learned how to create SAS tokens to authorize how clients access your data.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about native document support](use-native-documents.md "Learn how to process and analyze native documents.") [Learn more about granting access with SAS](/azure/storage/common/storage-sas-overview "Grant limited access to Azure Storage resources using shared access SAS.")
+>
ai-services Use Native Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/native-document-support/use-native-documents.md
+
+ Title: Native document support for Azure AI Language (preview)
+
+description: How to use native documents with Azure AI Language's Personally Identifiable Information and Summarization capabilities.
++++ Last updated : 01/31/2024+++
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD051 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD049 -->
+<!-- markdownlint-disable MD001 -->
+
+# Native document support for Azure AI Language (preview)
+
+> [!IMPORTANT]
+>
+> * Native document support is a gated preview. To request access to the native document support feature, complete and submit the [**Apply for access to Language Service previews**](https://aka.ms/gating-native-document) form.
+>
+> * Azure AI Language public preview releases provide early access to features that are in active development.
+> * Features, approaches, and processes may change, prior to General Availability (GA), based on user feedback.
+
+Azure AI Language is a cloud-based service that applies Natural Language Processing (NLP) features to text-based data. The native document support capability enables you to send API requests asynchronously, using an HTTP POST request body to send your data and HTTP GET request query string to retrieve the processed data.
+
+A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for the following capabilities:
+
+* [Personally Identifiable Information (PII)](../personally-identifiable-information/overview.md). The PII detection feature can identify, categorize, and redact sensitive information in unstructured text. The `PiiEntityRecognition` API supports native document processing.
+
+* [Document summarization](../summarization/overview.md). Document summarization uses natural language processing to generate extractive (salient sentence extraction) or abstractive (contextual word extraction) summaries for documents. Both `AbstractiveSummarization` and `ExtractiveSummarization` APIs support native document processing.
+
+## Development options
+
+Native document support can be integrated into your applications using the [Azure AI Language REST API](/rest/api/language/). The REST API is a language agnostic interface that enables you to create HTTP requests for text-based data analysis.
+
+|Service|Description|API Reference (Latest GA version)|API Reference (Latest Preview version)|
+|--|--|--|--|
+| Text analysis - runtime | &bullet; Runtime prediction calls to extract **Personally Identifiable Information (PII)**.</br>&bullet; Custom redaction for native documents is supported in the latest **2023-04-15-preview**.|[`2023-04-01`](/rest/api/language/2023-04-01/text-analysis-runtime)|[`2023-04-15-preview`](/rest/api/language/2023-04-15-preview/text-analysis-runtime)|
+| Summarization for documents - runtime|Runtime prediction calls to **query summarization for documents models**.|[`2023-04-01`](/rest/api/language/2023-04-01/text-analysis-runtime/submit-job)|[`2023-04-15-preview`](/rest/api/language/2023-04-15-preview/text-analysis-runtime)|
+
+## Supported document formats
+
+ Applications use native file formats to create, save, or open native documents. Currently, the **PII** and **Document summarization** capabilities support the following native document formats:
+
+|File type|File extension|Description|
+||--|--|
+|Text| `.txt`|An unformatted text document.|
+|Adobe PDF| `.pdf`|A portable document file formatted document.|
+|Microsoft Word| `.docx`|A Microsoft Word document file.|
+
+## Input guidelines
+
+***Supported file formats***
+
+|Type|support and limitations|
+|||
+|**PDFs**| Fully scanned PDFs aren't supported.|
+|**Text within images**| Digital images with embedded text aren't supported.|
+|**Digital tables**| Tables in scanned documents aren't supported.|
+
+***Document Size***
+
+|Attribute|Input limit|
+|||
+|**Total number of documents per request** |**≤ 20**|
+|**Total content size per request**| **≤ 1 MB**|
+
+## Include native documents with an HTTP request
+
+***Let's get started:***
+
+* For this project, we use the cURL command line tool to make REST API calls.
+
+ > [!NOTE]
+    > The cURL package is preinstalled on most Windows 10 and Windows 11 systems, and on most macOS and Linux distributions. You can check the package version with the following commands:
+ > Windows: `curl.exe -V`.
+ > macOS `curl -V`
+ > Linux: `curl --version`
+
+* If cURL isn't installed, here are installation links for your platform:
+
+ * [Windows](https://curl.haxx.se/windows/).
+ * [Mac or Linux](https://learn2torials.com/thread/how-to-install-curl-on-mac-or-linux-(ubuntu)-or-windows).
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
+
+ * **Source container**. This container is where you upload your native files for analysis (required).
+ * **Target container**. This container is where your analyzed files are stored (required).
+
+* A [**single-service Language resource**](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) (**not** a multi-service Azure AI services resource):
+
+ **Complete the Language resource project and instance details fields as follows:**
+
+ 1. **Subscription**. Select one of your available Azure subscriptions.
+
+ 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
+
+ 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity (RBAC)](../concepts/role-based-access-control.md) for authentication, choose a **geographic** region like **West US**.
+
+ 1. **Name**. Enter the name you chose for your resource. The name you choose must be unique within Azure.
+
+ 1. **Pricing tier**. You can use the free pricing tier (`Free F0`) to try the service, and upgrade later to a paid tier for production.
+
+ 1. Select **Review + Create**.
+
+ 1. Review the service terms and select **Create** to deploy your resource.
+
+ 1. After your resource successfully deploys, select **Go to resource**.
+
+### Retrieve your key and language service endpoint
+
+Requests to the Language service require a read-only key and custom endpoint to authenticate access.
+
+1. If you created a new resource, after it deploys, select **Go to resource**. If you have an existing language service resource, navigate directly to your resource page.
+
+1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
+
+1. You can copy and paste your **`key`** and your **`language service instance endpoint`** into the code samples to authenticate your request to the Language service. Only one key is necessary to make an API call.
+
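+Optionally, you can keep these values in environment variables and substitute them into the cURL commands that follow; the values shown are placeholders:
+
+```bash
+# Hypothetical example: store the key and endpoint for reuse in later commands.
+export LANGUAGE_KEY="<your-key>"
+export LANGUAGE_ENDPOINT="https://<your-resource-name>.cognitiveservices.azure.com"
+```
+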
+## Create Azure Blob Storage containers
+
+[**Create containers**](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
+
+* **Source container**. This container is where you upload your native files for analysis (required).
+* **Target container**. This container is where your analyzed files are stored (required).
+
+### **Authentication**
+
+Your Language resource must be granted access to your storage account before it can create, read, or delete blobs. There are two primary methods you can use to grant access to your storage data:
+
+* [**Shared access signature (SAS) tokens**](shared-access-signatures.md). User delegation SAS tokens are secured with Microsoft Entra credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
+
+* [**Managed identity role-based access control (RBAC)**](managed-identities.md). Managed identities for Azure resources are service principals that create a Microsoft Entra identity and specific permissions for Azure managed resources.
+
+For this project, we authenticate access to the `source location` and `target location` URLs with Shared Access Signature (SAS) tokens appended as query strings. Each token is assigned to a specific blob (file).
++
+* Your **source** container or blob must designate **read** and **list** access.
+* Your **target** container or blob must designate **write** and **list** access.
+
+> [!TIP]
+>
+> Since we're processing a single file (blob), we recommend that you **delegate SAS access at the blob level**.
+
+## Request headers and parameters
+
+|parameter |Description |
+|||
+|`-X POST <endpoint>` | Specifies your Language resource endpoint for accessing the API. |
+|`--header Content-Type: application/json` | The content type for sending JSON data. |
+|`--header "Ocp-Apim-Subscription-Key: <key>"` | Specifies the Language resource key for accessing the API. |
+|`--data` | The JSON file containing the data you want to pass with your request. |
+
+The following cURL commands are executed from a BASH shell. Edit these commands with your own resource name, resource key, and JSON values. Try analyzing native documents by selecting the `Personally Identifiable Information (PII)` or `Document Summarization` code sample project:
+
+### [Personally Identifiable Information (PII)](#tab/pii)
+
+### PII Sample document
+
+For this quickstart, you need a **source document** uploaded to your **source container**. You can download our [Microsoft Word sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Language/native-document-pii.docx) or [Adobe PDF](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl//Language/native-document-pii.pdf) for this project. The source language is English.
+
+### Build the POST request
+
+1. Using your preferred editor or IDE, create a new directory for your app named `native-document`.
+
+1. Create a new json file called **pii-detection.json** in your **native-document** directory.
+
+1. Copy and paste the following Personally Identifiable Information (PII) **request sample** into your `pii-detection.json` file. Replace **`{your-source-container-with-SAS-URL}`** and **`{your-target-container-with-SAS-URL}`** with the values for your Azure portal Storage account containers:
+
+ ***Request sample***
+
+```json
+{
+ "displayName": "Extracting Location & US Region",
+ "analysisInput": {
+ "documents": [
+ {
+ "language": "en-US",
+ "id": "Output-excel-file",
+ "source": {
+ "location": "{your-source-container-with-SAS-URL}"
+ },
+ "target": {
+ "location": "{your-target-container-with-SAS-URL}"
+ }
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "kind": "PiiEntityRecognition",
+ "parameters":{
+        "excludePiiCategories" : ["PersonType", "Category2", "Category3"],
+ "redactionPolicy": "UseEntityTypeName"
+ }
+ }
+ ]
+}
+```
+
+### Run the POST request
+
+1. Here's the preliminary structure of the POST request:
+
+ ```bash
+ POST {your-language-endpoint}/language/analyze-documents/jobs?api-version=2023-11-15-preview
+ ```
+
+1. Before you run the **POST** request, replace `{your-language-resource-endpoint}` and `{your-key}` with the values from your Azure portal Language service instance.
+
+ > [!IMPORTANT]
+ > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](/azure/key-vault/general/overview). For more information, *see* Azure AI services [security](/azure/ai-services/security-features).
+
+ ***PowerShell***
+
+ ```powershell
+ cmd /c curl "{your-language-resource-endpoint}/language/analyze-documents/jobs?api-version=2023-11-15-preview" -i -X POST --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}" --data "@pii-detection.json"
+ ```
+
+ ***command prompt / terminal***
+
+ ```bash
+ curl -v -X POST "{your-language-resource-endpoint}/language/analyze-documents/jobs?api-version=2023-11-15-preview" --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}" --data "@pii-detection.json"
+ ```
+
+1. Here's a sample response:
+
+ ```http
+ HTTP/1.1 202 Accepted
+ Content-Length: 0
+ operation-location: https://{your-language-resource-endpoint}/language/analyze-documents/jobs/f1cc29ff-9738-42ea-afa5-98d2d3cabf94?api-version=2023-11-15-preview
+ apim-request-id: e7d6fa0c-0efd-416a-8b1e-1cd9287f5f81
+ x-ms-region: West US 2
+ Date: Thu, 25 Jan 2024 15:12:32 GMT
+ ```
+
+### POST response (jobId)
+
+You receive a 202 (Success) response that includes a read-only Operation-Location header. The value of this header contains a **jobId** that can be queried to get the status of the asynchronous operation and retrieve the results using a **GET** request:
+
+ :::image type="content" source="media/operation-location-result-id.png" alt-text="Screenshot showing the operation-location value in the POST response.":::
+
+### Get analyze results (GET request)
+
+1. After your successful **POST** request, poll the operation-location header returned in the POST request to view the processed data.
+
+1. Here's the preliminary structure of the **GET** request:
+
+ ```bash
+ GET {your-language-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview
+ ```
+
+1. Before you run the command, make these changes:
+
+ * Replace {**jobId**} with the Operation-Location header from the POST response.
+
+ * Replace {**your-language-resource-endpoint**} and {**your-key**} with the values from your Language service instance in the Azure portal.
+
+### Get request
+
+```powershell
+ cmd /c curl "{your-language-resource-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview" -i -X GET --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}"
+```
+
+```bash
+ curl -v -X GET "{your-language-resource-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview" --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}"
+```
+
+#### Examine the response
+
+You receive a 200 (Success) response with JSON output. The **status** field indicates the result of the operation. If the operation isn't complete, the value of **status** is "running" or "notStarted", and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
+
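+A rough BASH polling sketch follows; the endpoint, key, and job ID are placeholders, and the status extraction is a simple text match rather than full JSON parsing:
+
+```bash
+# Hypothetical example: poll the job every two seconds until it leaves the running/notStarted states.
+jobUrl="{your-language-resource-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview"
+while true; do
+  status=$(curl -s -X GET "$jobUrl" --header "Ocp-Apim-Subscription-Key: {your-key}" \
+    | grep -o '"status": *"[^"]*"' | head -1 | cut -d'"' -f4)
+  echo "status: $status"
+  if [ "$status" != "running" ] && [ "$status" != "notStarted" ]; then
+    break
+  fi
+  sleep 2
+done
+```
+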
+#### Sample response
+
+```json
+{
+ "jobId": "f1cc29ff-9738-42ea-afa5-98d2d3cabf94",
+ "lastUpdatedDateTime": "2024-01-24T13:17:58Z",
+ "createdDateTime": "2024-01-24T13:17:47Z",
+ "expirationDateTime": "2024-01-25T13:17:47Z",
+ "status": "succeeded",
+ "errors": [],
+ "tasks": {
+ "completed": 1,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 1,
+ "items": [
+ {
+ "kind": "PiiEntityRecognitionLROResults",
+ "lastUpdateDateTime": "2024-01-24T13:17:58.33934Z",
+ "status": "succeeded",
+ "results": {
+ "documents": [
+ {
+ "id": "doc_0",
+ "source": {
+ "kind": "AzureBlob",
+ "location": "https://myaccount.blob.core.windows.net/sample-input/input.pdf"
+ },
+ "targets": [
+ {
+ "kind": "AzureBlob",
+ "location": "https://myaccount.blob.core.windows.net/sample-output/df6611a3-fe74-44f8-b8d4-58ac7491cb13/PiiEntityRecognition-0001/input.result.json"
+ },
+ {
+ "kind": "AzureBlob",
+ "location": "https://myaccount.blob.core.windows.net/sample-output/df6611a3-fe74-44f8-b8d4-58ac7491cb13/PiiEntityRecognition-0001/input.docx"
+ }
+ ],
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2023-09-01"
+ }
+ }
+ ]
+ }
+}
+```
+
+### [Document Summarization](#tab/summarization)
+
+### Summarization sample document
+
+For this project, you need a **source document** uploaded to your **source container**. You can download our [Microsoft Word sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Language/native-document-summarization.docx) or [Adobe PDF](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Language/native-document-summarization.pdf) for this quickstart. The source language is English.
+
+### Build the POST request
+
+1. Using your preferred editor or IDE, create a new directory for your app named `native-document`.
+1. Create a new json file called **document-summarization.json** in your **native-document** directory.
+
+1. Copy and paste the Document Summarization **request sample** into your `document-summarization.json` file. Replace **`{your-source-container-SAS-URL}`** and **`{your-target-container-SAS-URL}`** with values from your Azure portal Storage account containers instance:
+
+    ***Request sample***
+
+ ```json
+ {
+ "kind": "ExtractiveSummarization",
+ "parameters": {
+ "sentenceCount": 6
+ },
+ "analysisInput":{
+ "documents":[
+ {
+ "source":{
+ "location":"{your-source-container-SAS-URL}"
+ },
+ "targets":
+ {
+            "location":"{your-target-container-SAS-URL}"
+ }
+ }
+ ]
+ }
+ }
+ ```
+
+### Run the POST request
+
+Before you run the **POST** request, replace `{your-language-resource-endpoint}` and `{your-key}` with the values from your Azure portal Language resource instance.
+
+ > [!IMPORTANT]
+ > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](/azure/key-vault/general/overview). For more information, *see* Azure AI services [security](/azure/ai-services/security-features).
+
+ ***PowerShell***
+
+ ```powershell
+ cmd /c curl "{your-language-resource-endpoint}/language/analyze-text/jobs?api-version=2023-04-01" -i -X POST --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}" --data "@document-summarization.json"
+ ```
+
+ ***command prompt / terminal***
+
+ ```bash
+ curl -v -X POST "{your-language-resource-endpoint}/language/analyze-text/jobs?api-version=2023-04-01" --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}" --data "@document-summarization.json"
+ ```
+
+Here's a sample response:
+
+ ```http
+ HTTP/1.1 202 Accepted
+ Content-Length: 0
+ operation-location: https://{your-language-resource-endpoint}/language/analyze-documents/jobs/f1cc29ff-9738-42ea-afa5-98d2d3cabf94?api-version=2023-11-15-preview
+ apim-request-id: e7d6fa0c-0efd-416a-8b1e-1cd9287f5f81
+ x-ms-region: West US 2
+ Date: Thu, 25 Jan 2024 15:12:32 GMT
+ ```
+
+### POST response (jobId)
+
+You receive a 202 (Success) response that includes a read-only Operation-Location header. The value of this header contains a jobId that can be queried to get the status of the asynchronous operation and retrieve the results using a GET request:
+
+ :::image type="content" source="media/operation-location-result-id.png" alt-text="Screenshot showing the operation-location value in the POST response.":::
+
+### Get analyze results (GET request)
+
+1. After your successful **POST** request, poll the operation-location header returned in the POST request to view the processed data.
+
+1. Here's the structure of the **GET** request:
+
+ ```http
+ GET {cognitive-service-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview
+ ```
+
+1. Before you run the command, make these changes:
+
+ * Replace {**jobId**} with the Operation-Location header from the POST response.
+
+ * Replace {**your-language-resource-endpoint**} and {**your-key**} with the values from your Language service instance in the Azure portal.
+
+### Get request
+
+```powershell
+ cmd /c curl "{your-language-resource-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview" -i -X GET --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}"
+```
+
+```bash
+ curl -v -X GET "{your-language-resource-endpoint}/language/analyze-documents/jobs/{jobId}?api-version=2023-11-15-preview" --header "Content-Type: application/json" --header "Ocp-Apim-Subscription-Key: {your-key}"
+```
+
+#### Examine the response
+
+You receive a 200 (Success) response with JSON output. The **status** field indicates the result of the operation. If the operation isn't complete, the value of **status** is "running" or "notStarted", and you should call the API again, either manually or through a script. We recommend an interval of one second or more between calls.
+
+#### Sample response
+
+```json
+{
+ "jobId": "f1cc29ff-9738-42ea-afa5-98d2d3cabf94",
+ "lastUpdatedDateTime": "2024-01-24T13:17:58Z",
+ "createdDateTime": "2024-01-24T13:17:47Z",
+ "expirationDateTime": "2024-01-25T13:17:47Z",
+ "status": "succeeded",
+ "errors": [],
+ "tasks": {
+ "completed": 1,
+ "failed": 0,
+ "inProgress": 0,
+ "total": 1,
+ "items": [
+ {
+ "kind": "ExtractiveSummarizationLROResults",
+ "lastUpdateDateTime": "2024-01-24T13:17:58.33934Z",
+ "status": "succeeded",
+ "results": {
+ "documents": [
+ {
+ "id": "doc_0",
+ "source": {
+ "kind": "AzureBlob",
+ "location": "https://myaccount.blob.core.windows.net/sample-input/input.pdf"
+ },
+ "targets": [
+ {
+ "kind": "AzureBlob",
+ "location": "https://myaccount.blob.core.windows.net/sample-output/df6611a3-fe74-44f8-b8d4-58ac7491cb13/ExtractiveSummarization-0001/input.result.json"
+ }
+ ],
+ "warnings": []
+ }
+ ],
+ "errors": [],
+ "modelVersion": "2023-02-01-preview"
+ }
+ }
+ ]
+ }
+}
+```
+++
+***Upon successful completion***:
+
+* The analyzed documents can be found in your target container.
+* The successful POST method returns a `202 Accepted` response code indicating that the service created the batch request.
+* The POST request also returned response headers including `Operation-Location` that provides a value used in subsequent GET requests.
+
+## Clean up resources
+
+If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.
+
+* [Portal](../../multi-service-resource.md?pivots=azportal#clean-up-resources)
+* [Azure CLI](../../multi-service-resource.md?pivots=azcli#clean-up-resources)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [PII detection overview](../personally-identifiable-information/overview.md "Learn more about Personally Identifiable Information detection.") [Document Summarization overview](../summarization/overview.md "Learn more about automatic document summarization.")
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/overview.md
Previously updated : 12/19/2023 Last updated : 01/31/2024 # What is Personally Identifiable Information (PII) detection in Azure AI Language?
-PII detection is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The PII detection feature can **identify, categorize, and redact** sensitive information in unstructured text. For example: phone numbers, email addresses, and forms of identification. The method for utilizing PII in conversations is different than other use cases, and articles for this use have been separated.
+PII detection is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. The PII detection feature can **identify, categorize, and redact** sensitive information in unstructured text. For example: phone numbers, email addresses, and forms of identification. The method for utilizing PII in conversations is different than other use cases, and articles for this use are separate.
* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service. * [**How-to guides**](how-to-call.md) contain instructions for using the service in more specific or customized ways. * The [**conceptual articles**](concepts/entity-categories.md) provide in-depth explanations of the service's functionality and features. PII comes in two forms:+ * [PII](how-to-call.md) - works on unstructured text. * [Conversation PII (preview)](how-to-call-for-conversations.md) - tailored model to work on conversation transcription. - [!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)]
-## Get started with PII detection
+## Native document support
+A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for the [**PiiEntityRecognition**](../personally-identifiable-information/concepts/entity-categories.md) capability.
+
+ Currently, **PII** supports the following native document formats:
+
+|File type|File extension|Description|
+||--|--|
+|Text| `.txt`|An unformatted text document.|
+|Adobe PDF| `.pdf` |A portable document file formatted document.|
+|Microsoft Word|`.docx`|A Microsoft Word document file.|
+
+For more information, *see* [**Use native documents for language processing**](../native-document-support/use-native-documents.md).
+
+## Get started with PII detection
+
-## Responsible AI
+## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for PII](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who use it, the people affected by it, and the deployment environment. Read the [transparency note for PII](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. For more information, see the following articles:
[!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)] ## Example scenarios * **Apply sensitivity labels** - For example, based on the results from the PII service, a public sensitivity label might be applied to documents where no PII entities are detected. For documents where US addresses and phone numbers are recognized, a confidential label might be applied. A highly confidential label might be used for documents where bank routing numbers are recognized.
-* **Redact some categories of personal information from documents that get wider circulation** - For example, if customer contact records are accessible to first line support representatives, the company may want to redact the customer's personal information besides their name from the version of the customer history to preserve the customer's privacy.
-* **Redact personal information in order to reduce unconscious bias** - For example, during a company's resume review process, they may want to block name, address and phone number to help reduce unconscious gender or other biases.
+* **Redact some categories of personal information from documents that get wider circulation** - For example, if customer contact records are accessible to frontline support representatives, the company can redact the customer's personal information besides their name from the version of the customer history to preserve the customer's privacy.
+* **Redact personal information in order to reduce unconscious bias** - For example, during a company's resume review process, they can block name, address and phone number to help reduce unconscious gender or other biases.
+* **Replace personal information in source data for machine learning to reduce unfairness** - For example, if you want to remove names that might reveal gender when training a machine learning model, you could use the service to identify them and replace them with generic placeholders for model training. * **Remove personal information from call center transcription** - For example, if you want to remove names or other PII data exchanged between the agent and the customer in a call center scenario, you could use the service to identify and remove them. * **Data cleaning for data science** - PII detection can be used to prepare data for data scientists and engineers to train their machine learning models, redacting it to make sure that customer data isn't exposed.
An AI system includes not only the technology, but also the people who will use
There are two ways to get started using the entity linking feature: * [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Language service features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
+* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/overview.md
Summarization is one of the features offered by [Azure AI Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language. Use this article to learn more about this feature, and how to use it in your applications.
-Note that though the services are labeled document and conversation summarization, document summarization only accepts plain text blocks, and conversation summarization will accept various speech artifacts in order for the model to learn more. If you want to process a conversation but only care about text, you can use document summarization for that scenario.
+Though the services are labeled document and conversation summarization, document summarization only accepts plain text blocks, while conversation summarization accepts various speech artifacts in order for the model to learn more. If you want to process a conversation but only care about text, you can use document summarization for that scenario.
Custom Summarization enables users to build custom AI models to summarize unstructured text, such as contracts or novels. By creating a Custom Summarization project, developers can iteratively label data, train, evaluate, and improve model performance before making it available for consumption. The quality of the labeled data greatly impacts model performance. To simplify building and customizing your model, the service offers a custom web portal that can be accessed through the [Language studio](https://aka.ms/languageStudio). You can easily get started with the service by following the steps in this [quickstart](custom/quickstart.md).
This documentation contains the following article types:
* **[Quickstarts](quickstart.md?pivots=rest-api&tabs=document-summarization)** are getting-started instructions to guide you through making requests to the service. * **[How-to guides](how-to/document-summarization.md)** contain instructions for using the service in more specific or customized ways.
-Document summarization uses natural language processing techniques to generate a summary for documents. There are two general approaches to automatic summarization, both of which are supported by the API: extractive and abstractive.
+Document summarization uses natural language processing techniques to generate a summary for documents. There are two supported API approaches to automatic summarization: extractive and abstractive.
-Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words which are not simply extract sentences from the original document. These features are designed to shorten content that could be considered too long to read.
+Extractive summarization extracts sentences that collectively represent the most important or relevant information within the original content. Abstractive summarization generates a summary with concise, coherent sentences or words that aren't extracted verbatim from the original document. These features are designed to shorten content that could be considered too long to read.
+
+## Native document support
+
+A native document refers to the file format used to create the original document, such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing before using Azure AI Language resource capabilities. Currently, native document support is available for both [**AbstractiveSummarization**](../summarization/how-to/document-summarization.md#try-document-abstractive-summarization) and [**ExtractiveSummarization**](../summarization/how-to/document-summarization.md#try-document-extractive-summarization) capabilities.
+
+ Currently **Document Summarization** supports the following native document formats:
+
+|File type|File extension|Description|
+|--|--|--|
+|Text| `.txt`|An unformatted text document.|
+|Adobe PDF| `.pdf` |A portable document file formatted document.|
+|Microsoft Word|`.docx`|A Microsoft Word document file.|
+
+For more information, *see* [**Use native documents for language processing**](../native-document-support/use-native-documents.md).
## Key features This API provides two types of document summarization: * **Extractive summarization**: Produces a summary by extracting salient sentences within the document.
- * Multiple extracted sentences: These sentences collectively convey the main idea of the document. TheyΓÇÖre original sentences extracted from the input documentΓÇÖs content.
- * Rank score: The rank score indicates how relevant a sentence is to a document's main topic. Document summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
- * Multiple returned sentences: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary extractive summarization will return the three highest scored sentences.
- * Positional information: The start position and length of extracted sentences.
-* **Abstractive summarization**: Generates a summary that may not use the same words as those in the document, but captures the main idea.
- * Summary texts: Abstractive summarization returns a summary for each contextual input range within the document. A long document may be segmented so multiple groups of summary texts may be returned with their contextual input range.
- * Contextual input range: The range within the input document that was used to generate the summary text.
+
+ * Multiple extracted sentences: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content.
+ * Rank score: The rank score indicates how relevant a sentence is to a document's main topic. Document summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank.
+ * Multiple returned sentences: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary, extractive summarization returns the three highest scored sentences.
+ * Positional information: The start position and length of extracted sentences.
+
+* **Abstractive summarization**: Generates a summary that might not use the same words as those in the document, but captures the main idea.
+ * Summary texts: Abstractive summarization returns a summary for each contextual input range within the document. A long document can be segmented so multiple groups of summary texts can be returned with their contextual input range.
+ * Contextual input range: The range within the input document that was used to generate the summary text.
As an example, consider the following paragraph of text:
-*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
+*"At Microsoft, we are on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there's magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we achieve human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
-The document summarization API request is processed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](../concepts/multilingual-emoji-support.md) for more information.
+The document summarization API request is processed upon receipt by creating a job for the API backend. If the job succeeds, the output of the API is returned. The output is available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response can contain text offsets. For more information, see [how to process offsets](../concepts/multilingual-emoji-support.md).
-Using the above example, the API might return the following summarized sentences:
+If we use the above example, the API might return these summarized sentences:
**Extractive summarization**:-- "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."-- "We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages."-- "The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today."
+- "At Microsoft, we are on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding."
+- "We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages."
+- "The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today."
**Abstractive summarization**:-- "Microsoft is taking a more holistic, human-centric approach to learning and understanding. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. Over the past five years, we have achieved human performance on benchmarks in."
+- "Microsoft is taking a more holistic, human-centric approach to learning and understanding. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. Over the past five years, we achieved human performance on benchmarks in conversational speech recognition."
# [Conversation summarization](#tab/conversation-summarization)
Conversation summarization supports the following features:
## When to use issue and resolution summarization
-* When there are aspects of an ΓÇ£issueΓÇ¥ and ΓÇ£resolutionΓÇ¥, such as:
+* When there are aspects of an "issue" and "resolution," such as:
* The reason for a service chat/call (the issue). * The resolution for the issue. * You only want a summary that focuses on related information about issues and resolutions.
Conversation summarization supports the following features:
As an example, consider the following example conversation:
-**Agent**: "*Hello, youΓÇÖre chatting with Rene. How may I help you?*"
+**Agent**: "*Hello, you're chatting with Rene. How may I help you?*"
-**Customer**: "*Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didnΓÇÖt work.*"
+**Customer**: "*Hi, I tried to set up wifi connection for Smart Brew 300 espresso machine, but it didn't work.*"
-**Agent**: "*IΓÇÖm sorry to hear that. LetΓÇÖs see what we can do to fix this issue. Could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking?*"
+**Agent**: "*I'm sorry to hear that. Let's see what we can do to fix this issue. Could you push the wifi connection button, hold for 3 seconds, then let me know if the power light is slowly blinking?*"
**Customer**: "*Yes, I pushed the wifi connection button, and now the power light is slowly blinking.*"
As an example, consider the following example conversation:
**Customer**: "*No. Nothing happened.*"
-**Agent**: "*I see. Thanks. LetΓÇÖs try if a factory reset can solve the issue. Could you please press and hold the center button for 5 seconds to start the factory reset.*"
+**Agent**: "*I see. Thanks. Let's try if a factory reset can solve the issue. Could you please press and hold the center button for 5 seconds to start the factory reset.*"
-**Customer**: *"IΓÇÖve tried the factory reset and followed the above steps again, but it still didnΓÇÖt work."*
+**Customer**: *"I've tried the factory reset and followed the above steps again, but it still didn't work."*
-**Agent**: "*IΓÇÖm very sorry to hear that. Let me see if thereΓÇÖs another way to fix the issue. Please hold on for a minute.*"
+**Agent**: "*I'm very sorry to hear that. Let me see if there's another way to fix the issue. Please hold on for a minute.*"
-Conversation summarization feature would simplify the text into the following:
+The conversation summarization feature would simplify the text as follows:
|Example summary | Format | Conversation aspect | |--|--|--|
Conversation summarization feature would simplify the text into the following:
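The following Python sketch shows roughly how such an issue/resolution summary could be requested. It assumes the `azure-ai-language-conversations` client library, placeholder endpoint and key values, and a shortened version of the chat above; the exact task payload shape can differ by API version, so treat it as a sketch:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.language.conversations import ConversationAnalysisClient

# Placeholder endpoint and key for illustration only.
client = ConversationAnalysisClient(
    "https://<your-language-resource>.cognitiveservices.azure.com/",
    AzureKeyCredential("<your-key>"),
)

poller = client.begin_conversation_analysis(
    task={
        "displayName": "Issue and resolution summarization",
        "analysisInput": {
            "conversations": [
                {
                    "id": "1",
                    "language": "en",
                    "modality": "text",
                    "conversationItems": [
                        {"id": "1", "participantId": "Agent", "role": "Agent",
                         "text": "Hello, you're chatting with Rene. How may I help you?"},
                        {"id": "2", "participantId": "Customer", "role": "Customer",
                         "text": "I tried to set up wifi for my Smart Brew 300, but it didn't work."},
                    ],
                }
            ]
        },
        "tasks": [
            {
                "taskName": "Issue and resolution",
                "kind": "ConversationalSummarizationTask",
                "parameters": {"summaryAspects": ["issue", "resolution"]},
            }
        ],
    }
)

result = poller.result()
for task in result["tasks"]["items"]:
    for conversation in task["results"]["conversations"]:
        for summary in conversation["summaries"]:
            # Each summary carries the aspect it addresses (issue or resolution) and its text.
            print(summary["aspect"], ":", summary["text"])
```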
# [Document summarization](#tab/document-summarization)
-* Summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) in the how-to guide for more information.
-* Summarization works with a variety of written languages. See [language support](language-support.md?tabs=document-summarization) for more information.
+* Summarization takes text for analysis. For more information, see [Data and service limits](../concepts/data-limits.md) in the how-to guide.
+* Summarization works with various written languages. For more information, see [language support](language-support.md?tabs=document-summarization).
# [Conversation summarization](#tab/conversation-summarization)
-* Conversation summarization takes structured text for analysis. See the [data and service limits](../concepts/data-limits.md) for more information.
-* Conversation summarization accepts text in English. See [language support](language-support.md?tabs=conversation-summarization) for more information.
+* Conversation summarization takes structured text for analysis. For more information, see [data and service limits](../concepts/data-limits.md).
+* Conversation summarization accepts text in English. For more information, see [language support](language-support.md?tabs=conversation-summarization).
As you use document summarization in your applications, see the following refere
## Responsible AI
-An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which itΓÇÖs deployed. Read the [transparency note for summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
+An AI system includes not only the technology, but also the people who use it, the people affected by it, and the deployment environment. Read the [transparency note for summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. For more information, see the following articles:
* [Transparency note for Azure AI Language](/legal/cognitive-services/language-service/transparency-note?context=/azure/ai-services/language-service/context/context) * [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-summarization?context=/azure/ai-services/language-service/context/context)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/whats-new.md
Previously updated : 04/14/2023 Last updated : 01/31/2024
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## January 2024
+
+* [Native document support](native-document-support/use-native-documents.md) is now available in `2023-11-15-preview` public preview.
+ ## November 2023 * [Named Entity Recognition Container](./named-entity-recognition/how-to/use-containers.md) is now Generally Available (GA).
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
## April 2023 * [Custom Text analytics for health](./custom-text-analytics-for-health/overview.md) is available in public preview, which enables you to build custom AI models to extract healthcare specific entities from unstructured text
-* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below.
- * Auto-label your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md).
+* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the following links:
+ * Autolabel your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md).
* Generate suggested utterances in [Conversational language understanding](./conversational-language-understanding/how-to/tag-utterances.md#suggest-utterances-with-azure-openai).
-* The latest model version (2022-10-01) for Language Detection now supports 6 more International languages and 12 Romanized Indic languages.
+* The latest model version (`2022-10-01`) for Language Detection now supports 6 more International languages and 12 Romanized Indic languages.
## March 2023
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
## February 2023
-* Conversational language understanding and orchestration workflow is now available in the following regions in the sovereign cloud for China:
+* Conversational language understanding and orchestration workflow are now available in the following regions in the sovereign cloud for China:
* China East 2 (Authoring and Prediction) * China North 2 (Prediction) * New model evaluation updates for Conversational language understanding and Orchestration workflow.
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* The summarization feature now has the following capabilities: * [Document summarization](./summarization/overview.md):
- * Abstractive summarization, which generates a summary of a document that may not use the same words as those in the document, but captures the main idea.
+ * Abstractive summarization, which generates a summary of a document that might not use the same words as presented in the document, but captures the main idea.
* [Conversation summarization](./summarization/overview.md?tabs=document-summarization?tabs=conversation-summarization) * Chapter title summarization, which returns suggested chapter titles of input conversations. * Narrative summarization, which returns call notes, meeting notes or chat summaries of input conversations.
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* [Orchestration workflow](./orchestration-workflow/overview.md) * [Custom text classification](./custom-text-classification/overview.md) * [Custom named entity recognition](./custom-named-entity-recognition/overview.md)
-* [Regular expressions](./conversational-language-understanding/concepts/entity-components.md#regex-component) in conversational language understanding and [required components](./conversational-language-understanding/concepts/entity-components.md#required-components), offering an additional ability to influence entity predictions.
+* [Regular expressions](./conversational-language-understanding/concepts/entity-components.md#regex-component) in conversational language understanding and [required components](./conversational-language-understanding/concepts/entity-components.md#required-components), offering an added ability to influence entity predictions.
* [Entity resolution](./named-entity-recognition/concepts/entity-resolutions.md) in named entity recognition * New region support for: * [Conversational language understanding](./conversational-language-understanding/service-limits.md#regional-availability)
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* Central India * Switzerland North * West US 2
-* Text Analytics for Health now [supports additional languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German Italian, Portuguese and Hebrew. These languages are available when using a docker container to deploy the API service.
+* Text Analytics for Health now [supports more languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German, Italian, Portuguese, and Hebrew. These languages are available when using a Docker container to deploy the API service.
* The Azure.AI.TextAnalytics client library v5.2.0 is generally available and ready for use in production applications. For more information on Language service client libraries, see the [**Developer overview**](./concepts/developer-guide.md). * Java * [**Package (Maven)**](https://mvnrepository.com/artifact/com.azure/azure-ai-textanalytics/5.2.0)
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* Conversational PII is now available in all Azure regions supported by the Language service.
-* A new version of the Language API (`2022-07-01-preview`) has been released. It provides:
+* A new version of the Language API (`2022-07-01-preview`) is available. It provides:
* [Automatic language detection](./concepts/use-asynchronously.md#automatic-language-detection) for asynchronous tasks. * Text Analytics for health confidence scores are now returned in relations.
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* [Python](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-language-conversations_1.0.0/sdk/cognitivelanguage/azure-ai-language-conversations) * v1.1.0b1 client library for [conversation summarization](summarization/quickstart.md?tabs=conversation-summarization&pivots=programming-language-python) is available as a preview for: * [Python](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-language-conversations_1.1.0b1/sdk/cognitivelanguage/azure-ai-language-conversations/samples/README.md)
-* There is a new endpoint URL and request format for making REST API calls to prebuilt Language service features. See the following quickstart guides and reference documentation for information on structuring your API calls. All text analytics 3.2-preview.2 API users can begin migrating their workloads to this new endpoint.
+* There's a new endpoint URL and request format for making REST API calls to prebuilt Language service features. See the following quickstart guides and reference documentation for information on structuring your API calls. All text analytics `3.2-preview.2` API users can begin migrating their workloads to this new endpoint.
* [Entity linking](./entity-linking/quickstart.md?pivots=rest-api) * [Language detection](./language-detection/quickstart.md?pivots=rest-api) * [Key phrase extraction](./key-phrase-extraction/quickstart.md?pivots=rest-api)
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* Model improvements for latest model-version for [text summarization](summarization/overview.md)
-* Model 2021-10-01 is Generally Available (GA) for [Sentiment Analysis and Opinion Mining](sentiment-opinion-mining/overview.md), featuring enhanced modeling for emojis and better accuracy across all supported languages.
+* Model `2021-10-01` is Generally Available (GA) for [Sentiment Analysis and Opinion Mining](sentiment-opinion-mining/overview.md), featuring enhanced modeling for emojis and better accuracy across all supported languages.
* [Question Answering](question-answering/overview.md): Active learning v2 incorporates better clustering logic, providing improved accuracy of suggestions. It considers user actions when suggestions are accepted or rejected to avoid duplicate suggestions and improve query suggestions. ## December 2021
-* The version 3.1-preview.x REST endpoints and 5.1.0-beta.x client library have been retired. Please upgrade to the General Available version of the API(v3.1). If you're using the client libraries, use package version 5.1.0 or higher. See the [migration guide](./concepts/migrate-language-service-latest.md) for details.
+* The version 3.1-preview.x REST endpoints and 5.1.0-beta.x client library are retired. Upgrade to the generally available version of the API (v3.1). If you're using the client libraries, use package version 5.1.0 or higher. See the [migration guide](./concepts/migrate-language-service-latest.md) for details.
## November 2021
-* Based on ongoing customer feedback, we have increased the character limit per document for Text Analytics for health from 5,120 to 30,720.
+* Based on ongoing customer feedback, we increased the character limit per document for Text Analytics for health from 5,120 to 30,720.
* Azure AI Language release, with support for:
Azure AI Language is updated on an ongoing basis. To stay up-to-date with recent
* Preview model version `2021-10-01-preview` for [Sentiment Analysis and Opinion mining](sentiment-opinion-mining/overview.md), which provides: * Improved prediction quality.
- * [Additional language support](sentiment-opinion-mining/language-support.md?tabs=sentiment-analysis) for the opinion mining feature.
+ * [Added language support](sentiment-opinion-mining/language-support.md?tabs=sentiment-analysis) for the opinion mining feature.
* For more information, see the [project z-code site](https://www.microsoft.com/research/project/project-zcode/). * To use this [model version](sentiment-opinion-mining/how-to/call-api.md#specify-the-sentiment-analysis-model), you must specify it in your API calls, using the model version parameter.
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
One of the key features of Azure OpenAI on your data is its ability to retrieve
To get started, [connect your data source](../use-your-data-quickstart.md) using Azure OpenAI Studio and start asking questions and chatting on your data. > [!NOTE]
-> To get started, you need to already have been approved for [Azure OpenAI access](../overview.md#how-do-i-get-access-to-azure-openai) and have an [Azure OpenAI Service resource](../how-to/create-resource.md) with either the gpt-35-turbo or the gpt-4 models deployed.
+> To get started, you need to already have been approved for [Azure OpenAI access](../overview.md#how-do-i-get-access-to-azure-openai) and have an [Azure OpenAI Service resource](../how-to/create-resource.md) deployed in a [supported region](#azure-openai-on-your-data-regional-availability) with either the gpt-35-turbo or the gpt-4 models.
## Data formats and file types
class TokenEstimator(object):
token_output = TokenEstimator.estimate_tokens(input_text) ```
+## Azure OpenAI on your data regional availability
+
+You can use Azure OpenAI on your data with an Azure OpenAI resource in the following regions:
+* Australia East
+* Brazil South
+* Canada East
+* East US
+* East US 2
+* France Central
+* Japan East
+* North Central US
+* Norway East
+* South Central US
+* South India
+* Sweden Central
+* Switzerland North
+* UK South
+* West Europe
+* West US
+
+If your Azure OpenAI resource is in another region, you won't be able to use Azure OpenAI on your data.
+ ## Next steps * [Get started using your data with Azure OpenAI](../use-your-data-quickstart.md)
ai-services Dynamic Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/dynamic-quota.md
+
+ Title: Azure OpenAI Service dynamic quota
+
+description: Learn how to use Azure OpenAI dynamic quota
+#
++++ Last updated : 01/30/2024++++
+# Azure OpenAI Dynamic quota (Preview)
+
+Dynamic quota is an Azure OpenAI feature that enables a standard (pay-as-you-go) deployment to opportunistically take advantage of more quota when extra capacity is available. When dynamic quota is off, your deployment can process at most the throughput established by your Tokens Per Minute (TPM) setting, and requests beyond your preset TPM return HTTP 429 responses. When dynamic quota is enabled, the deployment can access higher throughput before returning 429 responses, allowing you to make more calls earlier. The extra requests are still billed at the [regular pricing rates](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/).
+
+Dynamic quota can only temporarily *increase* your available quota: it will never decrease below your configured value.
+
+## When to use dynamic quota
+
+Dynamic quota is useful in most scenarios, particularly when your application can use extra capacity opportunistically or the application itself is driving the rate at which the Azure OpenAI API is called.
+
+Typically, the situation in which you might prefer to avoid dynamic quota is when your application would provide an adverse experience if quota is volatile or increased.
+
+For dynamic quota, consider scenarios such as:
+
+* Bulk processing
+* Creating summarizations or embeddings for Retrieval Augmented Generation (RAG)
+* Offline analysis of logs for generation of metrics and evaluations
+* Low-priority research
+* Apps that have a small amount of quota allocated
+
+### When does dynamic quota come into effect?
+
+The Azure OpenAI backend decides if, when, and how much extra dynamic quota is added to or removed from different deployments. It isn't forecasted or announced in advance, and isn't predictable. Azure OpenAI lets your application know that no extra quota is currently available by responding with an HTTP 429 and not letting more API calls through. To take advantage of dynamic quota, your application code must be able to issue more requests as HTTP 429 responses become infrequent.
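For example, a simple retry-with-backoff loop along the following lines keeps issuing requests and backs off on HTTP 429, so the application naturally consumes extra throughput whenever dynamic quota makes it available. This is a minimal sketch assuming the `openai` Python package (1.x) and placeholder endpoint, key, and deployment names:

```python
import time

from openai import AzureOpenAI, RateLimitError

# Placeholder endpoint, key, and deployment name for illustration only.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2023-05-15",
)

def complete_with_backoff(prompt: str, max_attempts: int = 5) -> str:
    """Retry on HTTP 429 so the app uses extra quota whenever it's available."""
    for attempt in range(max_attempts):
        try:
            response = client.chat.completions.create(
                model="<your-deployment-name>",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except RateLimitError:
            # A 429 means no extra quota right now; back off briefly and retry.
            time.sleep(2 ** attempt)
    raise RuntimeError("Gave up after repeated 429 responses")
```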
+
+### How does dynamic quota change costs?
+
+* Calls that are done above your base quota have the same costs as regular calls.
+
+* There's no extra cost to turn on dynamic quota on a deployment, though the increased throughput could ultimately result in increased cost depending on the amount of traffic your deployment receives.
+
+> [!NOTE]
+> With dynamic quota, there's no enforced "ceiling" on quota or throughput. Azure OpenAI processes as many requests as it can above your baseline quota. If you need to control the rate of spend even when quota is less constrained, your application code needs to hold back requests accordingly.
+
+## How to use dynamic quota
+
+To use dynamic quota, you must:
+
+* Turn on the dynamic quota property in your Azure OpenAI deployment.
+* Make sure your application can take advantage of dynamic quota.
+
+### Enable dynamic quota
+
+To activate dynamic quota for your deployment, you can go to the advanced properties in the resource configuration, and switch it on:
++
+Alternatively, you can enable it programmatically with Azure CLI's [`az rest`](/cli/azure/reference-index?view=azure-cli-latest#az-rest&preserve-view=true):
+
+Replace `{subscriptionId}`, `{resourceGroupName}`, `{accountName}`, and `{deploymentName}` with the relevant values for your resource. In this case, `accountName` is the name of your Azure OpenAI resource.
+
+```azurecli
+az rest --method patch --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/deployments/{deploymentName}?api-version=2023-10-01-preview" --body '{"properties": {"dynamicThrottlingEnabled": true} }'
+```
+
+### How do I know how much throughput dynamic quota is adding to my app?
+
+To monitor how it's working, you can track the throughput of your application in Azure Monitor. During the Preview of dynamic quota, there's no specific metric or log to indicate if quota has been dynamically increased or decreased.
+Dynamic quota is less likely to be engaged for your deployment if it runs in heavily utilized regions and during peak hours of use for those regions.
+
+## Next steps
+
+* Learn more about how [quota works](./quota.md).
+* Learn more about [monitoring Azure OpenAI](./monitoring.md).
++
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
| Parameter | Type | Required? | Default | Description | |--|--|--|--|--|
-| ```file```| file | Yes | N/A | The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.<br/><br/>The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, break it into chunks. Alternatively you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#using-whisper-models) API.<br/><br/>You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). |
+| ```file```| file | Yes | N/A | The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.<br/><br/>The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, break it into chunks. Alternatively you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.<br/><br/>You can get sample audio files from the [Azure AI Speech SDK repository at GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/audiofiles). |
| ```language``` | string | No | Null | The language of the input audio such as `fr`. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format improves accuracy and latency.<br/><br/>For the list of supported languages, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). | | ```prompt``` | string | No | Null | An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.<br/><br/>For more information about prompts including example use cases, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). | | ```response_format``` | string | No | json | The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.<br/><br/>The default value is *json*. |
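As a rough illustration of these parameters, here's a hedged Python sketch that assumes the `openai` package (1.x), a hypothetical Whisper deployment name, placeholder endpoint and key values, and a local audio file:

```python
from openai import AzureOpenAI

# Placeholder endpoint, key, and Whisper deployment name for illustration only.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2023-09-01-preview",  # assumed preview version; use one that supports Whisper
)

with open("speech.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="<your-whisper-deployment-name>",  # the deployment name, not the base model name
        file=audio_file,
        language="en",            # optional ISO-639-1 hint for accuracy and latency
        response_format="json",   # json, text, srt, verbose_json, or vtt
    )

print(transcript.text)
```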
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. [See Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An Azure OpenAI resource with a chat model deployed (for example, GPT-3 or GPT-4). For more information about model deployment, see the [resource deployment guide](./how-to/create-resource.md).
+- An Azure OpenAI resource in a [supported region](./concepts/use-your-data.md#azure-openai-on-your-data-regional-availability) with a chat model deployed (for example, GPT-3 or GPT-4). For more information about model deployment, see the [resource deployment guide](./how-to/create-resource.md).
- Your chat model can use version `gpt-35-turbo (0301)`, `gpt-35-turbo-16k`, `gpt-4`, and `gpt-4-32k`. You can view or change your model version in [Azure OpenAI Studio](./how-to/working-with-models.md#model-updates).
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Azure OpenAI Service now supports the GPT-3.5 Turbo Instruct model. This model h
Azure OpenAI Service now supports speech to text APIs powered by OpenAI's Whisper model. Get AI-generated text based on the speech audio you provide. To learn more, check out the [quickstart](./whisper-quickstart.md). > [!NOTE]
-> Azure AI Speech also supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](../speech-service/batch-transcription-create.md#using-whisper-models) guide. Check out [What is the Whisper model?](../speech-service/whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service.
+> Azure AI Speech also supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) guide. Check out [What is the Whisper model?](../speech-service/whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service.
### New Regions
ai-services Whisper Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md
zone_pivot_groups: openai-whisper
In this quickstart, you use the Azure OpenAI Whisper model for speech to text.
-The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#using-whisper-models) API.
+The file size limit for the Azure OpenAI Whisper model is 25 MB. If you need to transcribe a file larger than 25 MB, you can use the Azure AI Speech [batch transcription](../speech-service/batch-transcription-create.md#use-a-whisper-model) API.
## Prerequisites
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
ai-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md
The batch transcription API supports many different formats and codecs, such as:
## Azure Blob Storage upload
-When audio files are located in an [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md) account, you can request transcription of individual audio files or an entire Azure Blob Storage container. You can also [write transcription results](batch-transcription-create.md#destination-container-url) to a Blob container.
+When audio files are located in an [Azure Blob Storage](../../storage/blobs/storage-blobs-overview.md) account, you can request transcription of individual audio files or an entire Azure Blob Storage container. You can also [write transcription results](batch-transcription-create.md#specify-a-destination-container-url) to a Blob container.
> [!NOTE] > For blob and container limits, see [batch transcription quotas and limits](speech-services-quotas-and-limits.md#batch-transcription).
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Title: Create a batch transcription - Speech service
-description: With batch transcriptions, you submit the audio, and then retrieve transcription results asynchronously.
+description: Learn how to use Azure AI Speech for batch transcriptions, where you submit audio and then retrieve transcription results asynchronously.
Previously updated : 1/18/2024 Last updated : 1/26/2024 zone_pivot_groups: speech-cli-rest
+#customer intent: As a user who implements audio transcription, I want to create transcriptions in bulk so that I don't have to submit audio content repeatedly.
# Create a batch transcription
+With batch transcriptions, you submit [audio data](batch-transcription-audio-data.md) in a batch. The service transcribes the audio data and stores the results in a storage container. You can then [retrieve the results](batch-transcription-get.md) from the storage container.
+ > [!IMPORTANT]
-> New pricing is in effect for batch transcription via [Speech to text REST API v3.2](./migrate-v3-1-to-v3-2.md). For more information, see the [pricing guide](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
+> New pricing is in effect for batch transcription by using [Speech to text REST API v3.2](./migrate-v3-1-to-v3-2.md). For more information, see the [pricing guide](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
-With batch transcriptions, you submit the [audio data](batch-transcription-audio-data.md), and then retrieve transcription results asynchronously. The service transcribes the audio data and stores the results in a storage container. You can then [retrieve the results](batch-transcription-get.md) from the storage container.
+## Prerequisites
-> [!NOTE]
-> To use batch transcription, you need to use a standard (S0) Speech resource. Free resources (F0) aren't supported. For more information, see [pricing and limits](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+- The [Speech SDK](quickstarts/setup-platform.md) installed.
+- A standard (S0) Speech resource. Free resources (F0) aren't supported.
## Create a transcription job
With batch transcriptions, you submit the [audio data](batch-transcription-audio
To create a transcription, use the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation of the [Speech to text REST API](rest-speech-to-text.md#transcriptions). Construct the request body according to the following instructions: - You must set either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).-- Set the required `locale` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.
+- Set the required `locale` property. This value should match the expected locale of the audio data to transcribe. You can't change the locale later.
- Set the required `displayName` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later.-- Optionally to use a model other than the base model, set the `model` property to the model ID. For more information, see [Using custom models](#using-custom-models) and [Using Whisper models](#using-whisper-models).-- Optionally you can set the `wordLevelTimestampsEnabled` property to `true` to enable word-level timestamps in the transcription results. The default value is `false`. For Whisper models set the `displayFormWordLevelTimestampsEnabled` property instead. Whisper is a display-only model, so the lexical field isn't populated in the transcription.-- Optionally you can set the `languageIdentification` property. Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). If you set the `languageIdentification` property, then you must also set `languageIdentification.candidateLocales` with candidate locales.
+- Optionally, to use a model other than the base model, set the `model` property to the model ID. For more information, see [Use a custom model](#use-a-custom-model) and [Use a Whisper model](#use-a-whisper-model).
+- Optionally, set the `wordLevelTimestampsEnabled` property to `true` to enable word-level timestamps in the transcription results. The default value is `false`. For Whisper models, set the `displayFormWordLevelTimestampsEnabled` property instead. Whisper is a display-only model, so the lexical field isn't populated in the transcription.
+- Optionally, set the `languageIdentification` property. Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification). If you set the `languageIdentification` property, then you must also set `languageIdentification.candidateLocales` with candidate locales.
+
+For more information, see [Request configuration options](#request-configuration-options).
-For more information, see [request configuration options](#request-configuration-options).
+Make an HTTP POST request that uses the URI as shown in the following [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) example.
-Make an HTTP POST request using the URI as shown in the following [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) example. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourServiceRegion` with your Speech resource region.
+- Set the request body properties as previously described.
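For a rough sense of how these pieces fit together, here's a hedged Python sketch of the same request using the `requests` package; the endpoint path assumes Speech to text REST API v3.1, and the key and region are placeholders. The curl example that follows shows the equivalent call:

```python
import requests

# Placeholder key and region for illustration only.
speech_key = "YourSubscriptionKey"
service_region = "YourServiceRegion"

body = {
    "contentUrls": [
        "https://crbn.us/hello.wav",
        "https://crbn.us/whatstheweatherlike.wav",
    ],
    "locale": "en-US",                  # must match the audio; can't be changed later
    "displayName": "My Transcription",  # any name you can refer to later
    "properties": {
        "wordLevelTimestampsEnabled": True,
    },
}

response = requests.post(
    f"https://{service_region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
    headers={"Ocp-Apim-Subscription-Key": speech_key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
print(response.json()["self"])  # the transcription's URI; use it to check status and get results
```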
```azurecli-interactive curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
regularly from the service, after you retrieve the results. Alternatively, set t
To create a transcription, use the `spx batch transcription create` command. Construct the request parameters according to the following instructions: -- Set the required `content` parameter. You can specify either a semi-colon delimited list of individual files, or the URL for an entire container. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).-- Set the required `language` property. This should match the expected locale of the audio data to transcribe. The locale can't be changed later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
+- Set the required `content` parameter. You can specify a semi-colon delimited list of individual files or the URL for an entire container. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
+- Set the required `language` property. This value should match the expected locale of the audio data to transcribe. You can't change the locale later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
- Set the required `name` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response. Here's an example Speech CLI command that creates a transcription job:
-```azurecli-interactive
+```azurecli
spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav ```
The top-level `self` property in the response body is the transcription's URI. U
For Speech CLI help with transcriptions, run the following command:
-```azurecli-interactive
+```azurecli
spx help batch transcription ```
Here are some property options that you can use to configure a transcription whe
| Property | Description | |-|-| |`channels`|An array of channel numbers to process. Channels `0` and `1` are transcribed by default. |
-|`contentContainerUrl`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
-|`contentUrls`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
-|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information such as the supported security scenarios, see [Destination container URL](#destination-container-url).|
-|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) contains a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and more or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later. If you set this property with any previous version (such as version 3.0), then it's ignored and only 2 speakers are identified.|
-|`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use property `diarization` (only with Speech to text REST API version 3.1 and later).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
+|`contentContainerUrl`| You can submit individual audio files or a whole storage container.<br/><br/>You must specify the audio data location by using either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property isn't returned in the response.|
+|`contentUrls`| You can submit individual audio files or a whole storage container.<br/><br/>You must specify the audio data location by using either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property isn't returned in the response.|
+|`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information, such as the supported security scenarios, see [Specify a destination container URL](#specify-a-destination-container-url).|
+|`diarization`|Indicates that the Speech service should attempt diarization analysis on the input, which is expected to be a mono channel that contains multiple voices. The feature isn't available with stereo recordings.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings.<br/><br/>Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) contains a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers, setting the `diarizationEnabled` property to `true` is enough. For an example of the property usage, see [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create).<br/><br/>The maximum number of speakers for diarization must be less than 36 and greater than or equal to the `minSpeakers` property. For an example, see [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later. If you set this property with any previous version, such as version 3.0, it's ignored and only two speakers are identified.|
+|`diarizationEnabled`|Specifies that the Speech service should attempt diarization analysis on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use the `diarization` property. Use only with Speech to text REST API version 3.1 and later.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
|`displayName`|The name of the batch transcription. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
-|`displayFormWordLevelTimestampsEnabled`|Specifies whether to include word-level timestamps on the display form of the transcription results. The results are returned in the displayWords property of the transcription file. The default value is `false`.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
+|`displayFormWordLevelTimestampsEnabled`|Specifies whether to include word-level timestamps on the display form of the transcription results. The results are returned in the `displayWords` property of the transcription file. The default value is `false`.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
|`languageIdentification`|Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).<br/><br/>If you set the `languageIdentification` property, then you must also set its enclosed `candidateLocales` property.|
-|`languageIdentification.candidateLocales`|The candidate locales for language identification such as `"properties": { "languageIdentification": { "candidateLocales": ["en-US", "de-DE", "es-ES"]}}`. A minimum of 2 and a maximum of 10 candidate locales, including the main locale for the transcription, is supported.|
-|`locale`|The locale of the batch transcription. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.<br/><br/>This property is required.|
-|`model`|You can set the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models) and [Using Whisper models](#using-whisper-models).|
+|`languageIdentification.candidateLocales`|The candidate locales for language identification, such as `"properties": { "languageIdentification": { "candidateLocales": ["en-US", "de-DE", "es-ES"]}}`. A minimum of two and a maximum of ten candidate locales, including the main locale for the transcription, is supported.|
+|`locale`|The locale of the batch transcription. This value should match the expected locale of the audio data to transcribe. The locale can't be changed later.<br/><br/>This property is required.|
+|`model`|You can set the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Use a custom model](#use-a-custom-model) and [Use a Whisper model](#use-a-whisper-model).|
|`profanityFilterMode`|Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`.|
|`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.<br/><br/>This property isn't applicable for Whisper models.|
|`timeToLive`|A duration after the transcription job is created, when the transcription results will be automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. As an alternative, you can call [Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete) regularly after you retrieve the transcription results.|
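To make the relationship between these properties concrete, here's a minimal sketch of a Transcriptions_Create request that enables diarization for up to five speakers. The key, region, content URL, and speaker counts are placeholders, and the exact shape of the `diarization` object should be confirmed against the [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) reference.

```azurecli
# Sketch only: replace the region, key, content URL, and speaker counts with your own values.
curl -v -X POST "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions" \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  -H "Content-Type: application/json" \
  -d '{
    "displayName": "My Transcription",
    "locale": "en-US",
    "contentUrls": ["https://crbn.us/hello.wav"],
    "properties": {
      "diarizationEnabled": true,
      "diarization": { "speakers": { "minCount": 1, "maxCount": 5 } },
      "wordLevelTimestampsEnabled": true
    }
  }'
```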
Here are some property options that you can use to configure a transcription whe
For Speech CLI help with transcription configuration options, run the following command:
-```azurecli-interactive
+```azurecli
spx help batch transcription create advanced ``` ::: zone-end
-## Using custom models
+## Use a custom model
-Batch transcription uses the default base model for the locale that you specify. You don't need to set any properties to use the default base model.
+Batch transcription uses the default base model for the locale that you specify. You don't need to set any properties to use the default base model.
-Optionally, you can modify the previous [create transcription example](#create-a-batch-transcription) by setting the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model.
+Optionally, you can modify the previous [create transcription example](#create-a-transcription-job) by setting the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model.
::: zone pivot="rest-api"
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
::: zone pivot="speech-cli"
-```azurecli-interactive
+```azurecli
spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/base/1aae1070-7972-47e9-a977-87e3b05c457d" ``` ::: zone-end
-To use a custom speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide.
+To use a custom speech model for batch transcription, you need the model's URI. The top-level `self` property in the response body is the model's URI. You can retrieve the model location when you create or get a model. For more information, see the JSON response example in [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model).
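For example, here's a hedged sketch of retrieving a model so that you can copy its `self` URI; the region, key, and model ID are placeholders:

```azurecli
# Sketch only: replace the region, key, and model ID with your own values.
curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/models/YourModelId" \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
```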
> [!TIP]
-> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription.
+> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if you use the [custom speech model](how-to-custom-speech-train-model.md) only for batch transcription.
-Batch transcription requests for expired models fail with a 4xx error. You want to set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
+Batch transcription requests for expired models fail with a 4xx error. Set the `model` property to a base model or custom model that isn't expired. Alternatively, omit the `model` property to always use the latest base model. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [Custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
-## Using Whisper models
+## Use a Whisper model
-Azure AI Speech supports OpenAI's Whisper model via the batch transcription API.
+Azure AI Speech supports OpenAI's Whisper model by using the batch transcription API.
> [!NOTE]
-> Azure OpenAI Service also supports OpenAI's Whisper model for speech to text with a synchronous REST API. To learn more, check out the [quickstart](../openai/whisper-quickstart.md). Check out [What is the Whisper model?](./whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service.
+> Azure OpenAI Service also supports OpenAI's Whisper model for speech to text with a synchronous REST API. To learn more, see [Speech to text with the Azure OpenAI Whisper model](../openai/whisper-quickstart.md). For more information about when to use Azure AI Speech vs. Azure OpenAI Service, see [What is the Whisper model?](./whisper-overview.md)
-To use a Whisper model for batch transcription, you also need to set the `model` property. Whisper is a display-only model, so the lexical field isn't populated in the response.
+To use a Whisper model for batch transcription, you need to set the `model` property. Whisper is a display-only model, so the lexical field isn't populated in the response.
> [!IMPORTANT]
-> Whisper models are currently in preview. And you should always use [version 3.2](./migrate-v3-1-to-v3-2.md) of the speech to text API (that's available in a seperate preview) for Whisper models.
+> Whisper models are currently in preview. You should always use [version 3.2](./migrate-v3-1-to-v3-2.md) of the speech to text API, which is available in a separate preview, for Whisper models.
-Whisper models via batch transcription are supported in the East US, Southeast Asia, and West Europe regions.
+Whisper models through batch transcription are supported in the East US, Southeast Asia, and West Europe regions.
::: zone pivot="rest-api"
-You can make a [Models_ListBaseModels](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-2-preview1/operations/Models_ListBaseModels) request to get available base models for all locales.
+You can make a [Models_ListBaseModels](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-2-preview1/operations/Models_ListBaseModels) request to get available base models for all locales.
Make an HTTP GET request as shown in the following example for the `eastus` region. Replace `YourSubscriptionKey` with your Speech resource key. Replace `eastus` if you're using a different region.
Make an HTTP GET request as shown in the following example for the `eastus` regi
curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" ```
-By default only the 100 oldest base models are returned, so you can use the `skip` and `top` query parameters to page through the results. For example, the following request returns the next 100 base models after the first 100.
+By default, only the 100 oldest base models are returned. Use the `skip` and `top` query parameters to page through the results. For example, the following request returns the next 100 base models after the first 100.
```azurecli-interactive curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base?skip=100&top=100" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
-``````
+```
::: zone-end ::: zone pivot="speech-cli"
-Make sure that you set the [configuration variables](spx-basics.md#create-a-resource-configuration) for a Speech resource in one of the supported regions. You can run the `spx csr list --base` command to get available base models for all locales.
+Make sure that you set the [configuration variables](spx-basics.md#create-a-resource-configuration) for a Speech resource in one of the supported regions. You can run the `spx csr list --base` command to get available base models for all locales.
-```azurecli-interactive
+```azurecli
spx csr list --base --api-version v3.2-preview.1 ```+ ::: zone-end The `displayName` property of a Whisper model contains "Whisper Preview" as shown in this example. Whisper is a display-only model, so the lexical field isn't populated in the transcription.
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
::: zone pivot="speech-cli"
-```azurecli-interactive
+```azurecli
spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2-preview.1/models/base/d9cbeee6-582b-47ad-b5c1-6226583c92b6" --api-version v3.2-preview.1 ``` ::: zone-end -
-## Destination container URL
+## Specify a destination container URL
The transcription result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. In that case, when the transcription job is deleted, the transcription result data is also deleted.
-You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note however that this option is only using [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). This option also doesn't support Access policy based SAS. The Storage account resource of the destination container must allow all external traffic.
+You can store the results of a batch transcription in a writable Azure Blob storage container by using the `destinationContainerUrl` option in the [batch transcription creation request](#create-a-transcription-job). This option uses only an [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). This option also doesn't support access policy based SAS. The storage account resource of the destination container must allow all external traffic.
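As a minimal sketch, the option is passed in the `properties` object of the creation request; the key, region, content URL, and container SAS URI below are placeholders:

```azurecli
# Sketch only: the key, region, content URL, and container SAS URI are placeholders.
curl -v -X POST "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions" \
  -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  -H "Content-Type: application/json" \
  -d '{
    "displayName": "My Transcription",
    "locale": "en-US",
    "contentUrls": ["https://crbn.us/hello.wav"],
    "properties": {
      "destinationContainerUrl": "https://YourStorageAccount.blob.core.windows.net/YourContainer?sv=..."
    }
  }'
```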
-If you would like to store the transcription results in an Azure Blob storage container via the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), then you should consider using [Bring-your-own-storage (BYOS)](bring-your-own-storage-speech-resource.md). See details on how to use BYOS-enabled Speech resource for Batch transcription in [this article](bring-your-own-storage-speech-resource-speech-to-text.md).
+If you want to store the transcription results in an Azure Blob storage container by using the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), consider using [Bring-your-own-storage (BYOS)](bring-your-own-storage-speech-resource.md). For more information, see [Use the Bring your own storage (BYOS) Speech resource for speech to text](bring-your-own-storage-speech-resource-speech-to-text.md).
-## Next steps
+## Related content
- [Batch transcription overview](batch-transcription.md) - [Locate audio files for batch transcription](batch-transcription-audio-data.md)
ai-services Bring Your Own Storage Speech Resource Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource-speech-to-text.md
Perform these steps to execute Batch transcription with BYOS-enabled Speech reso
> [!IMPORTANT] > Don't use `destinationContainerUrl` parameter in your transcription request. If you use BYOS, the transcription results are stored in the BYOS-associated Storage account automatically. >
- > If you use `destinationContainerUrl` parameter, it will work, but provide significantly less security for your data, because of ad hoc SAS usage. See details [here](batch-transcription-create.md#destination-container-url).
+ > If you use the `destinationContainerUrl` parameter, it works, but it provides significantly less security for your data because of ad hoc SAS usage. For more information, see [Specify a destination container URL](batch-transcription-create.md#specify-a-destination-container-url).
1. When transcription is complete, get transcription results according to [this guide](batch-transcription-get.md). Consider using `sasValidityInSeconds` parameter (see the following section).
ai-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-speech-synthesis-viseme.md
synthesizer.visemeReceived = function (s, e) {
window.console.log("(Viseme), Audio offset: " + e.audioOffset / 10000 + "ms. Viseme ID: " + e.visemeId); // `Animation` is an xml string for SVG or a json string for blend shapes
- var animation = e.Animation;
+ var animation = e.animation;
} // If VisemeID is the only thing you want, you can also use `speakTextAsync()`
ai-services Migrate V3 1 To V3 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/migrate-v3-1-to-v3-2.md
The `LanguageIdentificationMode` is added to `LanguageIdentificationProperties`
### Whisper models
-Azure AI Speech now supports OpenAI's Whisper model via Speech to text REST API v3.2. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#using-whisper-models) guide.
+Azure AI Speech now supports OpenAI's Whisper model via Speech to text REST API v3.2. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#use-a-whisper-model) guide.
> [!NOTE] > Azure OpenAI Service also supports OpenAI's Whisper model for speech to text with a synchronous REST API. To learn more, check out the [quickstart](../openai/whisper-quickstart.md). Check out [What is the Whisper model?](./whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service.
ai-services Personal Voice How To Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-how-to-use.md
You need to use [speech synthesis markup language (SSML)](./speech-synthesis-mar
- The `speakerProfileId` property in SSML is used to specify the [speaker profile ID](./personal-voice-create-voice.md) for the personal voice. -- The voice name is specified in the `name` property in SSML. For personal voice, the voice name must be set to `PhoenixV2Neural` or another supported base model voice name. To get a list of supported base model voice names, use the `BaseModels_List` operation of the custom voice API.
+- The voice name is specified in the `name` property in SSML. For personal voice, the voice name must be one of the supported base model voice names. To get a list of supported base model voice names, use the `BaseModels_List` operation of the custom voice API.
+
+ > [!NOTE]
+ > Voice names labeled with `Latest`, such as `DragonLatestNeural` or `PhoenixLatestNeural`, are updated from time to time; their performance might vary as updates roll out for ongoing improvements. If you want to use a stable version, select one labeled with a version number, such as `PhoenixV2Neural`.
+- `Dragon` is a base model with superior voice cloning similarity compared to `Phoenix`. `Phoenix` is a base model with more accurate pronunciation and lower latency than `Dragon`.
+
Here's example SSML in a request for text to speech with the voice name and the speaker profile ID. ```xml <speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'>
- <voice name='PhoenixV2Neural'>
+ <voice name='DragonLatestNeural'>
<mstts:ttsembedding speakerProfileId='your speaker profile ID here'> I'm happy to hear that you find me amazing and that I have made your trip planning easier and more fun. 我很高兴听到你觉得我很了不起,我让你的旅行计划更轻松、更有趣。Je suis heureux d'apprendre que vous me trouvez incroyable et que j'ai rendu la planification de votre voyage plus facile et plus amusante. </mstts:ttsembedding>
ai-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/releasenotes.md
Azure AI Speech is updated on an ongoing basis. To stay up-to-date with recent d
## Recent highlights
-* Azure AI Speech now supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#using-whisper-models) guide.
+* Azure AI Speech now supports OpenAI's Whisper model via the batch transcription API. To learn more, check out the [Create a batch transcription](./batch-transcription-create.md#use-a-whisper-model) guide.
* [Speech to text REST API version 3.2](./migrate-v3-1-to-v3-2.md) is available in public preview. * [Real-time diarization](./get-started-stt-diarization.md) is in public preview.
ai-services Whisper Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/whisper-overview.md
Either the Whisper model or the Azure AI Speech models are appropriate depending
| Scenario | Whisper model | Azure AI Speech models | |||| | Real-time transcriptions, captions, and subtitles for audio and video. | Not available | Recommended |
-| Transcriptions, captions, and subtitles for prerecorded audio and video. | The Whisper model via [Azure OpenAI](../openai/whisper-quickstart.md) is recommended for fast processing of individual audio files. The Whisper model via [Azure AI Speech](./batch-transcription-create.md#using-whisper-models) is recommended for batch processing of large files. For more information, see [Whisper model via Azure AI Speech or via Azure OpenAI Service?](#whisper-model-via-azure-ai-speech-or-via-azure-openai-service) | Recommended for batch processing of large files, diarization, and word level timestamps. |
+| Transcriptions, captions, and subtitles for prerecorded audio and video. | The Whisper model via [Azure OpenAI](../openai/whisper-quickstart.md) is recommended for fast processing of individual audio files. The Whisper model via [Azure AI Speech](./batch-transcription-create.md#use-a-whisper-model) is recommended for batch processing of large files. For more information, see [Whisper model via Azure AI Speech or via Azure OpenAI Service?](#whisper-model-via-azure-ai-speech-or-via-azure-openai-service) | Recommended for batch processing of large files, diarization, and word level timestamps. |
| Transcript of phone call recordings and analytics such as call summary, sentiment, key topics, and custom insights. | Available | Recommended | | Real-time transcription and analytics to assist call center agents with customer questions. | Not available | Recommended | | Transcript of meeting recordings and analytics such as meeting summary, meeting chapters, and action item extraction. | Available | Recommended |
Either the Whisper model or the Azure AI Speech models are appropriate depending
## Whisper model via Azure AI Speech or via Azure OpenAI Service?
-You can choose whether to use the Whisper Model via [Azure OpenAI](../openai/whisper-quickstart.md) or via [Azure AI Speech](./batch-transcription-create.md#using-whisper-models). In either case, the readability of the transcribed text is the same. You can input mixed language audio and the output is in English.
+You can choose whether to use the Whisper model via [Azure OpenAI](../openai/whisper-quickstart.md) or via [Azure AI Speech](./batch-transcription-create.md#use-a-whisper-model). In either case, the readability of the transcribed text is the same. You can input mixed-language audio, and the output is in English.
Whisper Model via Azure OpenAI Service might be best for: - Quickly transcribing audio files one at a time
Regional support is another consideration.
## Next steps -- [Use Whisper models via the Azure AI Speech batch transcription API](./batch-transcription-create.md#using-whisper-models)
+- [Use Whisper models via the Azure AI Speech batch transcription API](./batch-transcription-create.md#use-a-whisper-model)
- [Try the speech to text quickstart for Whisper via Azure OpenAI](../openai/whisper-quickstart.md) - [Try the real-time speech to text quickstart via Azure AI Speech](./get-started-speech-to-text.md)
ai-services Document Translation Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/quickstarts/document-translation-rest-api.md
Previously updated : 07/18/2023 Last updated : 01/17/2024 recommendations: false ms.devlang: csharp
To get started, you need:
1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](../how-to-guides/create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**.
- 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
+ 1. **Name**. Enter the name you chose for your resource. The name you choose must be unique within Azure.
> [!NOTE] > Document Translation requires a custom domain endpoint. The value that you enter in the Name field will be the custom domain name parameter for your endpoint.
To get started, you need:
1. Review the service terms and select **Create** to deploy your resource.
- 1. After your resource has successfully deployed, select **Go to resource**.
+ 1. After your resource successfully deploys, select **Go to resource**.
-<!-- > [!div class="nextstepaction"]
-> [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Prerequisites) -->
### Retrieve your key and document translation endpoint *Requests to the Translator service require a read-only key and custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal.
-1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
+1. If you created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
-1. Copy and paste your **`key`** and **`document translation endpoint`** in a convenient location, such as *Microsoft Notepad*. Only one key is necessary to make an API call.
-
-1. You paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service.
+1. You can copy and paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service. Only one key is necessary to make an API call.
:::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal.":::
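If you prefer to keep these values out of your source files, one option is to hold them in environment variables and read them from your code. The variable names here are arbitrary and not part of the quickstart:

```azurecli
# Hypothetical variable names; store the real key somewhere secure, such as Azure Key Vault.
export TRANSLATOR_KEY="<your-key>"
export DOCUMENT_TRANSLATION_ENDPOINT="https://<your-resource-name>.cognitiveservices.azure.com"
```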
-<!-- > [!div class="nextstepaction"]
-> [I ran into an issue retrieving my key and endpoint.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Retrieve-your-keys-and-endpoint) -->
- ## Create Azure Blob Storage containers You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
You need to [**create containers**](../../../../storage/blobs/storage-quickstart
The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs. *See* [**Create SAS tokens for Document Translation process**](../how-to-guides/create-sas-tokens.md).
-* Your **source** container or blob must have designated **read** and **list** access.
-* Your **target** container or blob must have designated **write** and **list** access.
-* Your **glossary** blob must have designated **read** and **list** access.
+* Your **source** container or blob must designate **read** and **list** access.
+* Your **target** container or blob must designate **write** and **list** access.
+* Your **glossary** blob must designate **read** and **list** access.
> [!TIP] >
The `sourceUrl` , `targetUrl` , and optional `glossaryUrl` must include a Share
> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**. > * As an alternative to SAS tokens, you can use a [**system-assigned managed identity**](../how-to-guides/create-use-managed-identities.md) for authentication.
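To illustrate where these SAS tokens end up, here's a hedged sketch of a batch translation request that passes the source and target container URLs with their SAS query strings appended. The route, API version, and all values are placeholders; confirm them against the code samples later in this quickstart.

```azurecli
# Sketch only: the endpoint, route, API version, and SAS URLs are placeholders.
curl -v -X POST "https://<your-resource-name>.cognitiveservices.azure.com/translator/text/batch/v1.1/batches" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      {
        "source": { "sourceUrl": "https://<storage-account>.blob.core.windows.net/source?<read-list-sas>" },
        "targets": [
          { "targetUrl": "https://<storage-account>.blob.core.windows.net/target?<write-list-sas>", "language": "es" }
        ]
      }
    ]
  }'
```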
-<!-- > [!div class="nextstepaction"]
-> [I ran into an issue creating blob storage containers with authentication.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Create-blob-storage-containers) -->
### Sample document
That's it, congratulations! In this quickstart, you used Document Translation to
## Next steps -
+> [!div class="nextstepaction"]
+> [**Learn more about Document Translation operations**](../reference/rest-api-guide.md)
ai-studio Rbac Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/rbac-ai-studio.md
Title: Role-based access control in Azure AI Studio
-description: This article introduces role-based access control in Azure AI Studio
+description: This article introduces role-based access control in Azure AI Studio.
-# Role-based access control in Azure AI Studio
+# Role-based access control in Azure AI Studio
[!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)]
ai-studio Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-runtime.md
Automatic is the default option for a runtime. You can start an automatic runtim
On a flow page, you can use the following options to manage an automatic runtime: -- **Install packages** triggers `pip install -r requirements.txt` in the flow folder. The process can take a few minutes, depending on the packages that you install.
+- **Install packages** opens `requirements.txt` in the prompt flow UI, where you can add the packages that you need.
+- **View installed packages** shows the packages that are installed in the runtime, including the packages baked into the base image and the packages specified in the `requirements.txt` file in the flow folder.
- **Reset** deletes the current runtime and creates a new one with the same environment. If you encounter a package conflict, you can try this option. - **Edit** opens the runtime configuration page, where you can define the VM side and the idle time for the runtime. - **Stop** deletes the current runtime. If there's no active runtime on the underlying compute, the compute resource is also deleted.
If you want to use a private feed in Azure DevOps, follow these steps:
:::image type="content" source="../media/prompt-flow/how-to-create-manage-runtime/runtime-advanced-setting-msi.png" alt-text="Screenshot that shows the toggle for using a workspace user-assigned managed identity." lightbox = "../media/prompt-flow/how-to-create-manage-runtime/runtime-advanced-setting-msi.png":::
+#### Change the base image for automatic runtime (preview)
+
+By default, the latest prompt flow image is used as the base image. If you want to use a different base image, you need to build your own. Your custom Docker image must be built from the prompt flow base image, `mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable:<newest_version>`. If possible, use the [latest version of the base image](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime-stable/tags/list). Specify the image in the flow's environment definition, as shown in the following example. To use the new base image, reset the runtime by using the `reset` command. This process takes several minutes because it pulls the new base image and reinstalls packages.
++
+```yaml
+environment:
+ image: <your-custom-image>
+ python_requirements_txt: requirements.txt
+```
+ ### Update a compute instance runtime on a runtime page Azure AI Studio gets regular updates to the base image (`mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable`) to include the latest features and bug fixes. To get the best experience and performance, periodically update your runtime to the [latest version](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime-stable/tags/list).
ai-studio Create Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-projects.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 1/31/2024
Once a project is created, you can access the **Tools**, **Components**, and **S
In the project details page (select **Build** > **Settings**), you can find information about the project, such as the project name, description, and the Azure AI resource that hosts the project. You can also find the project ID, which is used to identify the project in the Azure AI Studio API. -- Project name: The name of the project corresponds to the selected project in the left panel. The project name is also referenced in the *Welcome to the YOUR-PROJECT-NAME project* message on the main page. You can change the name of the project by selecting the edit icon next to the project name.-- Project description: The project description (if set) is shown directly below the *Welcome to the YOUR-PROJECT-NAME project* message on the main page. You can change the description of the project by selecting the edit icon next to the project description.
+- Project name: The name of the project corresponds to the selected project in the left panel.
- Azure AI resource: The Azure AI resource that hosts the project. -- Location: The location of the Azure AI resource that hosts the project. Azure AI resources are supported in the same regions as Azure OpenAI.
+- Location: The location of the Azure AI resource that hosts the project. For supported locations, see [Azure AI Studio regions](../reference/region-support.md).
- Subscription: The subscription that hosts the Azure AI resource that hosts the project. - Resource group: The resource group that hosts the Azure AI resource that hosts the project.-- Container registry: The container for project files. Container registry allows you to build, store, and manage container images and artifacts in a private registry for all types of container deployments.-- Storage account: The storage account for the project.
+- Permissions: The users who have access to the project. For more information, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
-Select the Azure AI resource, subscription, resource group, container registry, or storage account to navigate to the corresponding resource in the Azure portal.
+Select the Azure AI resource, subscription, or resource group to navigate to the corresponding resource in the Azure portal.
## Next steps -- [Quickstart: Generate product name ideas in the Azure AI Studio playground](../quickstarts/playground-completions.md)
+- [QuickStart: Moderate text and images with content safety in Azure AI Studio](../quickstarts/content-safety.md)
- [Learn more about Azure AI Studio](../what-is-ai-studio.md) - [Learn more about Azure AI resources](../concepts/ai-resources.md)
ai-studio Data Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-add.md
To create and work with data, you need:
* An Azure subscription. If you don't have one, create a free account before you begin.
-* An Azure AI Studio project.
+* An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
## Create data
If you're using SDK or CLI to create data, you must specify a `path` that points
A data that is a File (`uri_file`) type points to a *single file* on storage (for example, a CSV file). You can create a file typed data using: -- # [Studio](#tab/azure-studio) These steps explain how to create a File typed data in the Azure AI Studio:
myfile = Data(
client.data.create_or_update(myfile) ``` - ### Create data: Folder type
ai-studio Data Image Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/data-image-add.md
Use this article to learn how to provide your own image data for GPT-4 Turbo wit
This guide is scoped to the Azure AI Studio playground, but you can also add image data via your project's **Data** page. See [Add data to your project](../how-to/data-add.md) for more information.
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](create-projects.md) in Azure AI Studio.
1. If you aren't already in the playground, select **Build** from the top menu and then select **Playground** from the collapsible left menu. 1. In the playground, make sure that **Chat** is selected from the **Mode** dropdown. Select your deployed GPT-4 Turbo with Vision model from the **Deployment** dropdown.
ai-studio Index Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/index-add.md
You must have:
## Create an index
-1. Sign in to Azure AI Studio and open the Azure AI project in which you want to create the index.
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
1. From the collapsible menu on the left, select **Indexes** under **Components**. :::image type="content" source="../media/index-retrieve/project-left-menu.png" alt-text="Screenshot of Project Left Menu." lightbox="../media/index-retrieve/project-left-menu.png":::
ai-studio Hear Speak Playground https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/hear-speak-playground.md
The speech to text and text to speech features can be used together or separatel
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a chat model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).
+- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
## Configure the playground
The speech to text and text to speech features can be used together or separatel
Before you can start a chat session, you need to configure the playground to use the speech to text and text to speech features. 1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
1. Select **Build** from the top menu and then select **Playground** from the collapsible left menu. 1. Make sure that **Chat** is selected from the **Mode** dropdown. Select your deployed chat model from the **Deployment** dropdown.
ai-studio Multimodal Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/multimodal-vision.md
Extra usage fees might apply for using GPT-4 Turbo with Vision and Azure AI Visi
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a GPT-4 Turbo with Vision model deployed in one of the regions that support GPT-4 Turbo with Vision: Australia East, Switzerland North, Sweden Central, and West US. When you deploy from your project's **Deployments** page, select: `gpt-4` as the model name and `vision-preview` as the model version.
+- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
## Start a chat session to analyze images or video
You need a video up to three minutes in length to complete the video quickstart.
In this chat session, you instruct the assistant to aid in understanding images that you input. 1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
1. Select **Build** from the top menu and then select **Playground** from the collapsible left menu. 1. Make sure that **Chat** is selected from the **Mode** dropdown. Select your deployed GPT-4 Turbo with Vision model from the **Deployment** dropdown. Under the chat session text box, you should now see the option to select a file.
ai-studio Playground Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/playground-completions.md
Use this article to get started making your first calls to Azure OpenAI.
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. - An [Azure AI resource](../how-to/create-azure-ai-resource.md) with a model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).-
+- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
### Try text completions To use the Azure OpenAI for text completions in the playground, follow these steps: 1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
1. From the Azure AI Studio Home page, select **Build** > **Playground**. 1. Select your deployment from the **Deployments** dropdown. 1. Select **Completions** from the **Mode** dropdown menu.
ai-studio Deploy Chat Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md
The steps in this tutorial are:
Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An Azure OpenAI resource with a model deployed. For more information about model deployment, see the [resource deployment guide](../../ai-services/openai/how-to/create-resource.md).
+- An [Azure AI resource](../how-to/create-azure-ai-resource.md) and [project](../how-to/create-projects.md) in Azure AI Studio.
- You need at least one file to upload that contains example data. To complete this tutorial, use the product information samples from the [Azure/aistudio-copilot-sample repository on GitHub](https://github.com/Azure/aistudio-copilot-sample/tree/main/data). Specifically, the [product_info_11.md](https://github.com/Azure/aistudio-copilot-sample/blob/main/dat` on your local computer.
The steps in this tutorial are:
Follow these steps to deploy a chat model and test it without your data.
-1. Sign in to [Azure AI Studio](https://ai.azure.com) with credentials that have access to your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource. You should be on the Azure AI Studio **Home** page.
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Go to your project or [create a new project](../how-to/create-projects.md) in Azure AI Studio.
1. Select **Build** from the top menu and then select **Deployments** > **Create**. :::image type="content" source="../media/tutorials/chat-web-app/deploy-create.png" alt-text="Screenshot of the deployments page without deployments." lightbox="../media/tutorials/chat-web-app/deploy-create.png":::
ai-studio Screen Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/screen-reader.md
Within **Explore**, you can explore many capabilities found within the secondary
## Projects
-To work within the Azure AI Studio, you must first create a project:
+To work within the Azure AI Studio, you must first [create a project](../how-to/create-projects.md):
1. Navigate to the Build tab in the primary navigation. 1. Press the Tab key until you hear *New project* and select this button. 1. Enter the information requested in the **Create a new project** dialog.
ai-studio What Is Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/what-is-ai-studio.md
Using Azure AI Studio also incurs cost associated with the underlying services,
Azure AI Studio is currently available in the following regions: Australia East, Brazil South, Canada Central, East US, East US 2, France Central, Germany West Central, India South, Japan East, North Central US, Norway East, Poland Central, South Africa North, South Central US, Sweden Central, Switzerland North, UK South, West Europe, and West US.
-To learn more, see [Azure global infrastructure - Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-services).
+To learn more, see [Azure AI Studio regions](./reference/region-support.md).
## How to get access
aks Active Active Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/active-active-solution.md
+
+ Title: Recommended active-active high availability solution overview for Azure Kubernetes Service (AKS)
+description: Learn about the recommended active-active high availability solution overview for Azure Kubernetes Service (AKS).
++++ Last updated : 01/30/2024++
+# Recommended active-active high availability solution overview for Azure Kubernetes Service (AKS)
+
+When you create an application in Azure Kubernetes Service (AKS) and choose an Azure region during resource creation, it's a single-region app. In the event of a disaster that causes the region to become unavailable, your application also becomes unavailable. If you create an identical deployment in a secondary Azure region, your application becomes less susceptible to a single-region disaster, which guarantees business continuity, and any data replication across the regions lets you recover your last application state.
+
+While there are multiple patterns that can provide recoverability for an AKS solution, this guide outlines the recommended active-active high availability solution for AKS. Within this solution, we deploy two independent and identical AKS clusters into two paired Azure regions with both clusters actively serving traffic.
+
+> [!NOTE]
+> The following use case can be considered standard practice within AKS. It has been reviewed internally and vetted in conjunction with our Microsoft partners.
+
+## Active-active high availability solution overview
+
+This solution relies on two identical AKS clusters configured to actively serve traffic. You place a global traffic manager, such as [Azure Front Door](../frontdoor/front-door-overview.md), in front of the two clusters to distribute traffic across them. You must consistently configure the clusters to host an instance of all applications required for the solution to function.
+
+Availability zones are another way to ensure high availability and fault tolerance for your AKS cluster within the same region. Availability zones allow you to distribute your cluster nodes across multiple isolated locations within an Azure region. This way, if one zone goes down due to a power outage, hardware failure, or network issue, your cluster can continue to run and serve your applications. Availability zones also improve the performance and scalability of your cluster by reducing the latency and contention among nodes. To set up availability zones for your AKS cluster, you need to specify the zone numbers when creating or updating your node pools. For more information, see [What are Azure availability zones?](../reliability/availability-zones-overview.md)
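For example, here's a minimal sketch of creating a cluster whose node pool spans three availability zones; the resource group, cluster name, node count, and zones are placeholders:

```azurecli
# Sketch only: resource group, cluster name, node count, and zones are placeholders.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --zones 1 2 3
```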
+
+> [!NOTE]
+> Many regions support availability zones. Consider using regions with availability zones to provide more resiliency and availability for your workloads. For more information, see [Recover from a region-wide service disruption](/azure/architecture/resiliency/recovery-loss-azure-region).
+
+## Scenarios and configurations
+
+This solution is best implemented when hosting stateless applications and/or with other technologies also deployed across both regions, such as horizontal scaling. In scenarios where the hosted application is reliant on resources, such as databases, that are actively in only one region, we recommend instead implementing an [active-passive solution](./active-passive-solution.md) for potential cost savings, as active-passive has more downtime than active-active.
+
+## Components
+
+The active-active high availability solution uses many Azure services. This section covers only the components unique to this multi-cluster architecture. For more information on the remaining components, see the [AKS baseline architecture](/azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=%2Fazure%2Faks%2Ftoc.json&bc=%2Fazure%2Faks%2Fbreadcrumb%2Ftoc.json).
+
+**Multiple clusters and regions**: You deploy multiple AKS clusters, each in a separate Azure region. During normal operations, your Azure Front Door configuration routes network traffic between all regions. If one region becomes unavailable, traffic routes to a region with the fastest load time for the user.
+
+**Hub-spoke network per region**: A regional hub-spoke network pair is deployed for each regional AKS instance. [Azure Firewall Manager](../firewall-manager/overview.md) policies manage the firewall policies across all regions.
+
+**Regional key store**: You provision [Azure Key Vault](../key-vault/general/overview.md) in each region to store sensitive values and keys specific to the AKS instance and to support services found in that region.
+
+**Azure Front Door**: [Azure Front Door](../frontdoor/front-door-overview.md) load balances and routes traffic to a regional [Azure Application Gateway](../application-gateway/overview.md) instance, which sits in front of each AKS cluster. Azure Front Door allows for *layer seven* global routing.
+
+**Log Analytics**: Regional [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) instances store regional networking metrics and diagnostic logs. A shared instance stores metrics and diagnostic logs for all AKS instances.
+
+**Container Registry**: The container images for the workload are stored in a managed container registry. With this solution, a single [Azure Container Registry](../container-registry/container-registry-intro.md) instance is used for all Kubernetes instances in the cluster. Geo-replication for Azure Container Registry enables you to replicate images to the selected Azure regions and provides continued access to images even if a region experiences an outage.
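For example, here's a minimal sketch of adding a replica of a registry in a second region; the registry name and region are placeholders, and geo-replication requires the Premium service tier:

```azurecli
# Sketch only: registry name and region are placeholders. Geo-replication requires the Premium tier.
az acr replication create --registry myContainerRegistry --location westeurope
```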
+
+## Failover process
+
+If a service or service component becomes unavailable in one region, traffic should be routed to a region where that service is available. A multi-region architecture includes many different failure points. In this section, we cover the potential failure points.
+
+### Application Pods (Regional)
+
+A Kubernetes deployment object creates multiple replicas of a pod (*ReplicaSet*). If one is unavailable, traffic is routed between the remaining replicas. The Kubernetes *ReplicaSet* attempts to keep the specified number of replicas up and running. If one instance goes down, a new instance should be recreated. [Liveness probes](../container-instances/container-instances-liveness-probe.md) can check the state of the application or process running in the pod. If the pod is unresponsive, the liveness probe removes the pod, which forces the *ReplicaSet* to create a new instance.
+
+For more information, see [Kubernetes ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/).
+
+### Application Pods (Global)
+
+When an entire region becomes unavailable, the pods in the cluster are no longer available to serve requests. In this case, the Azure Front Door instance routes all traffic to the remaining healthy regions. The Kubernetes clusters and pods in these regions continue to serve requests. To compensate for increased traffic and requests to the remaining cluster, keep in mind the following guidance:
+
+- Make sure network and compute resources are right sized to absorb any sudden increase in traffic due to region failover. For example, when using Azure Container Network Interface (CNI), make sure you have a subnet that can support all pod IPs with a spiked traffic load.
+- Use the [Horizontal Pod Autoscaler](./concepts-scale.md#horizontal-pod-autoscaler) to increase the pod replica count to compensate for the increased regional demand.
+- Use the AKS [Cluster Autoscaler](./cluster-autoscaler.md) to increase the Kubernetes instance node counts to compensate for the increased regional demand. Example commands for both autoscalers follow this list.
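The following commands are a minimal sketch of the last two recommendations; the deployment name, cluster name, replica counts, and CPU threshold are placeholders:

```azurecli
# Sketch only: names, counts, and CPU threshold are placeholders.
kubectl autoscale deployment my-app --cpu-percent=70 --min=3 --max=15
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 3 \
  --max-count 10
```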
+
+### Kubernetes node pools (Regional)
+
+Occasionally, localized failure can occur to compute resources, such as power becoming unavailable in a single rack of Azure servers. To protect your AKS nodes from becoming a regional single point of failure, use [Azure Availability Zones](./availability-zones.md). Availability zones ensure that AKS nodes in each availability zone are physically separated from those defined in other availability zones.
+
+### Kubernetes node pools (Global)
+
+In a complete regional failure, Azure Front Door routes traffic to the remaining healthy regions. Again, make sure to compensate for increased traffic and requests to the remaining cluster.
+
+## Failover testing strategy
+
+While there are no mechanisms currently available within AKS to take down an entire region of deployment for testing purposes, [Azure Chaos Studio](../chaos-studio/chaos-studio-overview.md) offers the ability to create a chaos experiment on your cluster.
+
+## Next steps
+
+If you're considering a different solution, see the following articles:
+
+- [Active passive disaster recovery solution overview for Azure Kubernetes Service (AKS)](./active-passive-solution.md)
+- [Passive cold solution overview for Azure Kubernetes Service (AKS)](./passive-cold-solution.md)
aks Active Passive Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/active-passive-solution.md
+
+ Title: Recommended active-passive disaster recovery solution overview for Azure Kubernetes Service (AKS)
+description: Learn about an active-passive disaster recovery solution overview for Azure Kubernetes Service (AKS).
++++ Last updated : 01/30/2024++
+# Active-passive disaster recovery solution overview for Azure Kubernetes Service (AKS)
+
+When you create an application in Azure Kubernetes Service (AKS) and choose an Azure region during resource creation, it's a single-region app. When the region becomes unavailable during a disaster, your application also becomes unavailable. If you create an identical deployment in a secondary Azure region, your application becomes less susceptible to a single-region disaster, which guarantees business continuity, and any data replication across the regions lets you recover your last application state.
+
+This guide outlines an active-passive disaster recovery solution for AKS. Within this solution, we deploy two independent and identical AKS clusters into two paired Azure regions with only one cluster actively serving traffic.
+
+> [!NOTE]
+> The following practice has been reviewed internally and vetted in conjunction with our Microsoft partners.
+
+## Active-passive solution overview
+
+In this disaster recovery approach, we have two independent AKS clusters deployed in two Azure regions. However, only one of the clusters is actively serving traffic at any one time. The secondary cluster (not actively serving traffic) contains the same configuration and application data as the primary cluster but doesn't accept any traffic unless Azure Front Door directs traffic to it.
+
+## Scenarios and configurations
+
+This solution is best implemented when hosting applications reliant on resources, such as databases, that actively serve traffic in one region. In scenarios where you need to host stateless applications deployed across both regions, such as horizontal scaling, we recommend considering an [active-active solution](./active-active-solution.md), as active-passive involves added latency.
+
+## Components
+
+The active-passive disaster recovery solution uses many Azure services. This example architecture involves the following components:
+
+**Multiple clusters and regions**: You deploy multiple AKS clusters, each in a separate Azure region. During normal operations, network traffic is routed to the primary AKS cluster set in the Azure Front Door configuration.
+
+**Configured cluster prioritization**: You set a priority level between 1 and 5 for each cluster (1 is the highest priority and 5 is the lowest priority). You can set multiple clusters to the same priority level and specify the weight for each cluster. If the primary cluster becomes unavailable, traffic automatically routes to the next region selected in Azure Front Door. All traffic must go through Azure Front Door for this system to work.
+
+**Azure Front Door**: [Azure Front Door](../frontdoor/front-door-overview.md) load balances and routes traffic to the [Azure Application Gateway](../application-gateway/overview.md) instance in the primary region (cluster must be marked with priority 1). In the event of a region failure, the service redirects traffic to the next cluster in the priority list.
+
+For more information, see [Priority-based traffic-routing](../frontdoor/routing-methods.md#priority-based-traffic-routing).
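As a hedged sketch, with an Azure Front Door Standard/Premium profile you might register each regional Application Gateway as an origin and set its priority and weight; the profile, origin group, origin, and host names are placeholders:

```azurecli
# Sketch only: all resource names and the host name are placeholders.
az afd origin create \
  --resource-group myResourceGroup \
  --profile-name myFrontDoorProfile \
  --origin-group-name aks-origins \
  --origin-name primary-region \
  --host-name appgw-primary.contoso.com \
  --priority 1 \
  --weight 1000 \
  --enabled-state Enabled
```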
+
+**Hub-spoke pair**: A hub-spoke pair is deployed for each regional AKS instance. [Azure Firewall Manager](../firewall-manager/overview.md) policies manage the firewall rules across each region.
+
+**Key Vault**: You provision an [Azure Key Vault](../key-vault/general/overview.md) in each region to store secrets and keys.
+
+**Log Analytics**: Regional [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) instances store regional networking metrics and diagnostic logs. A shared instance stores metrics and diagnostic logs for all AKS instances.
+
+**Container Registry**: The container images for the workload are stored in a managed container registry. With this solution, a single [Azure Container Registry](../container-registry/container-registry-intro.md) instance is used for all Kubernetes instances in the cluster. Geo-replication for Azure Container Registry enables you to replicate images to the selected Azure regions and provides continued access to images even if a region experiences an outage.
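+
+As a brief sketch (the registry name and region are placeholders), geo-replication is enabled on a Premium registry with the Azure CLI:
+
+```azurecli-interactive
+# Geo-replication requires the Premium SKU.
+az acr update --name myContainerRegistry --sku Premium
+
+# Replicate images to the secondary region so image pulls keep working during a regional outage.
+az acr replication create --registry myContainerRegistry --location canadaeast
+```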
+
+## Failover process
+
+If a service or service component becomes unavailable in one region, traffic should be routed to a region where that service is available. A multi-region architecture includes many different failure points. In this section, we cover the potential failure points.
+
+### Application Pods (Regional)
+
+A Kubernetes deployment object creates multiple replicas of a pod (*ReplicaSet*). If one replica is unavailable, traffic is routed to the remaining replicas. The Kubernetes *ReplicaSet* attempts to keep the specified number of replicas up and running. If one instance goes down, a new instance is created to replace it. [Liveness probes](../container-instances/container-instances-liveness-probe.md) can check the state of the application or process running in the pod. If the pod is unresponsive, the liveness probe fails and the container is restarted, while the *ReplicaSet* keeps the desired number of replicas running.
+
+For more information, see [Kubernetes ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/).
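+
+The following is a minimal sketch, not part of the reference architecture, of a Deployment with three replicas and an HTTP liveness probe (the name, image, and probe path are illustrative):
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: ha-demo
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: ha-demo
+  template:
+    metadata:
+      labels:
+        app: ha-demo
+    spec:
+      containers:
+      - name: ha-demo
+        image: nginx:1.25
+        ports:
+        - containerPort: 80
+        livenessProbe:
+          httpGet:
+            path: /
+            port: 80
+          initialDelaySeconds: 5
+          periodSeconds: 10
+EOF
+```
+
+If the liveness probe fails repeatedly, the container is restarted; if a pod is deleted or its node is lost, the ReplicaSet creates a replacement to restore the replica count of three.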
+
+### Application Pods (Global)
+
+When an entire region becomes unavailable, the pods in the cluster are no longer available to serve requests. In this case, the Azure Front Door instance routes all traffic to the remaining healthy regions. The Kubernetes clusters and pods in these regions continue to serve requests. To compensate for increased traffic and requests to the remaining cluster, keep in mind the following guidance:
+
+- Make sure network and compute resources are right sized to absorb any sudden increase in traffic due to region failover. For example, when using Azure Container Network Interface (CNI), make sure you have a subnet that can support all pod IPs with a spiked traffic load.
+- Use the [Horizontal Pod Autoscaler](./concepts-scale.md#horizontal-pod-autoscaler) to increase the pod replica count to compensate for the increased regional demand.
+- Use the AKS [Cluster Autoscaler](./cluster-autoscaler.md) to increase the node count in your node pools to compensate for the increased regional demand, as shown in the sketch after this list.
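+
+As a sketch of the autoscaling guidance above (the cluster, node pool, and deployment names are placeholders), you can raise the autoscaler limits on the surviving cluster and let the Horizontal Pod Autoscaler add replicas:
+
+```azurecli-interactive
+# Raise the cluster autoscaler ceiling on the remaining cluster's node pool.
+az aks nodepool update \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name nodepool1 \
+    --update-cluster-autoscaler \
+    --min-count 3 \
+    --max-count 10
+
+# Add pod replicas automatically as CPU utilization rises
+# (assumes the deployment defines CPU requests).
+kubectl autoscale deployment ha-demo --cpu-percent=70 --min=3 --max=20
+```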
+
+### Kubernetes node pools (Regional)
+
+Occasionally, localized failures can occur to compute resources, such as power becoming unavailable in a single rack of Azure servers. To protect your AKS nodes from becoming a regional single point of failure, use [Azure Availability Zones](./availability-zones.md). Availability zones ensure that AKS nodes in each availability zone are physically separated from those in other availability zones.
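+
+For example (resource names are placeholders), a node pool that spans availability zones can be created with the Azure CLI:
+
+```azurecli-interactive
+# Spread the nodes in this pool across three availability zones in the region.
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name zonepool \
+    --node-count 3 \
+    --zones 1 2 3
+```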
+
+### Kubernetes node pools (Global)
+
+In a complete regional failure, Azure Front Door routes traffic to the remaining healthy regions. Again, make sure to compensate for increased traffic and requests to the remaining cluster.
+
+## Failover testing strategy
+
+While there are no mechanisms currently available within AKS to take down an entire region of deployment for testing purposes, [Azure Chaos Studio](../chaos-studio/chaos-studio-overview.md) offers the ability to create a chaos experiment on your cluster.
+
+## Next steps
+
+If you're considering a different solution, see the following articles:
+
+- [Active active high availability solution overview for Azure Kubernetes Service (AKS)](./active-active-solution.md)
+- [Passive cold solution overview for Azure Kubernetes Service (AKS)](./passive-cold-solution.md)
aks App Routing Dns Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-dns-ssl.md
az keyvault create -g <ResourceGroupName> -l <Location> -n <KeyVaultName> --enab
### Create and export a self-signed SSL certificate
-> [!NOTE]
-> If you already have a certificate, you can skip this step.
->
+For testing, you can use a self-signed public certificate instead of a Certificate Authority (CA)-signed certificate. If you already have a certificate, you can skip this step.
+
+> [!CAUTION]
+> Self-signed certificates are digital certificates that are not signed by a trusted third-party CA. Self-signed certificates are created, issued, and signed by the company or developer who is responsible for the website or software being signed. This is why self-signed certificates are considered unsafe for public-facing websites and applications. Azure Key Vault has a [trusted partnership with some Certificate Authorities](../key-vault/certificates/how-to-integrate-certificate-authority.md).
+ 1. Create a self-signed SSL certificate to use with the Ingress using the `openssl req` command. Make sure you replace *`<Hostname>`* with the DNS name you're using. ```bash
aks Control Plane Metrics Default List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/control-plane-metrics-default-list.md
+
+ Title: List of control plane metrics in Azure Monitor managed service for Prometheus (preview)
+description: This article describes the minimal ingestion profile metrics for Azure Kubernetes Service (AKS) control plane metrics.
+ Last updated : 01/31/2024+++
+# Minimal ingestion profile for control plane Metrics in Managed Prometheus
+
+Azure Monitor metrics addon collects many Prometheus metrics by default. `Minimal ingestion profile` is a setting that helps reduce ingestion volume of metrics, as only metrics used by default dashboards, default recording rules and default alerts are collected. This article describes how this setting is configured specifically for control plane metrics. This article also lists metrics collected by default when `minimal ingestion profile` is enabled.
+
+> [!NOTE]
+> For addon based collection, `Minimal ingestion profile` setting is enabled by default. The discussion here is focused on control plane metrics. The current set of default targets and metrics is listed [here][azure-monitor-prometheus-metrics-scrape-config-minimal].
+
+The following targets are **enabled/ON** by default, meaning you don't have to provide any scrape job configuration for these targets; the metrics add-on scrapes them automatically:
+
+- `controlplane-apiserver` (job=`controlplane-apiserver`)
+- `controlplane-etcd` (job=`controlplane-etcd`)
+
+The following targets are available to scrape, but scraping isn't enabled (**disabled/OFF**) by default. You don't need to provide any scrape job configuration, but you do need to turn **ON/enable** scraping for these targets using the [ama-metrics-settings-configmap][ama-metrics-settings-configmap-github] under the `default-scrape-settings-enabled` section.
+
+- `controlplane-cluster-autoscaler`
+- `controlplane-kube-scheduler`
+- `controlplane-kube-controller-manager`
+
+> [!NOTE]
+> The default scrape frequency for all default targets and scrapes is `30 seconds`. You can override it for each target using the [ama-metrics-settings-configmap][ama-metrics-settings-configmap-github] under `default-targets-scrape-interval-settings` section.
+
+### Minimal ingestion for default ON targets
+
+The following metrics are allow-listed with `minimalingestionprofile=true` for default **ON** targets. The below metrics are collected by default, as these targets are scraped by default.
+
+**controlplane-apiserver**
+
+- `apiserver_request_total`
+- `apiserver_cache_list_fetched_objects_total`
+- `apiserver_cache_list_returned_objects_total`
+- `apiserver_flowcontrol_demand_seats_average`
+- `apiserver_flowcontrol_current_limit_seats`
+- `apiserver_request_sli_duration_seconds_bucket`
+- `apiserver_request_sli_duration_seconds_sum`
+- `apiserver_request_sli_duration_seconds_count`
+- `process_start_time_seconds`
+- `apiserver_request_duration_seconds_bucket`
+- `apiserver_request_duration_seconds_sum`
+- `apiserver_request_duration_seconds_count`
+- `apiserver_storage_list_fetched_objects_total`
+- `apiserver_storage_list_returned_objects_total`
+- `apiserver_current_inflight_requests`
+
+**controlplane-etcd**
+
+- `etcd_server_has_leader`
+- `rest_client_requests_total`
+- `etcd_mvcc_db_total_size_in_bytes`
+- `etcd_mvcc_db_total_size_in_use_in_bytes`
+- `etcd_server_slow_read_indexes_total`
+- `etcd_server_slow_apply_total`
+- `etcd_network_client_grpc_sent_bytes_total`
+- `etcd_server_heartbeat_send_failures_total`
+
+### Minimal ingestion for default OFF targets
+
+The following metrics are allow-listed with `minimalingestionprofile=true` for default **OFF** targets. These metrics aren't collected by default. You can turn **ON** scraping for these targets by setting `default-scrape-settings-enabled.<target-name>=true` in the [ama-metrics-settings-configmap][ama-metrics-settings-configmap-github].
+
+**controlplane-kube-controller-manager**
+
+- `workqueue_depth`
+- `rest_client_requests_total`
+- `rest_client_request_duration_seconds`
+
+**controlplane-kube-scheduler**
+
+- `scheduler_pending_pods`
+- `scheduler_unschedulable_pods`
+- `scheduler_queue_incoming_pods_total`
+- `scheduler_schedule_attempts_total`
+- `scheduler_preemption_attempts_total`
+
+**controlplane-cluster-autoscaler**
+
+- `rest_client_requests_total`
+- `cluster_autoscaler_last_activity`
+- `cluster_autoscaler_cluster_safe_to_autoscale`
+- `cluster_autoscaler_failed_scale_ups_total`
+- `cluster_autoscaler_scale_down_in_cooldown`
+- `cluster_autoscaler_scaled_up_nodes_total`
+- `cluster_autoscaler_unneeded_nodes_count`
+- `cluster_autoscaler_unschedulable_pods_count`
+- `cluster_autoscaler_nodes_count`
+- `cloudprovider_azure_api_request_errors`
+- `cloudprovider_azure_api_request_duration_seconds_bucket`
+- `cloudprovider_azure_api_request_duration_seconds_count`
+
+> [!NOTE]
+> The CPU and memory usage metrics for all control-plane targets are not exposed irrespective of the profile.
+
+## References
+
+- [Kubernetes Upstream metrics list][kubernetes-metrics-instrumentation-reference]
+
+- [Cluster autoscaler metrics list][kubernetes-metrics-autoscaler-reference]
+
+## Next steps
+
+- [Learn more about control plane metrics in Managed Prometheus](monitor-control-plane-metrics.md)
+
+<!-- EXTERNAL LINKS -->
+[ama-metrics-settings-configmap-github]: https://github.com/Azure/prometheus-collector/blob/89e865a73601c0798410016e9beb323f1ecba335/otelcollector/configmaps/ama-metrics-settings-configmap.yaml
+[kubernetes-metrics-instrumentation-reference]: https://kubernetes.io/docs/reference/instrumentation/metrics/
+[kubernetes-metrics-autoscaler-reference]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/proposals/metrics.md
+
+<!-- INTERNAL LINKS -->
+[azure-monitor-prometheus-metrics-scrape-config-minimal]: ../azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
As you work with the node resource group, keep in mind that you can't:
You might get unexpected scaling and upgrading errors if you modify or delete Azure-created tags and other resource properties in the node resource group. AKS allows you to create and modify custom tags created by end users, and you can add those tags when [creating a node pool](manage-node-pools.md#specify-a-taint-label-or-tag-for-a-node-pool). You might want to create or modify custom tags, for example, to assign a business unit or cost center. Another option is to create Azure Policies with a scope on the managed resource group.
-However, modifying any **Azure-created tags** on resources under the node resource group in the AKS cluster is an unsupported action, which breaks the service-level objective (SLO). For more information, see [Does AKS offer a service-level agreement?](#does-aks-offer-a-service-level-agreement)
+Azure-created tags are created for their respective Azure Services and should always be allowed. For AKS, there are the `aks-managed` and `k8s-azure` tags. Modifying any **Azure-created tags** on resources under the node resource group in the AKS cluster is an unsupported action, which breaks the service-level objective (SLO). For more information, see [Does AKS offer a service-level agreement?](#does-aks-offer-a-service-level-agreement)
+
+> [!NOTE]
+> In the past, the tag name "Owner" was reserved for AKS to manage the public IP assigned to the front-end IP of the load balancer. Now, services use the `aks-managed` prefix. For legacy resources, don't use Azure policies to apply the "Owner" tag name. Otherwise, all resources on your AKS cluster deployment and update operations will break. This doesn't apply to newly created resources.
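+
+As an illustrative sketch (the tag names and values are examples only), custom tags can be added when you create a node pool; leave the Azure-created `aks-managed` and `k8s-azure` tags untouched:
+
+```azurecli-interactive
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name tagged \
+    --node-count 1 \
+    --tags dept=IT costcenter=9000
+```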
## What Kubernetes admission controllers does AKS support? Can admission controllers be added or removed?
The following example shows an ip route setup of Transparent mode. Each Pod's in
## How to avoid permission ownership setting slow issues when the volume has numerous files?
-Traditionally if your pod is running as a nonroot user (which you should), you must specify a `fsGroup` inside the podΓÇÖs security context so the volume can be readable and writable by the Pod. This requirement is covered in more detail in [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
+Traditionally, if your pod is running as a nonroot user (which you should), you must specify a `fsGroup` inside the pod's security context so the volume can be readable and writable by the Pod. This requirement is covered in more detail [here](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
A side effect of setting `fsGroup` is that each time a volume is mounted, Kubernetes must recursively `chown()` and `chmod()` all the files and directories inside the volume (with a few exceptions noted below). This scenario happens even if group ownership of the volume already matches the requested `fsGroup`. It can be expensive for larger volumes with lots of small files, which can cause pod startup to take a long time. This scenario was a known problem before v1.20, and the workaround is setting the Pod to run as root:
Any patch, including a security patch, is automatically applied to the AKS clust
The AKS Linux Extension is an Azure VM extension that installs and configures monitoring tools on Kubernetes worker nodes. The extension is installed on all new and existing Linux nodes. It configures the following monitoring tools: - [Node-exporter](https://github.com/prometheus/node_exporter): Collects hardware telemetry from the virtual machine and makes it available using a metrics endpoint. Then, a monitoring tool, such as Prometheus, is able to scrap these metrics.-- [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the clusterΓÇÖs API server using Events and NodeConditions.
+- [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the cluster's API server using Events and NodeConditions.
- [ig](https://inspektor-gadget.io/docs/latest/ig/): An eBPF-powered open-source framework for debugging and observing Linux and Kubernetes systems. It provides a set of tools (or gadgets) designed to gather relevant information, allowing users to identify the cause of performance issues, crashes, or other anomalies. Notably, its independence from Kubernetes enables users to employ it also for debugging control plane issues. These tools help provide observability around many node health related problems, such as:
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
Graphical processing units (GPUs) are often used for compute-intensive workloads
This article helps you provision nodes with schedulable GPUs on new and existing AKS clusters. ## Supported GPU-enabled VMs+ To view supported GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6s_v3*. The NVv4 series (based on AMD GPUs) aren't supported on AKS. > [!NOTE] > GPU-enabled VMs contain specialized hardware subject to higher pricing and region availability. For more information, see the [pricing][azure-pricing] tool and [region availability][azure-availability]. ## Limitations
-* AKS does not support Windows GPU-enabled node pools.
+ * If you're using an Azure Linux GPU-enabled node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md).
-* [NVadsA10](../virtual-machines/nva10v5-series.md) v5-series are not a recommended SKU for GPU VHD.
+* [NVadsA10](../virtual-machines/nva10v5-series.md) v5-series are *not* a recommended SKU for GPU VHD.
+* AKS doesn't support Windows GPU-enabled node pools.
+* Updating an existing node pool to add GPU isn't supported.
## Before you begin * This article assumes you have an existing AKS cluster. If you don't have a cluster, create one using the [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal].
-* You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* You need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Get the credentials for your cluster
To view supported GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-sku
## Options for using NVIDIA GPUs
-There are three ways to add the NVIDIA device plugin:
-
-1. [Using the AKS GPU image](#update-your-cluster-to-use-the-aks-gpu-image-preview)
-2. [Manually installing the NVIDIA device plugin](#manually-install-the-nvidia-device-plugin)
-3. Using the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html)
-
-### Use NVIDIA GPU Operator with AKS
-You can use the NVIDIA GPU Operator by skipping the gpu driver installation on AKS. For more information about using the NVIDIA GPU Operator with AKS, see [NVIDIA Documentation](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html).
-
-Adding the node pool tag `SkipGPUDriverInstall=true` will skip installing the GPU driver automatically on newly created nodes in the node pool. Any existing nodes will not be changed - the pool can be scaled to 0 and back up to make the change take effect. You can specify the tag using the `--nodepool-tags` argument to [`az aks create`][az-aks-create] command (for a new cluster) or `--tags` with [`az aks nodepool add`][az-aks-nodepool-add] or [`az aks nodepool update`][az-aks-nodepool-update].
-
-> [!WARNING]
-> We don't recommend manually installing the NVIDIA device plugin daemon set with clusters using the AKS GPU image.
+Using NVIDIA GPUs involves installing various NVIDIA software components, such as the [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin?tab=readme-ov-file), the GPU driver, and more.
-### Update your cluster to use the AKS GPU image (preview)
+### Skip GPU driver installation (preview)
-AKS provides a fully configured AKS image containing the [NVIDIA device plugin for Kubernetes][nvidia-github].
+AKS has automatic GPU driver installation enabled by default. In some cases, such as installing your own drivers or using the NVIDIA GPU Operator, you may want to skip GPU driver installation.
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-1. Install the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] command.
+1. Install or update the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] or [`az extension update`][az-extension-update] command.
```azurecli-interactive
+ # Register the aks-preview extension
az extension add --name aks-preview
- ```
-
-2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command.
- ```azurecli-interactive
+ # Update the aks-preview extension
az extension update --name aks-preview ```
-3. Register the `GPUDedicatedVHDPreview` feature flag using the [`az feature register`][az-feature-register] command.
+2. Create a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--skip-gpu-driver-install` flag to skip automatic GPU driver installation.
```azurecli-interactive
- az feature register --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name gpunp \
+ --node-count 1 \
+ --skip-gpu-driver-install \
+ --node-vm-size Standard_NC6s_v3 \
+ --node-taints sku=gpu:NoSchedule \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3
```
- It takes a few minutes for the status to show *Registered*.
+ Adding the `--skip-gpu-driver-install` flag during node pool creation skips the automatic GPU driver installation. Any existing nodes aren't changed. You can scale the node pool to zero and then back up to make the change take effect.
-4. Verify the registration status using the [`az feature show`][az-feature-show] command.
+### NVIDIA device plugin installation
- ```azurecli-interactive
- az feature show --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
- ```
+NVIDIA device plugin installation is required when using GPUs on AKS. In some cases, the installation is handled automatically, such as when using the [NVIDIA GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/microsoft-aks.html) or the [AKS GPU image (preview)](#use-the-aks-gpu-image-preview). Alternatively, you can manually install the NVIDIA device plugin.
-5. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
+#### Manually install the NVIDIA device plugin
- ```azurecli-interactive
- az provider register --namespace Microsoft.ContainerService
- ```
+You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on each node to expose the GPUs to Kubernetes. This is the recommended approach when using GPU-enabled node pools for Azure Linux.
-#### Add a node pool for GPU nodes
+##### [Ubuntu Linux node pool (default SKU)](#tab/add-ubuntu-gpu-node-pool)
-Now that you updated your cluster to use the AKS GPU image, you can add a node pool for GPU nodes to your cluster.
+To use the default OS SKU, you create the node pool without specifying an OS SKU. The node pool is configured for the default operating system based on the Kubernetes version of the cluster.
-* Add a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command.
+1. Add a node pool to your cluster using the [`az aks nodepool add`][az-aks-nodepool-add] command.
```azurecli-interactive az aks nodepool add \
Now that you updated your cluster to use the AKS GPU image, you can add a node p
--node-count 1 \ --node-vm-size Standard_NC6s_v3 \ --node-taints sku=gpu:NoSchedule \
- --aks-custom-headers UseGPUDedicatedVHD=true \
--enable-cluster-autoscaler \ --min-count 1 \ --max-count 3 ```
- The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
+ This command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
- * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*.
- * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool.
- * `--aks-custom-headers`: Specifies a specialized AKS GPU image, *UseGPUDedicatedVHD=true*. If your GPU sku requires generation 2 VMs, use *--aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true* instead.
- * `--enable-cluster-autoscaler`: Enables the cluster autoscaler.
- * `--min-count`: Configures the cluster autoscaler to maintain a minimum of one node in the node pool.
- * `--max-count`: Configures the cluster autoscaler to maintain a maximum of three nodes in the node pool.
+ * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*.
+ * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool.
+ * `--enable-cluster-autoscaler`: Enables the cluster autoscaler.
+ * `--min-count`: Configures the cluster autoscaler to maintain a minimum of one node in the node pool.
+ * `--max-count`: Configures the cluster autoscaler to maintain a maximum of three nodes in the node pool.
> [!NOTE]
- > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time.
+ > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time.
-### Manually install the NVIDIA device plugin
+##### [Azure Linux node pool](#tab/add-azure-linux-gpu-node-pool)
-You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on each node to provide the required drivers for the GPUs.
+To use Azure Linux, you specify the OS SKU by setting `os-sku` to `AzureLinux` during node pool creation. The `os-type` is set to `Linux` by default.
-1. Add a node pool to your cluster using the [`az aks nodepool add`][az-aks-nodepool-add] command.
+1. Add a node pool to your cluster using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--os-sku` flag set to `AzureLinux`.
```azurecli-interactive az aks nodepool add \
You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on eac
--cluster-name myAKSCluster \ --name gpunp \ --node-count 1 \
+ --os-sku AzureLinux \
--node-vm-size Standard_NC6s_v3 \ --node-taints sku=gpu:NoSchedule \ --enable-cluster-autoscaler \
You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on eac
--max-count 3 ```
- The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
+ This command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
* `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*. * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool.
You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on eac
* `--max-count`: Configures the cluster autoscaler to maintain a maximum of three nodes in the node pool. > [!NOTE]
- > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time.
+ > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time. Certain SKUs, including A100 and H100 VM SKUs, aren't available for Azure Linux. For more information, see [GPU-optimized VM sizes in Azure][gpu-skus].
-2. Create a namespace using the [`kubectl create namespace`][kubectl-create] command.
+
- ```console
+1. Create a namespace using the [`kubectl create namespace`][kubectl-create] command.
+
+ ```bash
kubectl create namespace gpu-resources ```
-3. Create a file named *nvidia-device-plugin-ds.yaml* and paste the following YAML manifest provided as part of the [NVIDIA device plugin for Kubernetes project][nvidia-github]:
+2. Create a file named *nvidia-device-plugin-ds.yaml* and paste the following YAML manifest provided as part of the [NVIDIA device plugin for Kubernetes project][nvidia-github]:
```yaml apiVersion: apps/v1
You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on eac
path: /var/lib/kubelet/device-plugins ```
-4. Create the DaemonSet and confirm the NVIDIA device plugin is created successfully using the [`kubectl apply`][kubectl-apply] command.
+3. Create the DaemonSet and confirm the NVIDIA device plugin is created successfully using the [`kubectl apply`][kubectl-apply] command.
- ```console
+ ```bash
kubectl apply -f nvidia-device-plugin-ds.yaml ```
+4. Now that you successfully installed the NVIDIA device plugin, you can check that your [GPUs are schedulable](#confirm-that-gpus-are-schedulable) and [run a GPU workload](#run-a-gpu-enabled-workload).
+
+### Use NVIDIA GPU Operator with AKS
+
+The NVIDIA GPU Operator automates the management of all NVIDIA software components needed to provision GPUs, including driver installation, the [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin?tab=readme-ov-file), the NVIDIA container runtime, and more. Because the GPU Operator handles these components, it's not necessary to manually install the NVIDIA device plugin, and the automatic GPU driver installation on AKS is no longer required.
+
+1. Skip automatic GPU driver installation by creating a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--skip-gpu-driver-install` flag. Any existing nodes aren't changed. You can scale the node pool to zero and then back up to make the change take effect.
+
+2. Follow the NVIDIA documentation to [Install the GPU Operator](https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/install-gpu-ocp.html#install-nvidiagpu:~:text=NVIDIA%20GPU%20Operator-,Installing%20the%20NVIDIA%20GPU%20Operator,-%EF%83%81).
+
+3. Now that you successfully installed the GPU Operator, you can check that your [GPUs are schedulable](#confirm-that-gpus-are-schedulable) and [run a GPU workload](#run-a-gpu-enabled-workload).
+
+> [!WARNING]
+> We don't recommend manually installing the NVIDIA device plugin daemon set with clusters using the AKS GPU image.
+
+### Use the AKS GPU image (preview)
+
+AKS provides a fully configured AKS image containing the [NVIDIA device plugin for Kubernetes][nvidia-github]. The AKS GPU image is currently only supported for Ubuntu 18.04.
++
+1. Install the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] command.
+
+ ```azurecli-interactive
+ az extension add --name aks-preview
+ ```
+
+2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command.
+
+ ```azurecli-interactive
+ az extension update --name aks-preview
+ ```
+
+3. Register the `GPUDedicatedVHDPreview` feature flag using the [`az feature register`][az-feature-register] command.
+
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
+ ```
+
+ It takes a few minutes for the status to show *Registered*.
+
+4. Verify the registration status using the [`az feature show`][az-feature-show] command.
+
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
+ ```
+
+5. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
+
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
+
+ Now that you updated your cluster to use the AKS GPU image, you can add a node pool for GPU nodes to your cluster.
+
+6. Add a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command.
+
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name gpunp \
+ --node-count 1 \
+ --node-vm-size Standard_NC6s_v3 \
+ --node-taints sku=gpu:NoSchedule \
+ --aks-custom-headers UseGPUDedicatedVHD=true \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3
+ ```
+
+ The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
+
+ * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*.
+ * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool.
+ * `--aks-custom-headers`: Specifies a specialized AKS GPU image, *UseGPUDedicatedVHD=true*. If your GPU sku requires generation 2 VMs, use *--aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true* instead.
+ * `--enable-cluster-autoscaler`: Enables the cluster autoscaler.
+ * `--min-count`: Configures the cluster autoscaler to maintain a minimum of one node in the node pool.
+ * `--max-count`: Configures the cluster autoscaler to maintain a maximum of three nodes in the node pool.
+
+ > [!NOTE]
+ > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time.
+
+7. Now that you successfully created a node pool using the GPU image, you can check that your [GPUs are schedulable](#confirm-that-gpus-are-schedulable) and [run a GPU workload](#run-a-gpu-enabled-workload).
+ ## Confirm that GPUs are schedulable After creating your cluster, confirm that GPUs are schedulable in Kubernetes.
aks Ha Dr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ha-dr-overview.md
+
+ Title: High availability and disaster recovery overview for Azure Kubernetes Service (AKS)
+description: Learn about the high availability and disaster recovery options for Azure Kubernetes Service (AKS) clusters.
++++ Last updated : 01/30/2024++
+# High availability and disaster recovery overview for Azure Kubernetes Service (AKS)
+
+When creating and managing applications in the cloud, there's always a risk of disruption from outages and disasters. To ensure business continuity (BC), you need to plan for high availability (HA) and disaster recovery (DR).
+
+HA refers to the design and implementation of a system or service that's highly reliable and experiences minimal downtime. HA is a combination of tools, technologies, and processes that ensure a system or service is available to perform its intended function. HA is a critical component of DR planning. DR is the process of recovering from a disaster and restoring business operations to a normal state. DR is a subset of BC, which is the process of maintaining business functions or quickly resuming them in the event of a major disruption.
+
+This article covers some recommended practices for applications deployed to AKS, but is by no means meant as an exhaustive list of possible solutions.
+
+## Technology overview
+
+A Kubernetes cluster is divided into two components:
+
+- The **control plane**, which provides the core Kubernetes services and orchestration of application workloads, and
+- The **nodes**, which run your application workloads.
+
+![Diagram of Kubernetes control plane and node components.](media/concepts-clusters-workloads/control-plane-and-nodes.png)
+
+When you create an AKS cluster, the Azure platform automatically creates and configures a control plane. AKS offers two pricing tiers for cluster management: the **Free tier** and the **Standard tier**. For more information, see [Free and Standard pricing tiers for AKS cluster management](./free-standard-pricing-tiers.md).
+
+The control plane and its resources reside only in the region where you created the cluster. AKS provides a single-tenant control plane with a dedicated API server, scheduler, etc. You define the number and size of the nodes, and the Azure platform configures the secure communication between the control plane and nodes. Interaction with the control plane occurs through Kubernetes APIs, such as `kubectl` or the Kubernetes dashboard.
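+
+For example, after a cluster exists, you typically retrieve credentials and talk to the control plane through `kubectl` (resource names are placeholders):
+
+```azurecli-interactive
+# Merge the cluster's credentials into your local kubeconfig.
+az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+
+# This call goes to the cluster's API server, which is part of the managed control plane.
+kubectl get nodes
+```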
+
+To run your applications and supporting services, you need a Kubernetes *node*. An AKS cluster has at least one node, an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime. The Azure VM size for your nodes defines CPUs, memory, size, and the storage type available (such as high-performance SSD or regular HDD). Plan the VM and storage size around whether your applications may require large amounts of CPU and memory or high-performance storage. In AKS, the VM image for your cluster's nodes is based on Ubuntu Linux, [Azure Linux](./use-azure-linux.md), or Windows Server 2022. When you create an AKS cluster or scale out the number of nodes, the Azure platform automatically creates and configures the requested number of VMs.
+
+For more information on cluster and workload components in AKS, see [Kubernetes core concepts for AKS](./concepts-clusters-workloads.md).
+
+## Important considerations
+
+### Regional and global resources
+
+**Regional resources** are provisioned as part of a *deployment stamp* to a single Azure region. These resources share nothing with resources in other regions, and they can be independently removed or replicated to other regions. For more information, see [Regional resources](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-intro#regional-resources).
+
+**Global resources** share the lifetime of the system, and they can be globally available within the context of a multi-region deployment. For more information, see [Global resources](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-intro#global-resources).
+
+### Recovery objectives
+
+A complete disaster recovery plan must specify business requirements for each process the application implements:
+
+- **Recovery Point Objective (RPO)** is the maximum duration of acceptable data loss. RPO is measured in units of time, such as minutes, hours, or days.
+- **Recovery Time Objective (RTO)** is the maximum duration of acceptable downtime, with *downtime* defined by your specification. For example, if the acceptable downtime duration in a disaster is *eight hours*, then the RTO is eight hours.
+
+### Availability zones
+
+You can use availability zones to spread your data across multiple zones in the same region. Within a region, availability zones are close enough to have low-latency connections to other availability zones, but they're far enough apart to reduce the likelihood that more than one will be affected by local outages or weather. For more information, see [Recommendations for using availability zones and regions](/azure/well-architected/reliability/regions-availability-zones).
+
+### Zonal resilience
+
+AKS clusters are resilient to zonal failures. If a zone fails, the cluster continues to run in the remaining zones. The cluster's control plane and nodes are spread across the zones, and the Azure platform automatically handles the distribution of the nodes. For more information, see [AKS zonal resilience](./availability-zones.md).
+
+### Load balancing
+
+#### Global load balancing
+
+Global load balancing services distribute traffic across regional backends, clouds, or hybrid on-premises services. These services route end-user traffic to the closest available backend. They also react to changes in service reliability or performance to maximize availability and performance. The following Azure services provide global load balancing:
+
+- [Azure Front Door](../frontdoor/front-door-overview.md)
+- [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md)
+- [Cross-region Azure Load Balancer](../load-balancer/cross-region-overview.md)
+- [Azure Kubernetes Fleet Manager](../kubernetes-fleet/overview.md)
+
+#### Regional load balancing
+
+Regional load balancing services distribute traffic within virtual networks across VMs or zonal and zone-redundant service endpoints within a region. The following Azure services provide regional load balancing:
+
+- [Azure Load Balancer](../load-balancer/load-balancer-overview.md)
+- [Azure Application Gateway](../application-gateway/overview.md)
+- [Azure Application Gateway for Containers](../application-gateway/for-containers/overview.md)
+
+### Observability
+
+You need to collect data from applications and infrastructure to allow for effective operations and maximized reliability. Azure provides tools to help you monitor and manage your AKS workloads. For more information, see [Observability resources](/azure/architecture/reference-architectures/containers/aks-mission-critical/mission-critical-intro#observability-resources).
+
+## Scope definition
+
+Application uptime becomes important as you manage AKS clusters. By default, AKS provides high availability by using multiple nodes in a [Virtual Machine Scale Set](../virtual-machine-scale-sets/overview.md), but these nodes don't protect your system from a region failure. To maximize your uptime, plan ahead to maintain business continuity and prepare for disaster recovery using the following best practices:
+
+- Plan for AKS clusters in multiple regions.
+- Route traffic across multiple clusters using Azure Traffic Manager.
+- Use geo-replication for your container image registries.
+- Plan for application state across multiple clusters.
+- Replicate storage across multiple regions.
+
+### Deployment model implementations
+
+|Deployment model|Pros|Cons|
+|-|-|-|
+|[Active-active](#active-active-high-availability-deployment-model)|• No data loss or inconsistency during failover <br> • High resiliency <br> • Better utilization of resources with higher performance|• Complex implementation and management <br> • Higher cost <br> • Requires a load balancer and a form of traffic routing|
+|[Active-passive](#active-passive-disaster-recovery-deployment-model)|• Simpler implementation and management <br> • Lower cost <br> • Doesn't require a load balancer or traffic manager|• Potential for data loss or inconsistency during failover <br> • Longer recovery time and downtime <br> • Underutilization of resources|
+|[Passive-cold](#passive-cold-failover-deployment-model)|• Lowest cost <br> • Doesn't require synchronization, replication, load balancer, or traffic manager <br> • Suitable for low-priority, non-critical workloads|• High risk of data loss or inconsistency during failover <br> • Longest recovery time and downtime <br> • Requires manual intervention to activate cluster and trigger backup|
+
+#### Active-active high availability deployment model
+
+In the active-active high availability (HA) deployment model, you have two independent AKS clusters deployed in two different Azure regions (typically paired regions, such as Canada Central and Canada East or US East 2 and US Central) that actively serve traffic.
+
+With this example architecture:
+
+- You deploy two AKS clusters in separate Azure regions.
+- During normal operations, network traffic routes between both regions. If one region becomes unavailable, traffic automatically routes to a region closest to the user who issued the request.
+- There's a deployed hub-spoke pair for each regional AKS instance. Azure Firewall Manager policies manage the firewall rules across the regions.
+- Azure Key Vault is provisioned in each region to store secrets and keys.
+- Azure Front Door load balances and routes traffic to a regional Azure Application Gateway instance, which sits in front of each AKS cluster.
+- Regional Log Analytics instances store regional networking metrics and diagnostic logs.
+- The container images for the workload are stored in a managed container registry. A single Azure Container Registry is used for all Kubernetes instances in the cluster. Geo-replication for Azure Container Registry enables replicating images to the selected Azure regions and provides continued access to images, even if a region experiences an outage.
+
+To create an active-active deployment model in AKS, you perform the following steps:
+
+1. Create two identical deployments in two different Azure regions.
+2. Create two instances of your web app.
+3. Create an Azure Front Door profile with the following resources:
+
+ - An endpoint.
+ - Two origin groups, each with a priority of *one*.
+ - A route.
+
+4. Limit network traffic to the web apps only from the Azure Front Door instance.
+5. Configure all other backend Azure services, such as databases, storage accounts, and authentication providers.
+6. Deploy code to both web apps with continuous deployment.
+
+For more information, see the [**Recommended active-active high availability solution overview for AKS**](./active-active-solution.md).
+
+#### Active-passive disaster recovery deployment model
+
+In the active-passive disaster recovery (DR) deployment model, you have two independent AKS clusters deployed in two different Azure regions (typically paired regions, such as Canada Central and Canada East or US East 2 and US Central). Only one of the clusters actively serves traffic at any given time. The other cluster contains the same configuration and application data as the active cluster, but doesn't accept traffic unless directed by a traffic manager.
+
+With this example architecture:
+
+- You deploy two AKS clusters in separate Azure regions.
+- During normal operations, network traffic routes to the primary AKS cluster, which you set in the Azure Front Door configuration.
+ - Priority needs to be set between *1-5* with 1 being the highest and 5 being the lowest.
+ - You can set multiple clusters to the same priority level and can specify the weight of each.
+- If the primary cluster becomes unavailable (disaster occurs), traffic automatically routes to the next region selected in the Azure Front Door.
+ - All traffic must go through the Azure Front Door traffic manager for this system to work.
+- Azure Front Door routes traffic to the Azure App Gateway in the primary region (cluster must be marked with priority 1). If this region fails, the service redirects traffic to the next cluster in the priority list.
+ - Rules come from Azure Front Door.
+- A hub-spoke pair is deployed for each regional AKS instance. Azure Firewall Manager policies manage the firewall rules across the regions.
+- Azure Key Vault is provisioned in each region to store secrets and keys.
+- Regional Log Analytics instances store regional networking metrics and diagnostic logs.
+- The container images for the workload are stored in a managed container registry. A single Azure Container Registry is used for all Kubernetes instances in the cluster. Geo-replication for Azure Container Registry enables replicating images to the selected Azure regions and provides continued access to images, even if a region experiences an outage.
+
+To create an active-passive deployment model in AKS, you perform the following steps:
+
+1. Create two identical deployments in two different Azure regions.
+2. Configure autoscaling rules for the secondary application so it scales to the same instance count as the primary when the primary region becomes inactive. While inactive, it doesn't need to be scaled up. This helps reduce costs.
+3. Create two instances of your web application, with one on each cluster.
+4. Create an Azure Front Door profile with the following resources:
+
+ - An endpoint.
+ - An origin group with a priority of *one* for the primary region.
+ - A second origin group with a priority of *two* for the secondary region.
+ - A route.
+
+5. Limit network traffic to the web applications from only the Azure Front Door instance.
+6. Configure all other backend Azure services, such as databases, storage accounts, and authentication providers.
+7. Deploy code to both the web applications with continuous deployment.
+
+For more information, see the [**Recommended active-passive disaster recovery solution overview for AKS**](./active-passive-solution.md).
+
+#### Passive-cold failover deployment model
+
+The passive-cold failover deployment model is configured in the same way as the [active-passive disaster recovery deployment model](#active-passive-disaster-recovery-deployment-model), except the clusters remain inactive until a user activates them in the event of a disaster. We consider this approach *out-of-scope* because it involves a similar configuration to active-passive, but with the added complexity of manual intervention to activate the cluster and trigger a backup.
+
+With this example architecture:
+
+- You create two AKS clusters, preferably in different regions or zones for better resiliency.
+- When you need to fail over, you activate the deployment to take over the traffic flow.
+- If the primary cluster goes down, you need to manually activate the cold cluster to take over the traffic flow.
+- You define the condition that activates the cold cluster, either through manual intervention each time or a specific event that you specify.
+- Azure Key Vault is provisioned in each region to store secrets and keys.
+- Regional Log Analytics instances store regional networking metrics and diagnostic logs for each cluster.
+
+To create a passive-cold failover deployment model in AKS, you perform the following steps:
+
+1. Create two identical deployments in different zones/regions.
+2. Configure autoscaling rules for the secondary application so it scales to the same instance count as the primary when the primary region becomes inactive. While inactive, it doesn't need to be scaled up, which helps reduce costs.
+3. Create two instances of your web application, with one on each cluster.
+4. Configure all other backend Azure services, such as databases, storage accounts, and authentication providers.
+5. Set a condition that determines when the cold cluster should be triggered. You can use a load balancer if needed.
+
+For more information, see the [**Recommended passive-cold failover solution overview for AKS**](./passive-cold-solution.md).
+
+## Service quotas and limits
+
+AKS sets default limits and quotas for resources and features, including usage restrictions for certain VM SKUs.
++
+For more information, see [AKS service quotas and limits](./quotas-skus-regions.md#service-quotas-and-limits).
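+
+When sizing a secondary region for failover, it can help to confirm that it has enough compute quota to absorb the full workload; for example (the region is a placeholder):
+
+```azurecli-interactive
+# Compare current vCPU usage against the subscription's quota limits in the secondary region.
+az vm list-usage --location canadaeast --output table
+```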
+
+## Backup
+
+Azure Backup supports backing up AKS cluster resources and persistent volumes attached to the cluster using a backup extension. The Backup vault communicates with the AKS cluster through the extension to perform backup and restore operations.
+
+For more information, see the following articles:
+
+- [About AKS backup using Azure Backup (preview)](../backup/azure-kubernetes-service-backup-overview.md)
+- [Back up AKS using Azure Backup (preview)](../backup/azure-kubernetes-service-cluster-backup.md)
aks Istio Plugin Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-plugin-ca.md
Title: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview) description: Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview) + Last updated 12/04/2023- # Plug in CA certificates for Istio-based service mesh add-on on Azure Kubernetes Service (preview)
You may need to periodically rotate the certificate authorities for security or
[az-provider-register]: /cli/azure/provider#az-provider-register [az-aks-mesh-disable]: /cli/azure/aks/mesh#az-aks-mesh-disable [istio-generate-certs]: https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/#plug-in-certificates-and-key-into-the-cluster
-[istio-mtls-reference]: https://istio.io/latest/docs/concepts/security/#mutual-tls-authentication
+[istio-mtls-reference]: https://istio.io/latest/docs/concepts/security/#mutual-tls-authentication
aks Kubelogin Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubelogin-authentication.md
Title: Use kubelogin to authenticate in Azure Kubernetes Service description: Learn how to use the kubelogin plugin for all Microsoft Entra authentication methods in Azure Kubernetes Service (AKS). -+ Last updated 11/28/2023
aks Monitor Control Plane Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-control-plane-metrics.md
+
+ Title: Monitor Azure Kubernetes Service control plane metrics (preview)
+description: Learn how to collect metrics from the Azure Kubernetes Service (AKS) control plane and view the telemetry in Azure Monitor.
++++ Last updated : 01/31/2024++
+#CustomerIntent: As a platform engineer, I want to collect metrics from the control plane and monitor them for any potential issues
++
+# Monitor Azure Kubernetes Service (AKS) control plane metrics (preview)
+
+The Azure Kubernetes Service (AKS) [control plane](concepts-clusters-workloads.md#control-plane) health is critical for the performance and reliability of the cluster. Control plane metrics (preview) provide more visibility into its availability and performance, allowing you to maximize overall observability and maintain operational excellence. These metrics are fully compatible with Prometheus and Grafana, and can be customized to only store what you consider necessary. With these new metrics, you can collect all metrics from the API server, etcd, the scheduler, the cluster autoscaler, and the controller manager.
+
+This article helps you understand this new feature, how to implement it, and how to observe the telemetry collected.
+
+## Prerequisites and limitations
+
+- Only supports [Azure Monitor managed service for Prometheus][managed-prometheus-overview].
+- [Private link](../azure-monitor/logs/private-link-security.md) isn't supported.
+- Only the default [ama-metrics-settings-config-map](../azure-monitor/containers/prometheus-metrics-scrape-configuration.md#configmaps) can be customized. All other customizations are not supported.
+- The cluster must use [managed identity authentication](use-managed-identity.md).
+- This feature is currently available in the following regions: West US 2, East Asia, UK South, East US, Australia Central, Australia East, Brazil South, Canada Central, Central India, East US 2, France Central, and Germany West Central.
+
+### Install or update the `aks-preview` Azure CLI extension
++
+Install the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] command.
+
+```azurecli-interactive
+az extension add --name aks-preview
+```
+
+If you need to update the extension version, you can do this using the [`az extension update`][az-extension-update] command.
+
+```azurecli-interactive
+az extension update --name aks-preview
+```
+
+### Register the 'AzureMonitorMetricsControlPlanePreview' feature flag
+
+Register the `AzureMonitorMetricsControlPlanePreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace "Microsoft.ContainerService"
+```
+
+## Enable control plane metrics on your AKS cluster
+
+You can enable control plane metrics with the Azure Monitor managed service for Prometheus add-on during cluster creation or for an existing cluster. To collect Prometheus metrics from your Kubernetes cluster, see [Enable Prometheus and Grafana for Kubernetes clusters][enable-monitoring-kubernetes-cluster] and follow the steps on the **CLI** tab for an AKS cluster. On the command-line, be sure to include the parameters `--generate-ssh-keys` and `--enable-managed-identity`.
+
+>[!NOTE]
+> Unlike the metrics collected from cluster nodes, control plane metrics are collected by a component that isn't part of the **ama-metrics** add-on. Enabling the `AzureMonitorMetricsControlPlanePreview` feature flag and the managed Prometheus add-on ensures control plane metrics are collected. After enabling metric collection, it can take several minutes for the data to appear in the workspace.
+
+## Querying control plane metrics
+
+Control plane metrics are stored in an Azure Monitor workspace in the cluster's region. They can be queried directly from the workspace or through the Azure Managed Grafana instance connected to the workspace. To find the Azure Monitor workspace associated with the cluster, from the left-hand pane of your selected AKS cluster, navigate to the **Monitoring** section and select **Insights**. On the Container Insights page for the cluster, select **Monitor Settings**.
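+
+If you prefer the CLI, the following sketch checks that the metrics add-on is enabled on the cluster. The resource names are hypothetical, and this assumes your CLI version returns the `azureMonitorProfile` property for the cluster.
+
+```azurecli-interactive
+# Confirm that the Azure Monitor metrics (managed Prometheus) add-on is enabled on the cluster
+az aks show --resource-group myResourceGroup --name myAKSCluster --query "azureMonitorProfile.metrics" --output json
+```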
++
+If you're using Azure Managed Grafana to visualize the data, you can import the following dashboards. AKS provides dashboard templates to help you view and analyze your control plane telemetry data in real-time.
+
+* [API server][grafana-dashboard-template-api-server]
+* [ETCD][grafana-dashboard-template-etcd]
+
+## Customize control plane metrics
+
+By default, AKS includes a preconfigured set of metrics to collect and store for each component. `API server` and `etcd` are enabled by default. This list can be customized through the [ama-metrics-settings-configmap][ama-metrics-settings-configmap]. The list of `minimal-ingestion` profile metrics is available [here][list-of-default-metrics-aks-control-plane].
+
+The following lists the default targets:
+
+```yaml
+controlplane-apiserver = true
+controlplane-cluster-autoscaler = false
+controlplane-kube-scheduler = false
+controlplane-kube-controller-manager = false
+controlplane-etcd = true
+```
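+
+If a settings ConfigMap has already been applied to the cluster, one way to review the values currently in effect is to read it back from the `kube-system` namespace. This is a sketch that assumes the default ConfigMap name; the command returns an error if no settings ConfigMap exists yet.
+
+```bash
+# Inspect the metrics settings ConfigMap currently applied to the cluster (if one exists)
+kubectl get configmap ama-metrics-settings-configmap --namespace kube-system --output yaml
+```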
+
+The various options are similar to those for Azure Managed Prometheus, listed [here][prometheus-metrics-scrape-configuration-minimal].
+
+All ConfigMaps should be applied to the `kube-system` namespace of the cluster.
+
+### Ingest only minimal metrics for the default targets
+
+This is the default behavior with the setting `default-targets-metrics-keep-list.minimalIngestionProfile="true"`. Only metrics listed later in this article are ingested for each of the default targets, which in this case are `controlplane-apiserver` and `controlplane-etcd`.
+
+### Ingest all metrics from all targets
+
+Perform the following steps to collect all metrics from all targets on the cluster.
+
+1. Download the ConfigMap file [ama-metrics-settings-configmap.yaml][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.
+
+1. Set `minimalingestionprofile = false` and verify that the targets you want to scrape under `default-scrape-settings-enabled` are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`.
+
+1. Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f configmap-controlplane.yaml
+ ```
+
+    After the configuration is applied, it takes several minutes before the metrics scraped from the specified control plane targets appear in the Azure Monitor workspace.
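+
+To confirm that the add-on picked up the new configuration, you can check the metrics pods in `kube-system`. As a rough check, assuming the default **ama-metrics** naming, the pods typically restart shortly after the ConfigMap changes:
+
+```bash
+# List the metrics add-on pods; a recent restart indicates the new settings were picked up
+kubectl get pods --namespace kube-system | grep ama-metrics
+```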
+
+### Ingest a few other metrics in addition to minimal metrics
+
+The minimal ingestion profile is a setting that helps reduce the ingestion volume of metrics, because only the metrics used by default dashboards, default recording rules, and default alerts are collected. Perform the following steps to customize this behavior.
+
+1. Download the ConfigMap file [ama-metrics-settings-configmap][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.
+
+1. Set `minimalingestionprofile = true` and verify that the targets you want to scrape under `default-scrape-settings-enabled` are set to `true`. The only targets you can specify are: `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`.
+
+1. Under the `default-targets-metrics-keep-list`, specify the list of metrics for the `true` targets. For example,
+
+ ```yaml
+ controlplane-apiserver= "apiserver_admission_webhook_admission_duration_seconds| apiserver_longrunning_requests"
+ ```
+
+1. Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f configmap-controlplane.yaml
+ ```
+
+    After the configuration is applied, it takes several minutes before the metrics scraped from the specified control plane targets appear in the Azure Monitor workspace.
+
+### Ingest only specific metrics from some targets
+
+1. Download the ConfigMap file [ama-metrics-settings-configmap][ama-metrics-settings-configmap] and rename it to `configmap-controlplane.yaml`.
+
+1. Set `minimalingestionprofile = false` and verify that the targets you want to scrape under `default-scrape-settings-enabled` are set to `true`. The only targets you can specify here are `controlplane-apiserver`, `controlplane-cluster-autoscaler`, `controlplane-kube-scheduler`, `controlplane-kube-controller-manager`, and `controlplane-etcd`.
+
+1. Under the `default-targets-metrics-keep-list`, specify the list of metrics for the `true` targets. For example,
+
+ ```yaml
+ controlplane-apiserver= "apiserver_admission_webhook_admission_duration_seconds| apiserver_longrunning_requests"
+ ```
+
+1. Apply the ConfigMap by running the [kubectl apply][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f configmap-controlplane.yaml
+ ```
+
+    After the configuration is applied, it takes several minutes before the metrics scraped from the specified control plane targets appear in the Azure Monitor workspace.
+
+## Troubleshoot control plane metrics issues
+
+Make sure the `AzureMonitorMetricsControlPlanePreview` feature flag is registered and the `ama-metrics` pods are running.
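+
+As a quick check of the feature flag, the following sketch prints just the registration state:
+
+```azurecli-interactive
+# Print the registration state of the control plane metrics preview feature
+az feature show --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview" --query "properties.state" --output tsv
+```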
+
+> [!NOTE]
+> The [troubleshooting methods][prometheus-troubleshooting] for Azure Monitor managed service for Prometheus don't translate directly here, because the components that scrape the control plane aren't part of the managed Prometheus add-on.
+
+### ConfigMap formatting or errors
+
+Double check the formatting of the ConfigMap and confirm that the fields are populated with the intended values, specifically `default-targets-metrics-keep-list`, `minimal-ingestion-profile`, and `default-scrape-settings-enabled`.
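+
+Before applying an edited ConfigMap, you can ask the API server to validate it without persisting any changes. This sketch assumes the file name used earlier in this article; it catches YAML and schema problems, but not misspelled metric names, so still review the keys listed above.
+
+```bash
+# Validate the edited ConfigMap against the API server without applying it
+kubectl apply --dry-run=server -f configmap-controlplane.yaml
+```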
+
+### Isolate control plane issues from data plane issues
+
+Start by setting some of the [node-related metrics][node-metrics] to `true` and verify that the metrics are being forwarded to the workspace. This helps determine whether the issue is specific to scraping control plane metrics.
+
+### Events ingested
+
+Once you've applied the changes, you can open metrics explorer from the **Azure Monitor overview** page or from the **Monitoring** section of the selected cluster. In the Azure portal, select **Metrics** and check for an increase or decrease in the number of events ingested per minute. This should help you determine whether a specific metric is missing or all metrics are missing.
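+
+If you prefer the CLI, you can list the platform metrics exposed by the Azure Monitor workspace and then query the ingestion-related metric you find in the output. The resource ID below is a hypothetical placeholder, and this assumes the workspace exposes platform metric definitions.
+
+```azurecli-interactive
+# Discover the platform metric names exposed by the Azure Monitor workspace
+az monitor metrics list-definitions --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Monitor/accounts/<workspace-name>" --output table
+```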
+
+### Specific metric is not exposed
+
+In some cases, metrics are documented but aren't exposed from the target, so they aren't forwarded to the Azure Monitor workspace. In this case, verify that other metrics are being forwarded to the workspace.
+
+### No access to the Azure Monitor workspace
+
+When you enable the add-on, you might have specified an existing workspace that you don't have access to. In that case, it might look like the metrics are not being collected and forwarded. Make sure that you create a new workspace while enabling the add-on or while creating the cluster.
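+
+To check whether your account has any role assignment on the workspace you selected, a sketch with a hypothetical workspace resource ID:
+
+```azurecli-interactive
+# List role assignments on the Azure Monitor workspace to confirm you have access
+az role assignment list --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Monitor/accounts/<workspace-name>" --output table
+```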
+
+## Disable control plane metrics on your AKS cluster
+
+You can disable control plane metrics at any time, by either disabling the feature flag, disabling managed Prometheus, or by deleting the AKS cluster.
+
+> [!NOTE]
+> This action doesn't remove any existing data stored in your Azure Monitor workspace.
+
+Run the following command to remove the metrics add-on that scrapes Prometheus metrics.
+
+```azurecli-interactive
+az aks update --disable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group>
+```
+
+Run the following command to disable scraping of control plane metrics on the AKS cluster by unregistering the `AzureMonitorMetricsControlPlanePreview` feature flag using the [az feature unregister][az-feature-unregister] command.
+
+```azurecli-interactive
+az feature unregister --namespace "Microsoft.ContainerService" --name "AzureMonitorMetricsControlPlanePreview"
+```
+
+## Next steps
+
+After evaluating this preview feature, [share your feedback][share-feedback]. We're interested in hearing what you think.
+
+- Learn more about the [list of default metrics for AKS control plane][list-of-default-metrics-aks-control-plane].
+
+<!-- EXTERNAL LINKS -->
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[ama-metrics-settings-configmap]: https://github.com/Azure/prometheus-collector/blob/89e865a73601c0798410016e9beb323f1ecba335/otelcollector/configmaps/ama-metrics-settings-configmap.yaml
+[share-feedback]: https://forms.office.com/r/Mq4hdZ1W7W
+[grafana-dashboard-template-api-server]: https://grafana.com/grafana/dashboards/20331-kubernetes-api-server/
+[grafana-dashboard-template-etcd]: https://grafana.com/grafana/dashboards/20330-kubernetes-etcd/
+
+<!-- INTERNAL LINKS -->
+[managed-prometheus-overview]: ../azure-monitor/essentials/prometheus-metrics-overview.md
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
+[enable-monitoring-kubernetes-cluster]: ../azure-monitor/containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana
+[prometheus-metrics-scrape-configuration-minimal]: ../azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md#scenarios
+[prometheus-troubleshooting]: ../azure-monitor/containers/prometheus-metrics-troubleshoot.md
+[node-metrics]: ../azure-monitor/containers/prometheus-metrics-scrape-default.md
+[list-of-default-metrics-aks-control-plane]: control-plane-metrics-default-list.md
+[az-feature-unregister]: /cli/azure/feature#az-feature-unregister
+[release-tracker]: https://releases.aks.azure.com/#tabversion
aks Operator Best Practices Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-multi-region.md
- Title: Best practices for business continuity and disaster recovery in Azure Kubernetes Service (AKS)
-description: Best practices for a cluster operator to achieve maximum uptime for your applications and to provide high availability and prepare for disaster recovery in Azure Kubernetes Service (AKS).
- Previously updated : 03/08/2023-
-#Customer intent: As an AKS cluster operator, I want to plan for business continuity or disaster recovery to help protect my cluster from region problems.
--
-# Best practices for business continuity and disaster recovery in Azure Kubernetes Service (AKS)
-
-As you manage clusters in Azure Kubernetes Service (AKS), application uptime becomes important. By default, AKS provides high availability by using multiple nodes in a [Virtual Machine Scale Set (VMSS)](../virtual-machine-scale-sets/overview.md). But these multiple nodes don't protect your system from a region failure. To maximize your uptime, plan ahead to maintain business continuity and prepare for disaster recovery.
-
-This article focuses on how to plan for business continuity and disaster recovery in AKS. You learn how to:
-
-> [!div class="checklist"]
-
-> * Plan for AKS clusters in multiple regions.
-> * Route traffic across multiple clusters using Azure Traffic Manager.
-> * Use geo-replication for your container image registries.
-> * Plan for application state across multiple clusters.
-> * Replicate storage across multiple regions.
-
-## Plan for multiregion deployment
-
-> **Best practice**
->
-> When you deploy multiple AKS clusters, choose regions where AKS is available. Use paired regions.
-
-An AKS cluster is deployed into a single region. To protect your system from region failure, deploy your application into multiple AKS clusters across different regions. When planning where to deploy your AKS cluster, consider:
-
-* [**AKS region availability**](./quotas-skus-regions.md#region-availability)
- * Choose regions close to your users.
- * AKS continually expands into new regions.
-
-* [**Azure paired regions**](../availability-zones/cross-region-replication-azure.md)
- * For your geographic area, choose two regions paired together.
- * AKS platform updates (planned maintenance) are serialized with a delay of at least 24 hours between paired regions.
- * Recovery efforts for paired regions are prioritized where needed.
-
-* **Service availability**
- * Decide whether your paired regions should be hot/hot, hot/warm, or hot/cold.
- * Do you want to run both regions at the same time, with one region *ready* to start serving traffic? *or*
- * Do you want to give one region time to get ready to serve traffic?
-
-AKS region availability and paired regions are a joint consideration. Deploy your AKS clusters into paired regions designed to manage region disaster recovery together. For example, AKS is available in East US and West US. These regions are paired. Choose these two regions when you're creating an AKS BC/DR strategy.
-
-When you deploy your application, add another step to your CI/CD pipeline to deploy to these multiple AKS clusters. Updating your deployment pipelines prevents applications from deploying into only one of your regions and AKS clusters. In that scenario, customer traffic directed to a secondary region won't receive the latest code updates.
-
-## Use Azure Traffic Manager to route traffic
-
-> **Best practice**
->
-> For the best performance and redundancy, direct all application traffic through Traffic Manager before it goes to your AKS cluster.
-
-If you have multiple AKS clusters in different regions, use Traffic Manager to control traffic flow to the applications running in each cluster. [Azure Traffic Manager](../traffic-manager/index.yml) is a DNS-based traffic load balancer that can distribute network traffic across regions. Use Traffic Manager to route users based on cluster response time or based on priority.
-
-![AKS with Traffic Manager](media/operator-best-practices-bc-dr/aks-azure-traffic-manager.png)
-
-If you have a single AKS cluster, you typically connect to the service IP or DNS name of a given application. In a multi-cluster deployment, you should connect to a Traffic Manager DNS name that points to the services on each AKS cluster. Define these services by using Traffic Manager endpoints. Each endpoint is the *service load balancer IP*. Use this configuration to direct network traffic from the Traffic Manager endpoint in one region to the endpoint in a different region.
-
-Traffic Manager performs DNS lookups and returns your most appropriate endpoint. With priority routing you can enable a primary service endpoint and multiple backup endpoints in case the primary or one of the backup endpoints is unavailable.
-
-![Priority routing through Traffic Manager](media/operator-best-practices-bc-dr/traffic-manager-priority-routing.png)
-
-For information on how to set up endpoints and routing, see [Configure priority traffic routing method in Traffic Manager](../traffic-manager/traffic-manager-configure-priority-routing-method.md).
-
-### Application routing with Azure Front Door Service
-
-Using split TCP-based anycast protocol, [Azure Front Door Service](../frontdoor/front-door-overview.md) promptly connects your end users to the nearest Front Door POP (Point of Presence). More features of Azure Front Door Service:
-
-* TLS termination
-* Custom domain
-* Web application firewall
-* URL Rewrite
-* Session affinity
-
-Review the needs of your application traffic to understand which solution is the most suitable.
-
-### Interconnect regions with global virtual network peering
-
-Connect both virtual networks to each other through [virtual network peering](../virtual-network/virtual-network-peering-overview.md) to enable communication between clusters. Virtual network peering interconnects virtual networks, providing high bandwidth across Microsoft's backbone network - even across different geographic regions.
-
-Before peering virtual networks with running AKS clusters, use the standard Load Balancer in your AKS cluster. This prerequisite makes Kubernetes services reachable across the virtual network peering.
-
-## Enable geo-replication for container images
-
-> **Best practice**
->
-> Store your container images in Azure Container Registry and geo-replicate the registry to each AKS region.
-
-To deploy and run your applications in AKS, you need a way to store and pull the container images. Container Registry integrates with AKS, so it can securely store your container images or Helm charts. Container Registry supports multimaster geo-replication to automatically replicate your images to Azure regions around the world.
-
-To improve performance and availability, use Container Registry geo-replication to create a registry in each region where you have an AKS cluster. Each AKS cluster will then pull container images from the local container registry in the same region.
-
-![Container Registry geo-replication for container images](media/operator-best-practices-bc-dr/acr-geo-replication.png)
-
-Using Container Registry geo-replication to pull images from the same region has the following benefits:
-
-* **Faster**: Pull images from high-speed, low-latency network connections within the same Azure region.
-* **More reliable**: If a region is unavailable, your AKS cluster pulls the images from an available container registry.
-* **Cheaper**: No network egress charge between datacenters.
-
-Geo-replication is a *Premium* SKU container registry feature. For information on how to configure geo-replication, see [Container Registry geo-replication](../container-registry/container-registry-geo-replication.md).
-
-## Remove service state from inside containers
-
-> **Best practice**
->
-> Avoid storing service state inside the container. Instead, use an Azure platform as a service (PaaS) that supports multi-region replication.
-
-*Service state* refers to the in-memory or on-disk data required by a service to function. State includes the data structures and member variables that the service reads and writes. Depending on how the service is architected, the state might also include files or other resources stored on the disk. For example, the state might include the files a database uses to store data and transaction logs.
-
-State can be either externalized or co-located with the code that manipulates the state. Typically, you externalize state by using a database or other data store that runs on different machines over the network or that runs out of process on the same machine.
-
-Containers and microservices are most resilient when the processes that run inside them don't retain state. Since applications almost always contain some state, use a PaaS solution, such as:
-
-* Azure Cosmos DB
-* Azure Database for PostgreSQL
-* Azure Database for MySQL
-* Azure SQL Database
-
-To build portable applications, see the following guidelines:
-
-* [The 12-factor app methodology](https://12factor.net/)
-* [Run a web application in multiple Azure regions](/azure/architecture/reference-architectures/app-service-web-app/multi-region)
-
-## Create a storage migration plan
-
-> **Best practice**
->
-> If you use Azure Storage, prepare and test how to migrate your storage from the primary region to the backup region.
-
-Your applications might use Azure Storage for their data. If so, your applications are spread across multiple AKS clusters in different regions. You need to keep the storage synchronized. Here are two common ways to replicate storage:
-
-* Infrastructure-based asynchronous replication
-* Application-based asynchronous replication
-
-### Infrastructure-based asynchronous replication
-
-Your applications might require persistent storage even after a pod is deleted. In Kubernetes, you can use persistent volumes to persist data storage. Persistent volumes are mounted to a node VM and then exposed to the pods. Persistent volumes follow pods even if the pods are moved to a different node inside the same cluster.
-
-The replication strategy you use depends on your storage solution. The following common storage solutions provide their own guidance about disaster recovery and replication:
-
-* [Gluster](https://docs.gluster.org/en/latest/Administrator-Guide/Geo-Replication/)
-* [Ceph](https://docs.ceph.com/docs/master/cephfs/disaster-recovery/)
-* [Rook](https://rook.io/docs/rook/v1.2/ceph-disaster-recovery.html)
-* [Portworx](https://docs.portworx.com/portworx-enterprise/operations/operate-kubernetes/storage-operations/kubernetes-storage-101/volumes.html)
-
-Typically, you provide a common storage point where applications write their data. This data is then replicated across regions and accessed locally.
-
-![Infrastructure-based asynchronous replication](media/operator-best-practices-bc-dr/aks-infra-based-async-repl.png)
-
-If you use Azure Managed Disks, you can use [Velero on Azure][velero] and [Kasten][kasten] to handle replication and disaster recovery. These options are backup solutions native to, but not supported by, Kubernetes.
-
-### Application-based asynchronous replication
-
-Kubernetes currently provides no native implementation for application-based asynchronous replication. Since containers and Kubernetes are loosely coupled, any traditional application or language approach should work. Typically, the applications themselves replicate the storage requests, which are then written to each cluster's underlying data storage.
-
-![Application-based asynchronous replication](media/operator-best-practices-bc-dr/aks-app-based-async-repl.png)
-
-## Next steps
-
-This article focuses on business continuity and disaster recovery considerations for AKS clusters. For more information about cluster operations in AKS, see these articles about best practices:
-
-* [Multitenancy and cluster isolation][aks-best-practices-cluster-isolation]
-* [Basic Kubernetes scheduler features][aks-best-practices-scheduler]
-
-<!-- INTERNAL LINKS -->
-[aks-best-practices-scheduler]: operator-best-practices-scheduler.md
-[aks-best-practices-cluster-isolation]: operator-best-practices-cluster-isolation.md
-
-[velero]: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/README.md
-[kasten]: https://www.kasten.io/
aks Passive Cold Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/passive-cold-solution.md
+
+ Title: Passive-cold solution overview for Azure Kubernetes Service (AKS)
+description: Learn about a passive-cold disaster solution overview for Azure Kubernetes Service (AKS).
++++ Last updated : 01/30/2024++
+# Passive-cold solution overview for Azure Kubernetes Service (AKS)
+
+When you create an application in Azure Kubernetes Service (AKS) and choose an Azure region during resource creation, it's a single-region app. When the region becomes unavailable during a disaster, your application also becomes unavailable. If you create an identical deployment in a secondary Azure region, your application becomes less susceptible to a single-region disaster, which helps ensure business continuity, and any data replication across regions lets you recover your last application state.
+
+This guide outlines a passive-cold solution for AKS. Within this solution, we deploy two independent and identical AKS clusters into two paired Azure regions with only one cluster actively serving traffic when the application is needed.
+
+> [!NOTE]
+> The following practice has been reviewed internally and vetted in conjunction with our Microsoft partners.
+
+## Passive-cold solution overview
+
+In this approach, we have two independent AKS clusters deployed in two Azure regions. When the application is needed, we activate the passive cluster to receive traffic. If the passive cluster goes down, we must manually activate the cold cluster to take over the flow of traffic. We can set this condition through a manual input every time or by specifying a certain event.
+
+## Scenarios and configurations
+
+This solution is best implemented as a "use as needed" workload, which is useful for scenarios that require workloads to run at specific times of day or run on demand. Example use cases for a passive-cold approach include:
+
+- A manufacturing company that needs to run a complex and resource-intensive simulation on a large dataset. In this case, the passive cluster is located in a cloud region that offers high-performance computing and storage services. The passive cluster is only used when the simulation is triggered by the user or by a schedule. If the cluster doesn't work upon triggering, the cold cluster can be used as a backup and the workload can run on it instead.
+- A government agency that needs to maintain a backup of its critical systems and data in case of a cyber attack or natural disaster. In this case, the passive cluster is located in a secure and isolated location that's not accessible to the public.
+
+## Components
+
+The passive-cold disaster recovery solution uses many Azure services. This example architecture involves the following components:
+
+**Multiple clusters and regions**: You deploy multiple AKS clusters, each in a separate Azure region. When the app is needed, the passive cluster is activated to receive network traffic.
+
+**Key Vault**: You provision an [Azure Key Vault](../key-vault/general/overview.md) in each region to store secrets and keys.
+
+**Log Analytics**: Regional [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) instances store regional networking metrics and diagnostic logs. A shared instance stores metrics and diagnostic logs for all AKS instances.
+
+**Hub-spoke pair**: A hub-spoke pair is deployed for each regional AKS instance. [Azure Firewall Manager](../firewall-manager/overview.md) policies manage the firewall rules across each region.
+
+**Container Registry**: The container images for the workload are stored in a managed container registry. With this solution, a single [Azure Container Registry](../container-registry/container-registry-intro.md) instance is used for all Kubernetes instances in the cluster. Geo-replication for Azure Container Registry enables you to replicate images to the selected Azure regions and provides continued access to images even if a region experiences an outage.
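+
+If you manage the registry from the CLI, geo-replicas can be added per region. The following sketch assumes a Premium registry named `myregistry` (hypothetical) and a secondary region of `westus2`:
+
+```azurecli-interactive
+# Add a geo-replica of the container registry in a secondary region (Premium SKU required)
+az acr replication create --registry myregistry --location westus2
+```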
+
+## Failover process
+
+If the passive cluster isn't functioning properly because of an issue in its specific Azure region, you can activate the cold cluster and redirect all traffic to that cluster's region. You can use this process while the passive cluster is deactivated until it starts working again. The cold cluster can take a couple of minutes to come online, because it has been turned off and needs to complete the setup process. This approach isn't ideal for time-sensitive applications. In that case, we recommend considering an [active-active failover](./active-active-solution.md#failover-process).
+
+### Application Pods (Regional)
+
+A Kubernetes deployment object creates multiple replicas of a pod (*ReplicaSet*). If one is unavailable, traffic is routed between the remaining replicas. The Kubernetes *ReplicaSet* attempts to keep the specified number of replicas up and running. If one instance goes down, a new instance should be recreated. [Liveness probes](../container-instances/container-instances-liveness-probe.md) can check the state of the application or process running in the pod. If the pod is unresponsive, the liveness probe removes the pod, which forces the *ReplicaSet* to create a new instance.
+
+For more information, see [Kubernetes ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/).
+
+### Application Pods (Global)
+
+When an entire region becomes unavailable, the pods in the cluster are no longer available to serve requests. In this case, the Azure Front Door instance routes all traffic to the remaining healthy regions. The Kubernetes clusters and pods in these regions continue to serve requests. To compensate for increased traffic and requests to the remaining cluster, keep in mind the following guidance:
+
+- Make sure network and compute resources are right sized to absorb any sudden increase in traffic due to region failover. For example, when using Azure Container Network Interface (CNI), make sure you have a subnet that can support all pod IPs with a spiked traffic load.
+- Use the [Horizontal Pod Autoscaler](./concepts-scale.md#horizontal-pod-autoscaler) to increase the pod replica count to compensate for the increased regional demand.
+- Use the AKS [Cluster Autoscaler](./cluster-autoscaler.md) to increase the Kubernetes instance node counts to compensate for the increased regional demand, as shown in the sketch after this list.
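+
+As a minimal sketch of both scaling levers, assuming a hypothetical deployment named `my-app`, cluster `myAKSCluster`, resource group `myResourceGroup`, and node pool `nodepool1`:
+
+```azurecli-interactive
+# Scale pods: create a Horizontal Pod Autoscaler for the workload deployment
+kubectl autoscale deployment my-app --cpu-percent=70 --min=3 --max=15
+
+# Scale nodes: enable the cluster autoscaler on an existing node pool
+az aks nodepool update --resource-group myResourceGroup --cluster-name myAKSCluster --name nodepool1 --enable-cluster-autoscaler --min-count 3 --max-count 10
+```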
+
+### Kubernetes node pools (Regional)
+
+Occasionally, localized failures can affect compute resources, such as power becoming unavailable in a single rack of Azure servers. To protect your AKS nodes from becoming a single regional point of failure, use [Azure Availability Zones](./availability-zones.md). Availability zones ensure that AKS nodes in each availability zone are physically separated from those defined in other availability zones.
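+
+As a sketch, zone spreading is specified when a node pool is created; the names below are hypothetical:
+
+```azurecli-interactive
+# Create a node pool spread across three availability zones
+az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name zonedpool --node-count 3 --zones 1 2 3
+```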
+
+### Kubernetes node pools (Global)
+
+In a complete regional failure, Azure Front Door routes traffic to the remaining healthy regions. Again, make sure to compensate for increased traffic and requests to the remaining cluster.
+
+## Failover testing strategy
+
+While there are no mechanisms currently available within AKS to take down an entire region of deployment for testing purposes, [Azure Chaos Studio](../chaos-studio/chaos-studio-overview.md) offers the ability to create a chaos experiment on your cluster.
+
+## Next steps
+
+If you're considering a different solution, see the following articles:
+
+- [Active passive disaster recovery solution overview for Azure Kubernetes Service (AKS)](./active-passive-solution.md)
+- [Active active high availability solution overview for Azure Kubernetes Service (AKS)](./active-active-solution.md)
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
api-center Manage Apis Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/manage-apis-azure-cli.md
Title: Manage API inventory in Azure API Center - Azure CLI
description: Use the Azure CLI to create and update APIs, API versions, and API definitions in your Azure API center. + Last updated 01/12/2024
api-management Api Management Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-capacity.md
To follow the steps in this article, you must have:
+ API Management management plane services, such as management actions applied via the Azure portal or Azure Resource Manager, or load coming from the [developer portal](api-management-howto-developer-portal.md). + Selected operating system processes, including processes that involve cost of TLS handshakes on new connections. + Platform updates, such as OS updates on the underlying compute resources for the instance.++ Number of APIs deployed, regardless of activity, which can consume additional capacity. Total **capacity** is an average of its own values from every [unit](upgrade-and-scale.md) of an API Management instance.
Low **capacity metric** doesn't necessarily mean that your API Management instan
- [Upgrade and scale an Azure API Management service instance](upgrade-and-scale.md) - [Automatically scale an Azure API Management instance](api-management-howto-autoscale.md)-- [Plan and manage costs for API Management](plan-manage-costs.md)
+- [Plan and manage costs for API Management](plan-manage-costs.md)
api-management Api Management Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-kubernetes.md
Microservices are perfect for building APIs. With [Azure Kubernetes Service](htt
## Background
-When publishing microservices as APIs for consumption, it can be challenging to manage the communication between the microservices and the clients that consume them. There is a multitude of cross-cutting concerns such as authentication, authorization, throttling, caching, transformation, and monitoring. These concerns are valid regardless of whether the microservices are exposed to internal or external clients.
+When publishing microservices as APIs for consumption, it can be challenging to manage the communication between the microservices and the clients that consume them. There's a multitude of cross-cutting concerns such as authentication, authorization, throttling, caching, transformation, and monitoring. These concerns are valid regardless of whether the microservices are exposed to internal or external clients.
The [API Gateway](/dotnet/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern) pattern addresses these concerns. An API gateway serves as a front door to the microservices, decouples clients from your microservices, adds an additional layer of security, and decreases the complexity of your microservices by removing the burden of handling cross cutting concerns. [Azure API Management](https://aka.ms/apimrocks) is a turnkey solution to solve your API gateway needs. You can quickly create a consistent and modern gateway for your microservices and publish them as APIs. As a full-lifecycle API management solution, it also provides additional capabilities including a self-service developer portal for API discovery, API lifecycle management, and API analytics.
-When used together, AKS and API Management provide a platform for deploying, publishing, securing, monitoring, and managing your microservices-based APIs. In this article, we will go through a few options of deploying AKS in conjunction with API Management.
+When used together, AKS and API Management provide a platform for deploying, publishing, securing, monitoring, and managing your microservices-based APIs. In this article, we'll go through a few options of deploying AKS in conjunction with API Management.
## Kubernetes Services and APIs
-In a Kubernetes cluster, containers are deployed in [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/), which are ephemeral and have a lifecycle. When a worker node dies, the Pods running on the node are lost. Therefore, the IP address of a Pod can change anytime. We cannot rely on it to communicate with the pod.
+In a Kubernetes cluster, containers are deployed in [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/), which are ephemeral and have a lifecycle. When a worker node dies, the Pods running on the node are lost. Therefore, the IP address of a Pod can change anytime. We can't rely on it to communicate with the pod.
To solve this problem, Kubernetes introduced the concept of [Services](https://kubernetes.io/docs/concepts/services-networking/service/). A Kubernetes Service is an abstraction layer which defines a logic group of Pods and enables external traffic exposure, load balancing and service discovery for those Pods.
-When we are ready to publish our microservices as APIs through API Management, we need to think about how to map our Services in Kubernetes to APIs in API Management. There are no set rules. It depends on how you designed and partitioned your business capabilities or domains into microservices at the beginning. For instance, if the pods behind a Service are responsible for all operations on a given resource (e.g., Customer), the Service may be mapped to one API. If operations on a resource are partitioned into multiple microservices (e.g., GetOrder, PlaceOrder), then multiple Services may be logically aggregated into one single API in API management (See Fig. 1).
+When we are ready to publish our microservices as APIs through API Management, we need to think about how to map our Services in Kubernetes to APIs in API Management. There are no set rules. It depends on how you designed and partitioned your business capabilities or domains into microservices at the beginning. For instance, if the pods behind a Service are responsible for all operations on a given resource (for example, Customer), the Service may be mapped to one API. If operations on a resource are partitioned into multiple microservices (for example, GetOrder, PlaceOrder), then multiple Services may be logically aggregated into one single API in API management (See Fig. 1).
The mappings can also evolve. Since API Management creates a facade in front of the microservices, it allows us to refactor and right-size our microservices over time.
The mappings can also evolve. Since API Management creates a facade in front of
There are a few options of deploying API Management in front of an AKS cluster.
-While an AKS cluster is always deployed in a virtual network (VNet), an API Management instance is not required to be deployed in a VNet. When API Management does not reside within the cluster VNet, the AKS cluster has to publish public endpoints for API Management to connect to. In that case, there is a need to secure the connection between API Management and AKS. In other words, we need to ensure the cluster can only be accessed exclusively through API Management. LetΓÇÖs go through the options.
+While an AKS cluster is always deployed in a virtual network (VNet), an API Management instance isn't required to be deployed in a VNet. When API Management doesn't reside within the cluster VNet, the AKS cluster has to publish public endpoints for API Management to connect to. In that case, there's a need to secure the connection between API Management and AKS. In other words, we need to ensure the cluster can only be accessed exclusively through API Management. LetΓÇÖs go through the options.
### Option 1: Expose Services publicly
This might be the easiest option to deploy API Management in front of AKS, espec
![Publish services directly](./media/api-management-aks/direct.png) Pros:
-* Easy configuration on the API Management side because it does not need to be injected into the cluster VNet
+* Easy configuration on the API Management side because it doesn't need to be injected into the cluster VNet
* No change on the AKS side if Services are already exposed publicly and authentication logic already exists in microservices Cons:
Cons:
### Option 2: Install an Ingress Controller
-Although Option 1 might be easier, it has notable drawbacks as mentioned above. If an API Management instance does not reside in the cluster VNet, Mutual TLS authentication (mTLS) is a robust way of ensuring the traffic is secure and trusted in both directions between an API Management instance and an AKS cluster.
+Although Option 1 might be easier, it has notable drawbacks as mentioned above. If an API Management instance doesn't reside in the cluster VNet, Mutual TLS authentication (mTLS) is a robust way of ensuring the traffic is secure and trusted in both directions between an API Management instance and an AKS cluster.
Mutual TLS authentication is [natively supported](./api-management-howto-mutual-certificates.md) by API Management and can be enabled in Kubernetes by [installing an Ingress Controller](../aks/ingress-own-tls.md) (Fig. 3). As a result, authentication will be performed in the Ingress Controller, which simplifies the microservices. Additionally, you can add the IP addresses of API Management to the allowed list by Ingress to make sure only API Management has access to the cluster.
Mutual TLS authentication is [natively supported](./api-management-howto-mutual-
Pros:
-* Easy configuration on the API Management side because it does not need to be injected into the cluster VNet and mTLS is natively supported
+* Easy configuration on the API Management side because it doesn't need to be injected into the cluster VNet and mTLS is natively supported
* Centralizes protection for inbound cluster traffic at the Ingress Controller layer * Reduces security risk by minimizing publicly visible cluster endpoints
To get a subscription key for accessing APIs, a subscription is required. A subs
### Option 3: Deploy APIM inside the cluster VNet
-In some cases, customers with regulatory constraints or strict security requirements may find Option 1 and 2 not viable solutions due to publicly exposed endpoints. In others, the AKS cluster and the applications that consume the microservices might reside within the same VNet, hence there is no reason to expose the cluster publicly as all API traffic will remain within the VNet. For these scenarios, you can deploy API Management into the cluster VNet. [API Management Developer and Premium tiers](https://aka.ms/apimpricing) support VNet deployment.
+In some cases, customers with regulatory constraints or strict security requirements may find Option 1 and 2 not viable solutions due to publicly exposed endpoints. In others, the AKS cluster and the applications that consume the microservices might reside within the same VNet, hence there's no reason to expose the cluster publicly as all API traffic will remain within the VNet. For these scenarios, you can deploy API Management into the cluster VNet. [API Management Developer and Premium tiers](https://aka.ms/apimpricing) support VNet deployment.
-There are two modes of [deploying API Management into a VNet](./api-management-using-with-vnet.md) ΓÇô External and Internal.
+There are two modes of [deploying API Management into a VNet](./virtual-network-concepts.md) ΓÇô External and Internal.
If API consumers do not reside in the cluster VNet, the External mode (Fig. 4) should be used. In this mode, the API Management gateway is injected into the cluster VNet but accessible from public internet via an external load balancer. It helps to hide the cluster completely while still allowing external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSG) to restrict network traffic. ![External VNet mode](./media/api-management-aks/vnet-external.png)
-If all API consumers reside within the cluster VNet, then the Internal mode (Fig. 5) could be used. In this mode, the API Management gateway is injected into the cluster VNET and accessible only from within this VNet via an internal load balancer. There is no way to reach the API Management gateway or the AKS cluster from public internet.
+If all API consumers reside within the cluster VNet, then the Internal mode (Fig. 5) could be used. In this mode, the API Management gateway is injected into the cluster VNET and accessible only from within this VNet via an internal load balancer. There's no way to reach the API Management gateway or the AKS cluster from public internet.
![Internal VNet mode](./media/api-management-aks/vnet-internal.png)
- In both cases, the AKS cluster is not publicly visible. Compared to Option 2, the Ingress Controller may not be necessary. Depending on your scenario and configuration, authentication might still be required between API Management and your microservices. For instance, if a Service Mesh is adopted, it always requires mutual TLS authentication.
+ In both cases, the AKS cluster isn't publicly visible. Compared to Option 2, the Ingress Controller may not be necessary. Depending on your scenario and configuration, authentication might still be required between API Management and your microservices. For instance, if a Service Mesh is adopted, it always requires mutual TLS authentication.
Pros: * The most secure option because the AKS cluster has no public endpoint
Cons:
## Next steps * Learn more about [Network concepts for applications in AKS](../aks/concepts-network.md)
-* Learn more about [How to use API Management with virtual networks](./api-management-using-with-vnet.md)
+* Learn more about [How to use API Management with virtual networks](./virtual-network-concepts.md)
api-management Api Management Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-subscriptions.md
A subscriber can use an API Management subscription key in one of two ways:
> [!TIP] > **Ocp-Apim-Subscription-Key** is the default name of the subscription key header, and **subscription-key** is the default name of the query parameter. If desired, you may modify these names in the settings for each API. For example, in the portal, update these names on the **Settings** tab of an API.
+> [!NOTE]
+> When included in a request header or query parameter, the subscription key by default is passed to the backend and may be exposed in backend monitoring logs or other systems. If this is considered sensitive data, you can configure a policy in the `outbound` section to remove the subscription key header ([`set-header`](set-header-policy.md)) or query parameter ([`set-query-parameter`](set-query-parameter-policy.md)).
+ ## Enable or disable subscription requirement for API or product access By default when you create an API, a subscription key is required for API access. Similarly, when you create a product, by default a subscription key is required to access any API that's added to the product. Under certain scenarios, an API publisher might want to publish a product or a particular API to the public without the requirement of subscriptions. While a publisher could choose to enable unsecured (anonymous) access to certain APIs, configuring another mechanism to secure client access is recommended.
api-management Cosmosdb Data Source Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cosmosdb-data-source-policy.md
Use the policy to configure a single query request, read request, delete request
<write-request type="insert | replace | upsert" consistency-level="bounded-staleness | consistent-prefix | eventual | session | strong" pre-trigger="myPreTrigger" post-trigger="myPostTrigger"> <id template="liquid"> "Item ID in container"
- </id>
+ </id>
+ <partition-key data-type="string | number | bool | none | null" template="liquid">
+ "Container partition key"
+ </partition-key>
<etag type="match | no-match" template="liquid" > "System-generated entity tag"
- </etag>
- <set-body template="liquid" >...set-body policy configuration...</set-body>
- <partition-key data-type="string | number | bool | none | null" template="liquid">
- "Container partition key"
- </partition-key>
+ </etag>
+ <set-body template="liquid" >...set-body policy configuration...</set-body>
</write-request> <response>
resourceGroupName="<MY-RESOURCE-GROUP>"
# Variable for subscription resourceGroupName="<MY-SUBSCRIPTION-NAME>"
-# Set principal variable to the value from Azure portal
+# Set principal variable to the value from Managed identities page of API Management instance in Azure portal
principal="<MY-APIM-MANAGED-ID-PRINCIPAL-ID>" # Get the scope value of Cosmos DB account
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
# Configure a Java app for Azure App Service
+> [!NOTE]
+> For Spring applications, we recommend using Azure Spring Apps. However, you can still use Azure App Service as a destination.
+ Azure App Service lets Java developers quickly build, deploy, and scale their Java SE, Tomcat, and JBoss EAP web applications on a fully managed service. Deploy applications with Maven plugins, from the command line, or in editors like IntelliJ, Eclipse, or Visual Studio Code. This guide provides key concepts and instructions for Java developers using App Service. If you've never used Azure App Service, you should read through the [Java quickstart](quickstart-java.md) first. General questions about using App Service that aren't specific to Java development are answered in the [App Service FAQ](faq-configuration-and-management.yml).
app-service App Service App Service Environment Control Inbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-control-inbound-traffic.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service App Service App Service Environment Create Ilb Ase Resourcemanager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-create-ilb-ase-resourcemanager.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service App Service App Service Environment Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-intro.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service App Service App Service Environment Layered Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-layered-security.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Since App Service Environments provide an isolated runtime environment deployed into a virtual network, developers can create a layered security architecture providing differing levels of network access for each physical application tier.
app-service App Service App Service Environment Network Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-architecture-overview.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> App Service Environments are always created within a subnet of a [virtual network][virtualnetwork] - apps running in an App Service Environment can communicate with private endpoints located within the same virtual network topology. Since customers may lock down parts of their virtual network infrastructure, it is important to understand the types of network communication flows that occur with an App Service Environment.
app-service App Service App Service Environment Network Configuration Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-network-configuration-expressroute.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Customers can connect an [Azure ExpressRoute][ExpressRoute] circuit to their virtual network infrastructure to extend their on-premises network to Azure. App Service Environment is created in a subnet of the [virtual network][virtualnetwork] infrastructure. Apps that run on App Service Environment establish secure connections to back-end resources that are accessible only over the ExpressRoute connection.
app-service App Service App Service Environment Securely Connecting To Backend Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-app-service-environment-securely-connecting-to-backend-resources.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Since an App Service Environment is always created in **either** an Azure Resource Manager virtual network, **or** a classic deployment model [virtual network][virtualnetwork], outbound connections from an App Service Environment to other backend resources can flow exclusively over the virtual network. As of June 2016, ASEs can also be deployed into virtual networks that use either public address ranges or RFC1918 address spaces (private addresses).
app-service App Service Environment Auto Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-environment-auto-scale.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Azure App Service environments support *autoscaling*. You can autoscale individual worker pools based on metrics or schedule.
app-service App Service Web Configure An App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-configure-an-app-service-environment.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service App Service Web Scale A Web App In An App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/app-service-web-scale-a-web-app-in-an-app-service-environment.md
> [!IMPORTANT] > This article is about App Service Environment v1. [App Service Environment v1 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> In the Azure App Service there are normally three things you can scale:
app-service Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/certificates.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The App Service Environment (ASE) is a deployment of the Azure App Service that runs within your Azure Virtual Network (VNet). It can be deployed with an internet-accessible application endpoint or an application endpoint that is in your VNet. If you deploy the ASE with an internet-accessible endpoint, that deployment is called an External ASE. If you deploy the ASE with an endpoint in your VNet, that deployment is called an ILB ASE. You can learn more about the ILB ASE from the [Create and use an ILB ASE](./create-ilb-ase.md) document.
app-service Create External Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-external-ase.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
app-service Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-from-template.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> Azure App Service environments (ASEs) can be created with an internet-accessible endpoint or an endpoint on an internal address in an Azure Virtual Network. When created with an internal endpoint, that endpoint is provided by an Azure component called an internal load balancer (ILB). The ASE on an internal IP address is called an ILB ASE. The ASE with a public endpoint is called an External ASE.
app-service Create Ilb Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-ilb-ase.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The Azure App Service Environment is a deployment of Azure App Service into a subnet in an Azure virtual network (VNet). There are two ways to deploy an App Service Environment (ASE):
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/firewall-integration.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The App Service Environment (ASE) has many external dependencies that it requires access to in order to function properly. The ASE lives in the customer Azure Virtual Network. Customers must allow the ASE dependency traffic, which is a problem for customers that want to lock down all egress from their virtual network.
app-service Forced Tunnel Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/forced-tunnel-support.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> The App Service Environment (ASE) is a deployment of Azure App Service in a customer's Azure Virtual Network. Many customers configure their Azure virtual networks to be extensions of their on-premises networks with VPNs or Azure ExpressRoute connections. Forced tunneling is when you redirect internet bound traffic to your VPN or a virtual appliance instead. Virtual appliances are often used to inspect and audit outbound network traffic.
app-service How To Custom Domain Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-custom-domain-suffix.md
Unlike earlier versions, the FTPS endpoints for your App Services on your App Se
## Prerequisites - ILB variation of App Service Environment v3.
+- The Azure Key Vault that holds the certificate must be publicly accessible so that the certificate can be fetched.
- Valid SSL/TLS certificate must be stored in an Azure Key Vault in .PFX format. For more information on using certificates with App Service, see [Add a TLS/SSL certificate in Azure App Service](../configure-ssl-certificate.md). ### Managed identity
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
Under **Get new IP addresses**, confirm you understand the implications and star
## 3. Update dependent resources with new IPs
-When the previous step finishes, you're shown the IP addresses for your new App Service Environment v3. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer, which now uses port 80. Don't move on to the next step until you confirmed that you made these updates.
+When the previous step finishes, you're shown the IP addresses for your new App Service Environment v3. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer, which now uses port 80. Don't move on to the next step until you confirm that you made these updates.
:::image type="content" source="./media/migration/ip-sample.png" alt-text="Screenshot that shows sample IPs generated during premigration.":::
app-service Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/intro.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Overview
app-service Management Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/management-addresses.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> [!INCLUDE [azure-CLI-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 12/14/2023 Last updated : 01/30/2024
App Service can now automate migration of your App Service Environment v1 and v2
At this time, the migration feature doesn't support migrations to App Service Environment v3 in the following regions:
-### Azure Public
--- Jio India West- ### Microsoft Azure operated by 21Vianet - China East 2
The following App Service Environment configurations can be migrated using the m
|ILB App Service Environment v1 |ILB App Service Environment v3 | |ELB App Service Environment v1 |ELB App Service Environment v3 | |ILB App Service Environment v1 with a custom domain suffix |ILB App Service Environment v3 with a custom domain suffix |
+|[Zone pinned](zone-redundancy.md) App Service Environment v2 |App Service Environment v3 with optional zone redundancy configuration |
If you want your new App Service Environment v3 to use a custom domain suffix and you aren't using one currently, custom domain suffix can be configured at any time once migration is complete. For more information, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md).
The migration feature doesn't support the following scenarios. See the [manual m
- App Service Environment v1 in a [Classic VNet](/previous-versions/azure/virtual-network/create-virtual-network-classic) - ELB App Service Environment v2 with IP SSL addresses - ELB App Service Environment v1 with IP SSL addresses-- [Zone pinned](zone-redundancy.md) App Service Environment v2 - App Service Environment in a region not listed in the supported regions The App Service platform reviews your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates.
If your App Service Environment doesn't pass the validation checks or you try to
|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic VNets can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). | |ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to be available in your region. | |Migration cannot be called on this ASE, please contact support for help migrating. |Support needs to be engaged for migrating this App Service Environment. This issue is potentially due to custom settings used by this environment. |Engage support to resolve your issue. |
-|Migrate cannot be called on Zone Pinned ASEs. |App Service Environment v2 that is zone pinned can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
|Migrate cannot be called if IP SSL is enabled on any of the sites.|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. | |Full migration cannot be called before IP addresses are generated. |This error appears if you attempt to migrate before finishing the premigration steps. |Ensure you complete all premigration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-migrate.md). | |Migration to ASEv3 is not allowed for this ASE. |You can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
For more scenarios on cost changes and savings opportunities with App Service En
- **What if my App Service Environment has a custom domain suffix?** The migration feature supports this [migration scenario](#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after. - **What if my App Service Environment is zone pinned?**
- Zone pinned App Service Environment is currently not a supported scenario for migration using the migration feature. App Service Environment v3 doesn't support zone pinning. To migrate to App Service Environment v3, see the [manual migration options](migration-alternatives.md).
+ Zone pinned App Service Environment v2 is now a supported scenario for migration using the migration feature. App Service Environment v3 doesn't support zone pinning. When migrating to App Service Environment v3, you can choose whether to configure zone redundancy.
- **What if my App Service Environment has IP SSL addresses?** IP SSL isn't supported on App Service Environment v3. You must remove all IP SSL bindings before migrating using the migration feature or one of the manual options. If you intend to use the migration feature, once you remove all IP SSL bindings, you pass that validation check and can proceed with the automated migration. - **What properties of my App Service Environment will change?**
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: How to migrate your applications to App Service Environment v3 Previously updated : 07/24/2023 Last updated : 01/30/2024 # Migrate to App Service Environment v3
Once your migration and any testing with your new environment is complete, delet
No, apps that run on App Service Environment v1 and v2 shouldn't need any modifications to run on App Service Environment v3. If you're using IP SSL, you must remove the IP SSL bindings before migrating. - **What if my App Service Environment has a custom domain suffix?** The migration feature supports this [migration scenario](./migrate.md#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after. -- **What if my App Service Environment is zone pinned?**
- Zone pinning isn't a supported feature on App Service Environment v3.
+- **What if my App Service Environment v2 is zone pinned?**
+ Zone pinning isn't a supported feature on App Service Environment v3. You can choose to enable zone redundancy when creating your App Service Environment v3.
- **What properties of my App Service Environment will change?** You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). - **Is backup and restore supported for moving apps from App Service Environment v2 to v3?**
app-service Network Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/network-info.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> [App Service Environment][Intro] is a deployment of Azure App Service into a subnet in your Azure virtual network. There are two deployment types for an App Service Environment:
app-service Upgrade To Asev3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md
description: Take the first steps toward upgrading to App Service Environment v3
Previously updated : 12/11/2023 Last updated : 1/31/2024 # Upgrade to App Service Environment v3
This page is your one-stop shop for guidance and resources to help you upgrade s
|**2**|**Migrate**|Based on results of your review, either upgrade using the migration feature or follow the manual steps.<br><br>- [Use the automated migration feature](how-to-migrate.md)<br>- [Migrate manually](migration-alternatives.md)| |**3**|**Testing and troubleshooting**|Upgrading using the automated migration feature requires a 3-6 hour service window. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).| |**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Explore reserved instance pricing, savings plans, and check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [How reservation discounts apply to Isolated v2 instances](../../cost-management-billing/reservations/reservation-discount-app-service.md#how-reservation-discounts-apply-to-isolated-v2-instances)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)|
-|**5**|**Learn more**|Join the [free live webinar](https://developer.microsoft.com/en-us/reactor/events/20417) with FastTrack Architects.<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)|
+|**5**|**Learn more**|On-demand: [Learn Live webinar with Azure FastTrack Architects](https://www.youtube.com/watch?v=lI9TK_v-dkg&ab_channel=MicrosoftDeveloper).<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)|
## Additional information
app-service Using An Ase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/using-an-ase.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> An App Service Environment (ASE) is a deployment of Azure App Service into a subnet in a customer's Azure Virtual Network instance. An ASE consists of:
app-service Version Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md
App Service Environment has three versions. App Service Environment v3 is the la
> App Service Environment v1 and v2 [will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). After that date, those versions will no longer be supported and any remaining App Service Environment v1 and v2s and the applications running on them will be deleted. There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1 or v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v1 or v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v1 or v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> ## Comparison between versions
app-service Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/zone-redundancy.md
> [!IMPORTANT] > This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.-
-As of 15 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
+>
+> As of 29 January 2024, you can no longer create new App Service Environment v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss.
> App Service Environment v2 (ASE) can be deployed into Availability Zones (AZ). Customers can deploy internal load balancer (ILB) ASEs into a specific AZ within an Azure region. If you pin your ILB ASE to a specific AZ, the resources used by an ILB ASE are either pinned to the specified AZ or deployed in a zone-redundant manner.
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
# Tutorial: Build a Java Spring Boot web app with Azure App Service on Linux and Azure Cosmos DB
+> [!NOTE]
+> For Spring applications, we recommend using Azure Spring Apps. However, you can still use Azure App Service as a destination.
+ This tutorial walks you through the process of building, configuring, deploying, and scaling Java web apps on Azure. When you are finished, you will have a [Spring Boot](https://spring.io/projects/spring-boot) application storing data in [Azure Cosmos DB](../cosmos-db/index.yml) running on [Azure App Service on Linux](overview.md).
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
automation Quickstart Cli Support Powershell Runbook Runtime Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstart-cli-support-powershell-runbook-runtime-environment.md
description: This article shows how to add support for Azure CLI in PowerShell 7
Last updated 01/17/2024 -+ # Run Azure CLI commands in PowerShell 7.2 runbooks
automation Runtime Environment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runtime-environment-overview.md
Last updated 01/24/2024 -+ # Runtime environment in Azure Automation
While the new Runtime environment experience is recommended, you can also switch
* To work with runbooks and Runtime environment, see [Manage Runtime environment](manage-runtime-environment.md). * For details of PowerShell, see [PowerShell Docs](/powershell/scripting/overview).-
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-app-configuration Quickstart Feature Flag Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet.md
Last updated 3/20/2023
#Customer intent: As a .NET Framework developer, I want to use feature flags to control feature availability quickly and confidently.
-# Quickstart: Add feature flags to a .NET Framework app
+# Quickstart: Add feature flags to a .NET Framework console app
In this quickstart, you incorporate Azure App Configuration into a .NET Framework app to create an end-to-end implementation of feature management. You can use the App Configuration service to centrally store all your feature flags and control their states.
Add a feature flag called *Beta* to the App Configuration store and leave **Labe
> [!div class="mx-imgBorder"] > ![Enable feature flag named Beta](media/add-beta-feature-flag.png)
-## Create a .NET console app
+## Create a .NET Framework console app
1. Start Visual Studio, and select **File** > **New** > **Project**.
Add a feature flag called *Beta* to the App Configuration store and leave **Labe
1. Right-click your project, and select **Manage NuGet Packages**. On the **Browse** tab, search and add the following NuGet packages to your project. ```
- Microsoft.Extensions.DependencyInjection
Microsoft.Extensions.Configuration.AzureAppConfiguration Microsoft.FeatureManagement ```
Add a feature flag called *Beta* to the App Configuration store and leave **Labe
1. Open *Program.cs* and add the following statements: ```csharp
- using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration; using Microsoft.Extensions.Configuration.AzureAppConfiguration; using Microsoft.FeatureManagement; using System.Threading.Tasks; ```
-1. Update the `Main` method to connect to App Configuration, specifying the `UseFeatureFlags` option so that feature flags are retrieved. Then display a message if the `Beta` feature flag is enabled.
+1. Update the `Main` method to connect to App Configuration, specifying the `UseFeatureFlags` option so that feature flags are retrieved. Create a `ConfigurationFeatureDefinitionProvider` to provide feature flag definitions from the configuration and a `FeatureManager` to evaluate feature flags' state. Then display a message if the `Beta` feature flag is enabled.
```csharp public static async Task Main(string[] args) {
- IConfigurationRoot configuration = new ConfigurationBuilder()
+ IConfiguration configuration = new ConfigurationBuilder()
.AddAzureAppConfiguration(options => { options.Connect(Environment.GetEnvironmentVariable("ConnectionString")) .UseFeatureFlags(); }).Build();
- IServiceCollection services = new ServiceCollection();
+ IFeatureDefinitionProvider featureDefinitionProvider = new ConfigurationFeatureDefinitionProvider(configuration);
- services.AddSingleton<IConfiguration>(configuration).AddFeatureManagement();
+ IFeatureManager featureManager = new FeatureManager(
+ featureDefinitionProvider,
+ new FeatureManagementOptions());
- using (ServiceProvider serviceProvider = services.BuildServiceProvider())
+ if (await featureManager.IsEnabledAsync("Beta"))
{
- IFeatureManager featureManager = serviceProvider.GetRequiredService<IFeatureManager>();
-
- if (await featureManager.IsEnabledAsync("Beta"))
- {
- Console.WriteLine("Welcome to the beta!");
- }
+ Console.WriteLine("Welcome to the beta!");
} Console.WriteLine("Hello World!");
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024 #
azure-arc Onboard Dsc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-dsc.md
Using [Windows PowerShell Desired State Configuration](/powershell/dsc/getting-s
- Windows PowerShell version 4.0 or higher -- The [AzureConnectedMachineDsc](https://www.powershellgallery.com/packages/AzureConnectedMachineDsc) DSC module
+- The AzureConnectedMachineDsc module
- A service principal to connect the machines to Azure Arc-enabled servers non-interactively. Follow the steps under the section [Create a Service Principal for onboarding at scale](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale) if you have not created a service principal for Azure Arc-enabled servers already.
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
Complete the following tutorial to create a new function app a secured storage a
# [Deployment templates](#tab/templates)
-Use Bicep or Azure Resource Manager (ARM) [quickstart templates](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/function-app-storage-private-endpoints) to create secured function app and storage account resources.
+Use Bicep files or Azure Resource Manager (ARM) templates to create secured function app and storage account resources. When you create a secured storage account in an automated deployment, you must also specifically set the `WEBSITE_CONTENTSHARE` setting and create the file share as part of your deployment. For more information, including links to example deployments, see [Secured deployments](functions-infrastructure-as-code.md#secured-deployments).
azure-functions Create First Function Cli Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-java.md
Before you begin, you must have the following:
+ The [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
-+ The [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 or 11. The `JAVA_HOME` environment variable must be set to the install location of the correct version of the JDK.
++ The [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8, 11, 17, or 21 (Java 21 is supported on Linux only). The `JAVA_HOME` environment variable must be set to the install location of the correct version of the JDK. + [Apache Maven](https://maven.apache.org), version 3.0 or above.
azure-functions Create First Function Vs Code Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-java.md
In this section, you use Visual Studio Code to create a local Azure Functions pr
|Prompt|Selection| |--|--| |**Select a language**| Choose `Java`.|
- |**Select a version of Java**| Choose `Java 11` or `Java 8`, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally. |
+ |**Select a version of Java**| Choose `Java 8`, `Java 11`, `Java 17`, or `Java 21`, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally. |
| **Provide a group ID** | Choose `com.function`. | | **Provide an artifact ID** | Choose `myFunction`. | | **Provide a version** | Choose `1.0-SNAPSHOT`. |
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The share is created when your function app is created. Changing or removing thi
The following considerations apply when using an Azure Resource Manager (ARM) template or Bicep file to create a function app during deployment: + When you don't set a `WEBSITE_CONTENTSHARE` value for the main function app or any apps in slots, unique share values are generated for you. Not setting `WEBSITE_CONTENTSHARE` _is the recommended approach_ for an ARM template deployment.
-+ There are scenarios where you must set the `WEBSITE_CONTENTSHARE` value to a predefined share, such as when you [use a secured storage account in a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). In this case, you must set a unique share name for the main function app and the app for each deployment slot.
++ There are scenarios where you must set `WEBSITE_CONTENTSHARE` to a predefined value, such as when you [use a secured storage account in a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). In this case, you must set a unique share name for the main function app and the app for each deployment slot. For a storage account secured by a virtual network, you must also create the share itself as part of your automated deployment. For more information, see [Secured deployments](functions-infrastructure-as-code.md#secured-deployments). + Don't make `WEBSITE_CONTENTSHARE` a slot setting. + When you specify `WEBSITE_CONTENTSHARE`, the value must follow [this guidance for share names](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#share-names).
azure-functions Functions Bindings Cosmosdb V2 Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md
Title: Azure Cosmos DB trigger for Functions 2.x and higher description: Learn to use the Azure Cosmos DB trigger in Azure Functions. Previously updated : 04/04/2023 Last updated : 01/19/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, powershell, python
The following examples depend on the extension version for the given C# mode.
# [Extension 4.x+](#tab/extensionv4/in-process)
-Apps using [Azure Cosmos DB extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) or higher will have different attribute properties, which are shown below. This example refers to a simple `ToDoItem` type.
+Apps using [Azure Cosmos DB extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4) or higher have different attribute properties, which are shown here. This example refers to a simple `ToDoItem` type.
```cs namespace CosmosDBSamplesV2
public void Run([CosmosDBTrigger(
The following code defines a `MyDocument` type: An [`IReadOnlyList<T>`](/dotnet/api/system.collections.generic.ireadonlylist-1) is used as the Azure Cosmos DB trigger binding parameter in the following example:
This example requires the following `using` statements:
::: zone-end ::: zone pivot="programming-language-java"
-This function is invoked when there are inserts or updates in the specified database and collection.
+This function is invoked when there are inserts or updates in the specified database and container.
+
+# [Extension 4.x+](#tab/extensionv4)
++
+```java
+ @FunctionName("CosmosDBTriggerFunction")
+ public void run(
+ @CosmosDBTrigger(
+ name = "items",
+ databaseName = "ToDoList",
+ containerName = "Items",
+ leaseContainerName="leases",
+ connection = "AzureCosmosDBConnection",
+ createLeaseContainerIfNotExists = true
+ )
+ Object inputItem,
+ final ExecutionContext context
+ ) {
+ context.getLogger().info("Item modified: " + inputItem);
+ }
+```
# [Functions 2.x+](#tab/functionsv2)
This function is invoked when there are inserts or updates in the specified data
context.getLogger().info(items.length + "item(s) is/are changed."); } ```
-# [Extension 4.x+](#tab/extensionv4)
-
The following example shows an Azure Cosmos DB trigger [TypeScript function](fun
# [Model v3](#tab/nodejs-v3)
-TypeScript samples are not documented for model v3.
+TypeScript samples aren't documented for model v3.
For Python functions defined by using *function.json*, see the [Configuration](#
::: zone pivot="programming-language-java" ## Annotations
+# [Extension 4.x+](#tab/extensionv4)
++
+Use the `@CosmosDBTrigger` annotation on parameters that read data from Azure Cosmos DB. The annotation supports the following properties:
+
+|Attribute property | Description|
+||-|
+|**connection** | The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account being monitored. For more information, see [Connections](#connections).|
+|**name** | The name of the function. |
+|**databaseName** | The name of the Azure Cosmos DB database with the container being monitored. |
+|**containerName** | The name of the container being monitored. |
+|**leaseConnectionStringSetting** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease container. <br><br> When not set, the `Connection` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases container must have write permissions.|
+|**leaseDatabaseName** | (Optional) The name of the database that holds the container used to store leases. When not set, the value of the `databaseName` setting is used. |
+|**leaseContainerName** | (Optional) The name of the container used to store leases. When not set, the value `leases` is used. |
+|**createLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When you use Microsoft Entra identities and set this value to `true`, creating containers isn't [an allowed operation](../cosmos-db/nosql/troubleshoot-forbidden.md#non-data-operations-are-not-allowed) and your function won't start.|
+|**leasesContainerThroughput** | (Optional) Defines the number of Request Units to assign when the leases container is created. This setting is only used when `CreateLeaseContainerIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. |
+|**leaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease container for this function. Using a prefix allows two separate Azure Functions to share the same Lease container by using different prefixes. |
+|**feedPollDelay**| (Optional) The time (in milliseconds) for the delay between polling a partition for new changes on the feed, after all current changes are drained. Default is 5,000 milliseconds, or 5 seconds.|
+|**leaseAcquireInterval**| (Optional) When set, it defines, in milliseconds, the interval to kick off a task to compute if partitions are distributed evenly among known host instances. Default is 13000 (13 seconds). |
+|**leaseExpirationInterval**| (Optional) When set, it defines, in milliseconds, the interval for which the lease is taken on a lease representing a partition. If the lease isn't renewed within this interval, it will expire and ownership of the partition moves to another instance. Default is 60000 (60 seconds).|
+|**leaseRenewInterval**| (Optional) When set, it defines, in milliseconds, the renew interval for all leases for partitions currently held by an instance. Default is 17000 (17 seconds). |
+|**maxItemsPerInvocation**| (Optional) When set, this property sets the maximum number of items received per Function call. If operations in the monitored container are performed through stored procedures, [transaction scope](../cosmos-db/nosql/stored-procedures-triggers-udfs.md#transactions) is preserved when reading items from the change feed. As a result, the number of items received could be higher than the specified value so that the items changed by the same transaction are returned as part of one atomic batch. |
+|**startFromBeginning**| (Optional) This option tells the Trigger to read changes from the beginning of the container's change history instead of starting at the current time. Reading from the beginning only works the first time the trigger starts, as in subsequent runs, the checkpoints are already stored. Setting this option to `true` when there are leases already created has no effect. |
+|**preferredLocations**| (Optional) Defines preferred locations (regions) for geo-replicated database accounts in the Azure Cosmos DB service. Values should be comma-separated. For example, "East US,South Central US,North Europe". |
+ # [Functions 2.x+](#tab/functionsv2)
-From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBInput` annotation on parameters that read data from Azure Cosmos DB. The annotation supports the following properties:
+From the [Java functions runtime library](/java/api/overview/azure/functions/runtime), use the `@CosmosDBTrigger` annotation on parameters that read data from Azure Cosmos DB. The annotation supports the following properties:
+ [name](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.name) + [connectionStringSetting](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.connectionstringsetting)
From the [Java functions runtime library](/java/api/overview/azure/functions/run
+ [startFromBeginning](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.startfrombeginning) + [preferredLocations](/java/api/com.microsoft.azure.functions.annotation.cosmosdbtrigger.preferredlocations)
-# [Extension 4.x+](#tab/extensionv4)
-- ::: zone-end
The following table explains the binding configuration properties that you set i
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python"
-# [Functions 2.x+](#tab/functionsv2)
-- # [Extension 4.x+](#tab/extensionv4) [!INCLUDE [functions-cosmosdb-settings-v4](../../includes/functions-cosmosdb-settings-v4.md)]
+# [Functions 2.x+](#tab/functionsv2)
++ ::: zone-end
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
The following table indicates which triggers support retries and where the retry
### Retry policies
-Starting with version 3.x of the Azure Functions runtime, you can define retry policies for Timer, Kafka, and Event Hubs triggers that are enforced by the Functions runtime.
+Starting with version 3.x of the Azure Functions runtime, you can define retry policies for Timer, Kafka, Event Hubs, and Azure Cosmos DB triggers that are enforced by the Functions runtime.
The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.
-A retry policy is evaluated when a Timer, Kafka, or Event Hubs-triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry.
+A retry policy is evaluated when a Timer, Kafka, Event Hubs, or Azure Cosmos DB-triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry.
> [!IMPORTANT] > Event Hubs checkpoints won't be written until the retry policy for the execution has finished. Because of this behavior, progress on the specific partition is paused until the current batch has finished.
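As a sketch only, not taken from the article, a fixed-delay retry policy combined with an Azure Cosmos DB trigger could be declared in *function.json* as follows; the database, container, and connection names are illustrative:

```json
{
  "retry": {
    "strategy": "fixedDelay",
    "maxRetryCount": 4,
    "delayInterval": "00:00:10"
  },
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "direction": "in",
      "name": "documents",
      "connection": "CosmosDBConnection",
      "databaseName": "ToDoList",
      "containerName": "Items",
      "leaseContainerName": "leases",
      "createLeaseContainerIfNotExists": true
    }
  ]
}
```

With this configuration, an uncaught exception would be retried up to four times, ten seconds apart, before the execution is treated as failed.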
azure-functions Functions Bindings Service Bus Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md
Title: Azure Service Bus output bindings for Azure Functions
description: Learn to send Azure Service Bus messages from Azure Functions. ms.assetid: daedacf0-6546-4355-a65c-50873e74f66b Previously updated : 03/06/2023 Last updated : 01/15/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, powershell, python
This example shows a [C# function](dotnet-isolated-process-guide.md) that receiv
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/ServiceBus/ServiceBusReceivedMessageFunctions.cs" id="docsnippet_servicebus_readmessage":::
+&nbsp;
+<hr/>
+
+This example uses an HTTP trigger with an `OutputType` object to both send an HTTP response and write the output message.
+
+```csharp
+[Function("HttpSendMsg")]
+public async Task<OutputType> Run([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req, FunctionContext context)
+{
+ _logger.LogInformation($"C# HTTP trigger function processed a request for {context.InvocationId}.");
+
+ HttpResponseData response = req.CreateResponse(HttpStatusCode.OK);
+ await response.WriteStringAsync("HTTP response: Message sent");
+
+ return new OutputType()
+ {
+ OutputEvent = "MyMessage",
+ HttpResponse = response
+ };
+}
+```
+
+This code defines the multiple output type `OutputType`, which includes the Service Bus output binding definition on `OutputEvent`:
+
+```csharp
+ public class OutputType
+{
+ [ServiceBusOutput("TopicOrQueueName", Connection = "ServiceBusConnection")]
+ public string OutputEvent { get; set; }
+
+ public HttpResponseData HttpResponse { get; set; }
+}
+```
+ # [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that sends a Service Bus queue message:
-```cs
+```csharp
[FunctionName("ServiceBusOutput")] [return: ServiceBus("myqueue", Connection = "ServiceBusConnection")] public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log)
public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log)
return input.Text; } ```
+&nbsp;
+<hr/>
+
+Instead of using the return statement to send the message, this HTTP trigger function returns an HTTP response that is different from the output message.
+
+```csharp
+[FunctionName("HttpTrigger1")]
+public static async Task<IActionResult> Run(
+[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
+[ServiceBus("TopicOrQueueName", Connection = "ServiceBusConnection")] IAsyncCollector<string> message, ILogger log)
+{
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ await message.AddAsync("MyMessage");
+ await message.AddAsync("MyMessage2");
+
+ string responseMessage = "This HTTP triggered function sent a message to Service Bus.";
+
+ return new OkObjectResult(responseMessage);
+}
+```
+ ::: zone-end
azure-functions Functions Bindings Storage Queue Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md
Access the queue message via the parameter typed as [QueueMessage](/python/api/a
## <a name="message-metadata"></a>Metadata
-The queue trigger provides several [metadata properties](./functions-bindings-expressions-patterns.md#trigger-metadata). These properties can be used as part of binding expressions in other bindings or as parameters in your code.
+The queue trigger provides several [metadata properties](./functions-bindings-expressions-patterns.md#trigger-metadata). These properties can be used as part of binding expressions in other bindings or as parameters in your code, for language workers that provide this access to message metadata.
::: zone pivot="programming-language-csharp"
-The properties are members of the [CloudQueueMessage] class.
+The message metadata properties are members of the [CloudQueueMessage] class.
+The message metadata properties can be accessed from `context.triggerMetadata`.
+The message metadata properties can be accessed from the passed `$TriggerMetadata` parameter.
::: zone-end |Property|Type|Description| |--|-|--| |`QueueTrigger`|`string`|Queue payload (if a valid string). If the queue message payload is a string, `QueueTrigger` has the same value as the variable named by the `name` property in *function.json*.|
-|`DequeueCount`|`int`|The number of times this message has been dequeued.|
+|`DequeueCount`|`long`|The number of times this message has been dequeued.|
|`ExpirationTime`|`DateTimeOffset`|The time that the message expires.| |`Id`|`string`|Queue message ID.| |`InsertionTime`|`DateTimeOffset`|The time that the message was added to the queue.| |`NextVisibleTime`|`DateTimeOffset`|The time that the message will next be visible.| |`PopReceipt`|`string`|The message's pop receipt.|
+The following message metadata properties can be accessed from the passed binding parameter (`msg` in previous [examples](#example)).
+
+|Property|Description|
+|--|-|
+|`body`| Queue payload as a string.|
+|`dequeue_count`| The number of times this message has been dequeued.|
+|`expiration_time`|The time that the message expires.|
+|`id`| Queue message ID.|
+|`insertion_time`|The time that the message was added to the queue.|
+|`time_next_visible`|The time that the message will next be visible.|
+|`pop_receipt`|The message's pop receipt.|
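As an illustrative sketch, with hypothetical queue and blob names, a *function.json* could reference the message's `Id` metadata in a blob output path through a binding expression:

```json
{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "msg",
      "queueName": "incoming-orders",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "blob",
      "direction": "out",
      "name": "outputBlob",
      "path": "processed/{Id}.json",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```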
++ [!INCLUDE [functions-storage-queue-connections](../../includes/functions-storage-queue-connections.md)] ## Poison messages
To handle poison messages manually, check the [dequeueCount](#message-metadata)
## Peek lock+ The peek-lock pattern happens automatically for queue triggers. As messages are dequeued, they are marked as invisible and associated with a 10-minute timeout managed by the Storage service. This timeout can't be changed. When the function starts, it starts processing a message under the following conditions.
azure-functions Functions Create First Java Gradle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-java-gradle.md
This article shows you how to build and publish a Java function project to Azure
To develop functions using Java, you must have the following installed: -- [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8
+- [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8, 11, 17 or 21. (Java 21 is currently supported in preview on Linux only)
- [Azure CLI] - [Azure Functions Core Tools](./functions-run-local.md#v2) version 2.6.666 or above - [Gradle](https://gradle.org/), version 6.8 and above
azure-functions Functions Create Maven Eclipse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-eclipse.md
This article shows you how to create a [serverless](https://azure.microsoft.com/
To develop a functions app with Java and Eclipse, you must have the following installed: -- [Java Developer Kit](https://www.azul.com/downloads/zulu/), version 8.
+- [Java Developer Kit](https://learn.microsoft.com/java/openjdk/download#openjdk-17), version 8, 11, 17 or 21. (Java 21 is currently supported in preview only on Linux)
- [Apache Maven](https://maven.apache.org), version 3.0 or above. - [Eclipse](https://www.eclipse.org/downloads/packages/), with Java and Maven support. - [Azure CLI](/cli/azure)
azure-functions Functions Create Maven Intellij https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-maven-intellij.md
Specifically, this article shows you:
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).-- An [Azure supported Java Development Kit (JDK)](/azure/developer/java/fundamentals/java-support-on-azure) for Java, version 8, 11, or 17
+- An [Azure supported Java Development Kit (JDK)](/azure/developer/java/fundamentals/java-support-on-azure), version 8, 11, 17, or 21. (Java 21 is currently supported in preview on Linux only)
- An [IntelliJ IDEA](https://www.jetbrains.com/idea/download/) Ultimate Edition or Community Edition installed - [Maven 3.5.0+](https://maven.apache.org/download.cgi) - Latest [Function Core Tools](https://github.com/Azure/azure-functions-core-tools)
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
Keep the following considerations in mind when working with slot deployments:
:::zone pivot="premium-plan,dedicated-plan" ## Secured deployments
-You can create your function app in a deployment where one or more of the resources have been secured by integrating with virtual networks. Virtual network integration for your function app is defined by a `Microsoft.Web/sites/networkConfig` resource. This integration depends on both the referenced function app and virtual network resources. You function app might also depend on other private networking resources, such as private endpoints and routes. For more information, see [Azure Functions networking options](functions-networking-options.md).
+You can create your function app in a deployment where one or more of the resources have been secured by integrating with virtual networks. Virtual network integration for your function app is defined by a `Microsoft.Web/sites/networkConfig` resource. This integration depends on both the referenced function app and virtual network resources. Your function app might also depend on other private networking resources, such as private endpoints and routes. For more information, see [Azure Functions networking options](functions-networking-options.md).
+
+When creating a deployment that uses a secured storage account, you must both explicitly set the `WEBSITE_CONTENTSHARE` setting and create the file share resource named in this setting. Make sure you create a `Microsoft.Storage/storageAccounts/fileServices/shares` resource using the value of `WEBSITE_CONTENTSHARE`, as shown in this example ([ARM template](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-private-endpoints-storage-private-endpoints/azuredeploy.json#L467)|[Bicep file](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-private-endpoints-storage-private-endpoints/main.bicep#L351)).
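As a sketch of that requirement, with the API version and parameter names as placeholders rather than values from the linked samples, the ARM template resource for the content share could look like:

```json
{
  "type": "Microsoft.Storage/storageAccounts/fileServices/shares",
  "apiVersion": "2022-05-01",
  "name": "[format('{0}/default/{1}', parameters('storageAccountName'), parameters('contentShareName'))]",
  "dependsOn": [
    "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
  ]
}
```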
These projects provide both Bicep and ARM template examples of how to deploy your function apps in a virtual network, including with network access restrictions:
azure-functions Functions Manually Run Non Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-manually-run-non-http.md
Title: Manually run a non HTTP-triggered Azure Functions description: Use an HTTP request to run a non-HTTP triggered Azure Functions Previously updated : 11/29/2023 Last updated : 01/15/2024 # Manually run a non HTTP-triggered function
In this example, replace `<APP_NAME>` and `<RESOURCE_GROUP>` with the name of yo
:::image type="content" source="./media/functions-manually-run-non-http/functions-manually-run-non-http-body.png" alt-text="Postman body settings." border="true":::
- The `<TRIGGER_INPUT>` you supply depends on the type of trigger. For services that use JSON payloads, such as Azure Service Bus, the test JSON payload should be escaped and serialized as a string. If you don't want to pass input data to the function, you must still supply an empty dictionary `{}` as the body of the POST request. For more information, see the reference article for the specific non-HTTP trigger.
+ The specific `<TRIGGER_INPUT>` you supply depends on the type of trigger, but it can only be a string, numeric, or boolean value. For services that use JSON payloads, such as Azure Service Bus, the test JSON payload should be escaped and serialized as a string.
+
+ If you don't want to pass input data to the function, you must still supply an empty dictionary `{}` as the body of the POST request. For more information, see the reference article for the specific non-HTTP trigger.
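For example, the POST body for a queue-triggered function could be as simple as the following, where the `input` value is an illustrative string:

```json
{
  "input": "test queue message"
}
```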
1. Select **Send**.
In this example, replace `<APP_NAME>` and `<RESOURCE_GROUP>` with the name of yo
:::image type="content" source="./media/functions-manually-run-non-http/azure-portal-functions-master-key-logs.png" alt-text="View the logs to see the master key test results." border="true":::
+The way that you access data sent to the trigger depends on the type of trigger and your function language. For more information, see the reference examples for your [specific trigger](functions-triggers-bindings.md).
+ ## Next steps > [!div class="nextstepaction"]
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md
You can host function apps in several ways:
Use the following resources to quickly get started with Azure Functions networking scenarios. These resources are referenced throughout the article.
-* ARM, Bicep, and Terraform templates:
+* ARM templates, Bicep files, and Terraform templates:
* [Private HTTP triggered function app](https://github.com/Azure-Samples/function-app-with-private-http-endpoint) * [Private Event Hubs triggered function app](https://github.com/Azure-Samples/function-app-with-private-eventhub) * ARM templates only:
To learn more, see [Virtual network service endpoints](../virtual-network/virtua
To restrict access to a specific subnet, create a restriction rule with a **Virtual Network** type. You can then select the subscription, virtual network, and subnet that you want to allow or deny access to.
-If service endpoints aren't already enabled with Microsoft.Web for the subnet that you selected, they are automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
+If service endpoints aren't already enabled with `Microsoft.Web` for the subnet that you selected, they're automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app is configured for service endpoints in anticipation of having them enabled later on the subnet.
Currently, you can use non-HTTP trigger functions from within a virtual network
### Premium plan with virtual network triggers
-When you run a Premium plan, you can connect non-HTTP trigger functions to services that run inside a virtual network. To do this, you must enable virtual network trigger support for your function app. The **Runtime Scale Monitoring** setting is found in the [Azure portal](https://portal.azure.com) under **Configuration** > **Function runtime settings**.
+The [Premium plan](functions-premium-plan.md) lets you create functions that are triggered by services inside a virtual network. These non-HTTP triggers are known as _virtual network triggers_.
+By default, virtual network triggers don't cause your function app to scale beyond their pre-warmed instance count. However, certain extensions support virtual network triggers that cause your function app to scale dynamically. You can enable this _dynamic scale monitoring_ in your function app for supported extensions in one of these ways:
+
+#### [Azure portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your function app.
-### [Azure CLI](#tab/azure-cli)
+1. Under **Settings** select **Configuration**, then in the **Function runtime settings** tab set **Runtime Scale Monitoring** to **On**.
-You can also enable virtual network triggers by using the following Azure CLI command:
+1. Select **Save** to update the function app configuration and restart the app.
++
+#### [Azure CLI](#tab/azure-cli)
```azurecli-interactive az resource update -g <resource_group> -n <function_app_name>/config/web --set properties.functionsRuntimeScaleMonitoringEnabled=1 --resource-type Microsoft.Web/sites ```
-### [Azure PowerShell](#tab/azure-powershell)
-
-You can also enable virtual network triggers by using the following Azure PowerShell command:
+#### [Azure PowerShell](#tab/azure-powershell)
```azurepowershell-interactive $Resource = Get-AzResource -ResourceGroupName <resource_group> -ResourceName <function_app_name>/config/web -ResourceType Microsoft.Web/sites
$Resource | Set-AzResource -Force
> [!TIP]
-> Enabling virtual network triggers may have an impact on the performance of your application since your App Service plan instances will need to monitor your triggers to determine when to scale. This impact is likely to be very small.
+> Enabling the monitoring of virtual network triggers may have an impact on the performance of your application, though this impact is likely to be very small.
+
+Support for dynamic scale monitoring of virtual network triggers isn't available in version 1.x of the Functions runtime.
+
+The extensions in this table support dynamic scale monitoring of virtual network triggers. To get the best scaling performance, you should upgrade to versions that also support [target-based scaling](functions-target-based-scaling.md#premium-plan-with-runtime-scale-monitoring-enabled).
-Virtual network triggers are supported in version 2.x and above of the Functions runtime. The following non-HTTP trigger types are supported.
+| Extension (minimum version) | Runtime scale monitoring only | With [target-based scaling](functions-target-based-scaling.md#premium-plan-with-runtime-scale-monitoring-enabled) |
+|--|--|--|
+|[Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB)| > 3.0.5 | > 4.1.0 |
+|[Microsoft.Azure.WebJobs.Extensions.DurableTask](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask)| > 2.0.0 | n/a |
+|[Microsoft.Azure.WebJobs.Extensions.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventHubs)| > 4.1.0 | > 5.2.0 |
+|[Microsoft.Azure.WebJobs.Extensions.ServiceBus](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus)| > 3.2.0 | > 5.9.0 |
+|[Microsoft.Azure.WebJobs.Extensions.Storage](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/) | > 3.0.10 | > 5.1.0<sup>*</sup> |
-| Extension | Minimum version |
-|--||
-|[Microsoft.Azure.WebJobs.Extensions.Storage](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/) | 3.0.10 or above |
-|[Microsoft.Azure.WebJobs.Extensions.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventHubs)| 4.1.0 or above|
-|[Microsoft.Azure.WebJobs.Extensions.ServiceBus](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus)| 3.2.0 or above|
-|[Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB)| 3.0.5 or above|
-|[Microsoft.Azure.WebJobs.Extensions.DurableTask](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask)| 2.0.0 or above|
+<sup>*</sup> Queue storage only.
> [!IMPORTANT]
-> When you enable virtual network trigger support, only the trigger types shown in the previous table scale dynamically with your application. You can still use triggers that aren't in the table, but they're not scaled beyond their pre-warmed instance count. For the complete list of triggers, see [Triggers and bindings](./functions-triggers-bindings.md#supported-bindings).
+> When you enable virtual network trigger monitoring, only triggers for these extensions can cause your app to scale dynamically. You can still use triggers from extensions that aren't in this table, but they won't cause scaling beyond their pre-warmed instance count. For a complete list of all trigger and binding extensions, see [Triggers and bindings](./functions-triggers-bindings.md#supported-bindings).
### App Service plan and App Service Environment with virtual network triggers
azure-functions Functions Node Upgrade V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-node-upgrade-v4.md
Last updated 03/15/2023 ms.devlang: javascript # ms.devlang: javascript, typescript-+ zone_pivot_groups: programming-languages-set-functions-nodejs
The types use the [`undici`](https://undici.nodejs.org/) package in Node.js. Thi
## Troubleshoot
-See the [Node.js Troubleshoot guide](./functions-node-troubleshoot.md).
+See the [Node.js Troubleshoot guide](./functions-node-troubleshoot.md).
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md
The following table shows current supported Java versions for each major version
| Functions version | Java versions (Windows) | Java versions (Linux) | | -- | -- | |
-| 4.x |17 <br/>11 <br/>8 |17 <br/>11 <br/>8 |
+| 4.x | 17 <br/>11 <br/>8 | 21 (Preview) <br/>17 <br/>11 <br/>8 |
| 3.x | 11 <br/>8 | 11 <br/>8 | | 2.x | 8 | n/a |
Unless you specify a Java version for your deployment, the Maven archetype defau
### Specify the deployment version
-You can control the version of Java targeted by the Maven archetype by using the `-DjavaVersion` parameter. The value of this parameter can be either `8` or `11`.
+You can control the version of Java targeted by the Maven archetype by using the `-DjavaVersion` parameter. The value of this parameter can be either `8`, `11`, `17` or `21`.
The Maven archetype generates a pom.xml that targets the specified Java version. The following elements in pom.xml indicate the Java version to use:
-| Element | Java 8 value | Java 11 value | Java 17 value | Description |
-| - | - | - | - | |
-| **`Java.version`** | 1.8 | 11 | 17 | Version of Java used by the maven-compiler-plugin. |
-| **`JavaVersion`** | 8 | 11 | 17 | Java version hosted by the function app in Azure. |
+| Element | Java 8 value | Java 11 value | Java 17 value | Java 21 value (Preview, Linux) | Description |
+| - | - | - | - | - | |
+| **`Java.version`** | 1.8 | 11 | 17 | 21 | Version of Java used by the maven-compiler-plugin. |
+| **`JavaVersion`** | 8 | 11 | 17 | 21 | Java version hosted by the function app in Azure. |
The following examples show the settings for Java 8 in the relevant sections of the pom.xml file:
The following example shows the operating system setting in the `runtime` sectio
## JDK runtime availability and support
-Microsoft and [Adoptium](https://adoptium.net/) builds of OpenJDK are provided and supported on Functions for Java 8 (Adoptium), 11 (MSFT) and 17(MSFT). These binaries are provided as a no-cost, multi-platform, production-ready distribution of the OpenJDK for Azure. They contain all the components for building and running Java SE applications.
+Microsoft and [Adoptium](https://adoptium.net/) builds of OpenJDK are provided and supported on Functions for Java 8 (Adoptium), Java 11, 17 and 21 (MSFT). These binaries are provided as a no-cost, multi-platform, production-ready distribution of the OpenJDK for Azure. They contain all the components for building and running Java SE applications.
For local development or testing, you can download the [Microsoft build of OpenJDK](/java/openjdk/download) or [Adoptium Temurin](https://adoptium.net/?variant=openjdk8&jvmVariant=hotspot) binaries for free. [Azure support](https://azure.microsoft.com/support/) for issues with the JDKs and function apps is available with a [qualified support plan](https://azure.microsoft.com/support/plans/).
azure-functions Functions Target Based Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md
To learn more, see the [example configurations for the supported extensions](#su
## Premium plan with runtime scale monitoring enabled
-In [runtime scale monitoring](functions-networking-options.md?tabs=azure-cli#premium-plan-with-virtual-network-triggers), the extensions handle target-based scaling. Hence, in addition to the function app runtime version requirement, your extension packages must meet the following minimum versions:
+When [runtime scale monitoring](functions-networking-options.md#premium-plan-with-virtual-network-triggers) is enabled, the extensions themselves handle dynamic scaling. This is because the [scale controller](event-driven-scaling.md#runtime-scaling) doesn't have access to services secured by a virtual network. After you enable runtime scale monitoring, you'll need to upgrade your extension packages to these minimum versions to unlock the extra target-based scaling functionality:
| Extension Name | Minimum Version Needed | | -- | - |
azure-functions Language Support Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/language-support-policy.md
To learn more about specific language version support policy timeline, visit the
|--|--| |C# (in-process model) |[link](./functions-dotnet-class-library.md#supported-versions)| |C# (isolated worker model) |[link](./dotnet-isolated-process-guide.md#supported-versions)|
+|Java |[link](./update-language-versions.md#update-the-language-version)|
|Node |[link](./functions-reference-node.md#setting-the-node-version)| |PowerShell |[link](./functions-reference-powershell.md#changing-the-powershell-version)| |Python |[link](./functions-reference-python.md#python-version)|
azure-functions Migrate Service Bus Version 4 Version 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-service-bus-version-4-version-5.md
Title: Migrate Azure Service Bus extension for Azure Functions to version 5.x description: This article shows you how to upgrade your existing function apps using the Azure Service Bus extension version 4.x to be able to use version 5.x of the extension. + Last updated 01/12/2024 zone_pivot_groups: programming-languages-set-functions
The Azure Functions Azure Service Bus extension version 5 is built on top of the
## Next steps - [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md)-- [Send Azure Service Bus messages from Azure Functions (Output binding)](./functions-bindings-service-bus-output.md)
+- [Send Azure Service Bus messages from Azure Functions (Output binding)](./functions-bindings-service-bus-output.md)
azure-health-insights Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/deploy-portal.md
Once deployment is complete, you can use the Azure portal to navigate to the new
2. Create a new **Resource group**. 3. Add a new Azure AI services account to your Resource group and search for **Health Insights**.
- ![Screenshot of how to create the new Azure AI Health Insights service.](media/create-service.png)
+ [ ![Screenshot of how to create the new Azure AI Health Insights service.](media/create-service.png)](media/create-service.png#lightbox)
or Use this [link](https://portal.azure.com/#create/Microsoft.CognitiveServicesHealthInsights) to create a new Azure AI services account.
Once deployment is complete, you can use the Azure portal to navigate to the new
- **Name**: Enter an Azure AI services account name. - **Pricing tier**: Select your pricing tier.
- ![Screenshot of how to create new Azure AI services account.](media/create-health-insights.png)
+ [ ![Screenshot of how to create new Azure AI services account.](media/create-health-insights.png)](media/create-health-insights.png#lightbox)
5. Navigate to your newly created service.
- ![Screenshot of the Overview of Azure AI services account.](media/created-health-insights.png)
+ [ ![Screenshot of the Overview of Azure AI services account.](media/created-health-insights.png)](media/created-health-insights.png#lightbox)
## Configure private endpoints
-With private endpoints, the network traffic between the clients on the VNet and the Azure AI services account run over the VNet and a private link on the Microsoft backbone network. This eliminates exposure from the public internet.
+With private endpoints, the network traffic between the clients on the VNet and the Azure AI services account run over the VNet and a private link on the Microsoft backbone network. Using private endpoints as described eliminates exposure from the public internet.
Once the Azure AI services account is successfully created, configure private endpoints from the Networking page under Resource Management.
-![Screenshot of Private Endpoint.](media/private-endpoints.png)
+[ ![Screenshot of Private Endpoint.](media/private-endpoints.png)](media/private-endpoints.png#lightbox)
## Next steps
To get started using Azure AI Health Insights, get started with one of the follo
>[!div class="nextstepaction"] > [Trial Matcher](trial-matcher/index.yml) +
+>[!div class="nextstepaction"]
+> [Radiology Insights](radiology-insights/index.yml)
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/oncophenotype/get-started.md
Once deployment is complete, you use the Azure portal to navigate to the newly c
## Example request and results
-To send an API request, you need your Azure AI services account endpoint and key. You can also find a full view on the [request parameters here](../request-info.md)
+To send an API request, you need your Azure AI services account endpoint and key. You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/create-job).
-![Screenshot of the Keys and Endpoints for the Onco-Phenotype.](../media/keys-and-endpoints.png)
+[ ![Screenshot of the Keys and Endpoints for the Onco-Phenotype.](../media/keys-and-endpoints.png)](../media/keys-and-endpoints.png#lightbox)
> [!IMPORTANT] > Prediction is performed upon receipt of the API request and the results will be returned asynchronously. The API results are available for 24 hours from the time the request was ingested, and is indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
GET http://{cognitive-services-account-endpoint}/healthinsights/oncophenotype/jo
} ```
-More information on the [response information can be found here](../response-info.md)
+You can also find a full view of the [response parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/get-job).
+ ## Request validation
azure-health-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/overview.md
Azure AI Health Insights is a Cognitive Service that provides prebuilt models th
## Available models
-There are currently two models available in Azure AI Health Insights:
+There are currently three models available in Azure AI Health Insights:
The [Trial Matcher](./trial-matcher/overview.md) model receives patients' data and clinical trials protocols, and provides relevant clinical trials based on eligibility criteria. The [Onco-Phenotype](./oncophenotype/overview.md) receives clinical records of oncology patients and outputs cancer staging, such as **clinical stage TNM** categories and **pathologic stage TNM categories** as well as **tumor site** and **histology**.
+The [Radiology Insights](./radiology-insights/overview.md) model receives patients' radiology report and provides quality checks with feedback on errors and mismatches to ensure critical findings are surfaced and presented using the full context of a radiology report. In addition, follow-up recommendations and clinical findings with measurements documented by the radiologist are flagged.
## Architecture ![Diagram that shows Azure AI Health Insights architecture.](media/architecture.png)
+ [ ![Diagram that shows Azure AI Health Insights architecture.](media/architecture.png)](media/architecture.png#lightbox)
-Azure AI Health Insights service receives patient data through multiple input channels. This can be unstructured healthcare data, FHIR resources or specific JSON format data. This in combination with the correct model configuration, such as ```includeEvidence```.
-With these input channels and configuration, the service can run the data through several health insights AI models, such as Trial Matcher or Onco-Phenotype.
+The Azure AI Health Insights service receives patient data in different modalities, such as unstructured healthcare data, FHIR resources, or specific JSON format data. In addition, the service receives a model configuration, such as the ```includeEvidence``` parameter.
+With this input patient data and configuration, the service runs the data through the selected Health Insights AI model, such as Trial Matcher, Onco-Phenotype, or Radiology Insights.
## Next steps
Review the following information to learn how to deploy Azure AI Health Insights
> [Onco-Phenotype](oncophenotype/overview.md) >[!div class="nextstepaction"]
-> [Trial Matcher](trial-matcher//overview.md)
+> [Trial Matcher](trial-matcher//overview.md)
+
+>[!div class="nextstepaction"]
+> [Radiology Insights](radiology-insights//overview.md)
azure-health-insights Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/faq.md
+
+ Title: Radiology Insights frequently asked questions
+
+description: Radiology Insights frequently asked questions
+++++ Last updated : 12/12/2023++
+# Radiology Insights Frequently Asked Questions
+
+- Does the Radiology Insights service take into account specific formatting like bold and italic?
+
+ Radiology Insights expects plain text; bolding and other formatting aren't taken into account.
++
+- What happens when you process a document with non-radiology content?
+
+ The Radiology Insights service processes any document as a radiology document.
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/get-started.md
+
+ Title: Use Radiology Insights (Preview)
+
+description: This article describes how to use the Radiology Insights model (Preview)
+++++ Last updated : 12/06/2023++++
+# Quickstart: Use the Radiology Insights (Preview)
+
+This quickstart provides an overview on how to use the Radiology Insights (Preview).
+
+## Prerequisites
+To use the Radiology Insights (Preview) model, you must have an Azure AI services account created.
+
+If you have no Azure AI services account, see [Deploy Azure AI Health Insights using the Azure portal.](../deploy-portal.md)
+
+Once deployment is complete, you use the Azure portal to navigate to the newly created Azure AI services account to see the details, including your Service URL.
+The Service URL to access your service is: `https://YOUR-NAME.cognitiveservices.azure.com`.
+
+## Example request and results
+
+To send an API request, you need your Azure AI services account endpoint and key.
+
+You can find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/create-job).
+
+[ ![Screenshot of the Keys and Endpoints for the Radiology Insights.](../media/keys-and-endpoints.png)](../media/keys-and-endpoints.png#lightbox)
+
+> [!IMPORTANT]
+> Prediction is performed upon receipt of the API request and the results will be returned asynchronously. The API results are available for 24 hours from the time the request was ingested, and is indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
+
+## Example request
+
+### Starting with a request that contains a case
+
+You can use the data from this example, to test your first request to the Radiology Insights model.
+
+```url
+POST
+http://{cognitive-services-account-endpoint}/health-insights/radiology-insights/jobs?api-version=2023-09-01-preview
+Content-Type: application/json
+Ocp-Apim-Subscription-Key: {cognitive-services-account-key}
+```
+```json
+{
+ "configuration" : {
+ "inferenceOptions" : {
+ "followupRecommendationOptions" : {
+ "includeRecommendationsWithNoSpecifiedModality" : false,
+ "includeRecommendationsInReferences" : false,
+ "provideFocusedSentenceEvidence" : false
+ },
+ "findingOptions" : {
+ "provideFocusedSentenceEvidence" : false
+ }
+ },
+ "inferenceTypes" : [ "lateralityDiscrepancy" ],
+ "locale" : "en-US",
+ "verbose" : false,
+ "includeEvidence" : false
+ },
+ "patients" : [ {
+ "id" : "11111",
+ "info" : {
+ "sex" : "female",
+ "birthDate" : "1986-07-01T21:00:00+00:00",
+ "clinicalInfo" : [ {
+ "resourceType" : "Observation",
+ "status" : "unknown",
+ "code" : {
+ "coding" : [ {
+ "system" : "http://www.nlm.nih.gov/research/umls",
+ "code" : "C0018802",
+ "display" : "MalignantNeoplasms"
+ } ]
+ },
+ "valueBoolean" : "true"
+ } ]
+ },
+ "encounters" : [ {
+ "id" : "encounterid1",
+ "period" : {
+ "start" : "2021-08-28T00:00:00",
+ "end" : "2021-08-28T00:00:00"
+ },
+ "class" : "inpatient"
+ } ],
+ "patientDocuments" : [ {
+ "type" : "note",
+ "clinicalType" : "radiologyReport",
+ "id" : "docid1",
+ "language" : "en",
+ "authors" : [ {
+ "id" : "authorid1",
+ "name" : "authorname1"
+ } ],
+ "specialtyType" : "radiology",
+ "createdDateTime" : "2021-8-28T00:00:00",
+ "administrativeMetadata" : {
+ "orderedProcedures" : [ {
+ "code" : {
+ "coding" : [ {
+ "system" : "Https://loinc.org",
+ "code" : "26688-1",
+ "display" : "US BREAST - LEFT LIMITED"
+ } ]
+ },
+ "description" : "US BREAST - LEFT LIMITED"
+ } ],
+ "encounterId" : "encounterid1"
+ },
+ "content" : {
+ "sourceType" : "inline",
+ "value" : "Exam: US LT BREAST TARGETED\r\n\r\nTechnique: Targeted imaging of the right breast is performed.\r\n\r\nFindings:\r\n\r\nTargeted imaging of the left breast is performed from the 6:00 to the 9:00 position. \r\n\r\nAt the 6:00 position, 5 cm from the nipple, there is a 3 x 2 x 4 mm minimally hypoechoic mass with a peripheral calcification. This may correspond to the mammographic finding. No other cystic or solid masses visualized.\r\n"
+ }
+ } ]
+ } ]
+}
+```
+
+You can also find a full view of the [request parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/create-job).
+
+### Evaluating a response that contains a case
+
+You get the status of the job by sending a request to the Radiology Insights model by adding the job ID from the initial request in the URL.
+
+Example code snippet:
+
+```url
+GET
+http://{cognitive-services-account-endpoint}/health-insights/radiology-insights/jobs/d48b4f4d-939a-446f-a000-002a80aa58dc?api-version=2023-09-01-preview
+```
+
+```json
+{
+ "result": {
+ "patientResults": [
+ {
+ "patientId": "11111",
+ "inferences": [
+ {
+ "kind": "lateralityDiscrepancy",
+ "lateralityIndication": {
+ "coding": [
+ {
+ "system": "*SNOMED",
+ "code": "24028007",
+ "display": "RIGHT (QUALIFIER VALUE)"
+ }
+ ]
+ },
+ "discrepancyType": "orderLateralityMismatch"
+ }
+ ]
+ }
+ ]
+ },
+ "id": "862768cf-0590-4953-966b-1cc0ef8b8256",
+ "createdDateTime": "2023-12-18T12:25:37.8942771Z",
+ "expirationDateTime": "2023-12-18T12:42:17.8942771Z",
+ "lastUpdateDateTime": "2023-12-18T12:25:49.7221986Z",
+ "status": "succeeded"
+}
+```
+You can find a full view of the [response parameters here](/rest/api/cognitiveservices/healthinsights/onco-phenotype/get-job).
++
+## Data limits
+
+| Limit | Value |
+|--|--|
+| Maximum # patients per request | 1 |
+| Maximum # patientDocuments per request | 1 |
+| Maximum # encounters per request | 1 |
+| Maximum # characters per patient | 50,000 for `data[i].content.value` all combined |
+
+## Request validation
+
+Every request contains required and optional fields that should be provided to the Radiology Insights model. When you're sending data to the model, make sure that you take the following properties into account:
+
+Within a request:
+- patients should be set
+- patients should contain one entry
+- ID in patients entry should be set
+
+Within configuration:
+If set, configuration locale should be one of the following values (case-insensitive):
+- en-CA
+- en-US
+- en-AU
+- en-DE
+- en-IE
+- en-NZ
+- en-GB
++
+Within patients:
+- should contain one patientDocument entry
+- ID in patientDocument should be set
+- if encounters and/or info are used, ID should be set
++
+For the patientDocuments within a patient:
+- createdDateTime (serviceDate) should be set
+- Patient Document language should be EN (case-insensitive)
+- documentType should be set to Note
+- Patient Document clinicalType should be set to radiology report or pathology report
+- Patient Document specialtyType should be radiology or pathology
+- If set, orderedProcedures in administrativeMetadata should contain code -with code and display- and description
+- Document content shouldn't be blank/empty/null
++
+```json
+"patientDocuments" : [ {
+ "type" : "note",
+ "clinicalType" : "radiologyReport",
+ "id" : "docid1",
+ "language" : "en",
+ "authors" : [ {
+ "id" : "authorid1",
+ "name" : "authorname1"
+ } ],
+ "specialtyType" : "radiology",
+ "createdDateTime" : "2021-8-28T00:00:00",
+ "administrativeMetadata" : {
+ "orderedProcedures" : [ {
+ "code" : {
+ "coding" : [ {
+ "system" : "Https://loinc.org",
+ "code" : "41806-1",
+ "display" : "CT ABDOMEN"
+ } ]
+ },
+ "description" : "CT ABDOMEN"
+ } ],
+ "encounterId" : "encounterid1"
+ },
+ "content" : {
+ "sourceType" : "inline",
+ "value" : "CT ABDOMEN AND PELVIS\n\nProvided history: \n78 years old Female\nAbnormal weight loss\n\nTechnique: Routine protocol helical CT of the abdomen and pelvis were performed after the injection of intravenous nonionic iodinated contrast. Axial, Sagittal and coronal 2-D reformats were obtained. Oral contrast was also administered.\n\nFindings:\nLimited evaluation of the included lung bases demonstrates no evidence of abnormality. \n\nGallbladder is absent. "
+ }
+ } ]
+```
+++
+## Next steps
+
+To get better insights into the request and responses, you can read more on following pages:
+
+>[!div class="nextstepaction"]
+> [Model configuration](model-configuration.md)
+
+>[!div class="nextstepaction"]
+> [Inference information](inferences.md)
azure-health-insights Inferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/inferences.md
+
+ Title: Radiology Insight inference information
+
+description: This article provides RI inference information.
+++++ Last updated : 12/12/2023++++
+# Inference information
+
+This document describes details of all inferences generated by application of RI to a radiology document.
+
+The Radiology Insights feature of Azure Health Insights uses natural language processing techniques to process unstructured medical radiology documents. It adds several types of inferences that help the user to effectively monitor, understand, and improve financial and clinical outcomes in a radiology workflow context.
+
+The types of inferences currently supported by the system are: AgeMismatch, SexMismatch, LateralityDiscrepancy, CompleteOrderDiscrepancy, LimitedOrderDiscrepancy, Finding, CriticalResult, FollowupRecommendation, RadiologyProcedure, Communication.
++
+## List of inferences in scope of RI
+
+- Age Mismatch
+- Laterality Discrepancy
+- Sex Mismatch
+- Complete Order Discrepancy
+- Limited Order Discrepancy
+- Finding
+- Critical Result
+- Follow-up Recommendation
+- Communication
+- Radiology Procedure
+++
+To interact with the Radiology Insights model, you can provide several model configuration parameters that modify the outcome of the responses. One of the configurations is "inferenceTypes", which can be used if only part of the Radiology Insights inferences is required. If this list is omitted or empty, the model returns all the inference types.
+
+```json
+"configuration" : {
+ "inferenceOptions" : {
+ "followupRecommendationOptions" : {
+ "includeRecommendationsWithNoSpecifiedModality" : false,
+ "includeRecommendationsInReferences" : false,
+ "provideFocusedSentenceEvidence" : false
+ },
+ "findingOptions" : {
+ "provideFocusedSentenceEvidence" : false
+ }
+ },
+ "inferenceTypes" : [ "finding", "ageMismatch", "lateralityDiscrepancy", "sexMismatch", "completeOrderDiscrepancy", "limitedOrderDiscrepancy", "criticalResult", "followupRecommendation", "followupCommunication", "radiologyProcedure" ],
+ "locale" : "en-US",
+ "verbose" : false,
+ "includeEvidence" : true
+ }
+```
++
+**Age Mismatch**
+
+An age mismatch occurs when the document gives a certain age for the patient, which differs from the age that is calculated from the birth date in the patient's info and the encounter period in the request.
+- kind: RadiologyInsightsInferenceType.AgeMismatch;
+
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Laterality Discrepancy**
+
+A laterality mismatch is mostly flagged when the orderedProcedure is for a body part with a laterality and the text refers to the opposite laterality.
+Example: "x-ray right foot", "left foot is normal"
+- kind: RadiologyInsightsInferenceType.LateralityDiscrepancy
+- LateralityIndication: FHIR.R4.CodeableConcept
+- DiscrepancyType: LateralityDiscrepancyType
+
+There are three possible discrepancy types:
+- "orderLateralityMismatch" means that the laterality in the text conflicts with the one in the order.
+- "textLateralityContradiction" means that there's a body part with left or right in the finding section, and the same body part occurs with the opposite laterality in the impression section.
+- "textLateralityMissing" means that the laterality mentioned in the order never occurs in the text.
++
+The lateralityIndication is a FHIR.R4.CodeableConcept. There are two possible values (SNOMED codes):
+- 24028007: RIGHT (QUALIFIER VALUE)
+- 7771000: LEFT (QUALIFIER VALUE)
+
+The meaning of this field is as follows:
+- For orderLateralityMismatch: concept in the text that the laterality was flagged for.
+- For textLateralityContradiction: concept in the impression section that the laterality was flagged for.
+- For "textLateralityMissing", this field isn't filled in.
+
+A mismatch with discrepancy type "textLateralityMissing" has no token extensions.
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Sex Mismatch**
+This mismatch occurs when the document gives a different sex for the patient than stated in the patient's info in the request. If the patient info contains no sex, then the mismatch can also be flagged when there's contradictory language about the patient's sex in the text.
+- kind: RadiologyInsightsInferenceType.SexMismatch
+- sexIndication: FHIR.R4.CodeableConcept
+Field "sexIndication" contains one coding with a SNOMED concept for either MALE (FINDING) if the document refers to a male or FEMALE (FINDING) if the document refers to a female:
+- 248153007: MALE (FINDING)
+- 248152002: FEMALE (FINDING)
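+
+A hand-written sketch of a sex mismatch inference built from these fields (the SNOMED system URI is an assumption):
+
+```json
+{
+  "kind" : "sexMismatch",
+  "sexIndication" : {
+    "coding" : [ {
+      "system" : "http://snomed.info/sct",
+      "code" : "248153007",
+      "display" : "MALE (FINDING)"
+    } ]
+  }
+}
+```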
++
+<details><summary>Examples request/response json</summary>
+</details>
++++
+**Complete Order Discrepancy**
+CompleteOrderDiscrepancy is created if there's a complete orderedProcedure - meaning that some body parts need to be mentioned in the text, and possibly also measurements for some of them - and not all the body parts or their measurements are in the text.
+- kind: RadiologyInsightsInferenceType.CompleteOrderDiscrepancy
+- orderType: FHIR.R4.CodeableConcept
+- MissingBodyParts: Array FHIR.R4.CodeableConcept
+- missingBodyPartMeasurements: Array FHIR.R4.CodeableConcept
+
+Field "orderType" contains one Coding, with one of the following LOINC codes:
+- 24558-9: US Abdomen
+- 24869-0: US Pelvis
+- 24531-6: US Retroperitoneum
+- 24601-7: US breast
+
+Fields "missingBodyParts" and/or "missingBodyPartMeasurements" contain body parts (RadLex codes) that are missing or whose measurements are missing. The token extensions refer to body parts or measurements that are present (or words that imply them).
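+
+A hand-written sketch of a complete order discrepancy inference using the fields above; the LOINC system URI is an assumption, and the arrays that would hold RadLex-coded missing body parts and measurements are left empty for brevity:
+
+```json
+{
+  "kind" : "completeOrderDiscrepancy",
+  "orderType" : {
+    "coding" : [ {
+      "system" : "http://loinc.org",
+      "code" : "24558-9",
+      "display" : "US Abdomen"
+    } ]
+  },
+  "missingBodyParts" : [ ],
+  "missingBodyPartMeasurements" : [ ]
+}
+```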
+
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+
+**Limited Order Discrepancy**
+
+This inference is created if there's a limited order, meaning that not all body parts and measurements for a corresponding complete order should be in the text.
+- kind: RadiologyInsightsInferenceType.LimitedOrderDiscrepancy
+- orderType: FHIR.R4.CodeableConcept
+- PresentBodyParts: Array FHIR.R4.CodeableConcept
+- PresentBodyPartMeasurements: Array FHIR.R4.CodeableConcept
+
+Field "orderType" contains one Coding, with one of the following LOINC codes:
+- 24558-9: US Abdomen
+- 24869-0: US Pelvis
+- 24531-6: US Retroperitoneum
+- 24601-7: US breast
+
+Fields "presentBodyParts" and/or "presentBodyPartMeasurements" contain body parts (RadLex codes) that are present or whose measurements are present. The token extensions refer to body parts or measurements that are present (or words that imply them).
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Finding**
+
+This inference is created for a medical problem (for example, "acute infection of the lungs") or for a characteristic or a nonpathologic finding of a body part (for example, "stomach normal").
+- kind: RadiologyInsightsInferenceType.finding
+- finding: FHIR.R4.Observation
+
+Finding: Section and ci_sentence
+Next to the token extensions, there can be an extension with url "section". This extension has an inner extension with a display name that describes the section. The inner extension can also have a LOINC code.
+There can also be an extension with url "ci_sentence". This extension refers to the sentence containing the first token of the clinical indicator (that is, the medical problem), if any. The generation of such a sentence is switchable.
+
+Finding: fields within field "finding"
+List of fields within field "finding", except "component":
+- status: is always set to "unknown"
+- resourceType: is always set to "Observation"
+- interpretation: contains a sublist of the following SNOMED codes:
+- 7147002: NEW (QUALIFIER VALUE)
+- 36692007: KNOWN (QUALIFIER VALUE)
+- 260413007: NONE (QUALIFIER VALUE)
+- 260385009: NEGATIVE (QUALIFIER VALUE)
+- 723506003: RESOLVED (QUALIFIER VALUE)
+- 64957009: UNCERTAIN (QUALIFIER VALUE)
+- 385434005: IMPROBABLE DIAGNOSIS (CONTEXTUAL QUALIFIER) (QUALIFIER VALUE)
+- 60022001: POSSIBLE DIAGNOSIS (CONTEXTUAL QUALIFIER) (QUALIFIER VALUE)
+- 2931005: PROBABLE DIAGNOSIS (CONTEXTUAL QUALIFIER) (QUALIFIER VALUE)
+- 15841000000104: CANNOT BE EXCLUDED (QUALIFIER VALUE)
+- 260905004: CONDITION (ATTRIBUTE)
+- 441889009: DENIED (QUALIFIER VALUE)
+- 722291000000108: HISTORY (QUALIFIER VALUE)
+- 6493001: RECENT (QUALIFIER VALUE)
+- 2667000: ABSENT (QUALIFIER VALUE)
+- 17621005: NORMAL (QUALIFIER VALUE)
+- 263730007: CONTINUAL (QUALIFIER VALUE)
+
+In this list, the string before the colon is the code, and the string after the colon is the display name.
+If the value is "NONE (QUALIFIER VALUE)", the finding is absent, as in, for example, "no sepsis".
+category: if filled, this field contains an array with one element. It contains one of the following SNOMED concepts:
+- 439401001: DIAGNOSIS (OBSERVABLE ENTITY)
+- 404684003: CLINICAL FINDING (FINDING)
+- 162432007: SYMPTOM: GENERALIZED (FINDING)
+- 246501002: TECHNIQUE (ATTRIBUTE)
+- 91722005: PHYSICAL ANATOMICAL ENTITY (BODY STRUCTURE)
+
+code:
+- SNOMED code 404684003: CLINICAL FINDING (FINDING) (meaning that the finding has a clinical indicator)
+or
+- SNOMED code 123037004: BODY STRUCTURE (BODY STRUCTURE) (no clinical indicator.)
+
+Finding: field ΓÇ£componentΓÇ¥
+Much relevant information is in the components. The component's "code" field contains one CodeableConcept with one SNOMED code.
++
+Component description:
+(some of the components are optional)
+
+Finding: component ΓÇ£subject of informationΓÇ¥
+This component has SNOMED code 131195008: SUBJECT OF INFORMATION (ATTRIBUTE). It also has the "valueCodeableConcept" field filled. The value is a SNOMED code describing the medical problem that the finding pertains to.
+At least one "subject of information" component is present if and only if the "finding.code" field has 404684003: CLINICAL FINDING (FINDING). There can be several "subject of information" components, with different concepts in the "valueCodeableConcept" field.
+
+Finding: component ΓÇ£anatomyΓÇ¥
+Zero or more components with SNOMED code "722871000000108: ANATOMY (QUALIFIER VALUE)". This component has field "valueCodeableConcept" filled with a SNOMED or RadLex code. For example, for "lung infection" this component contains a code for the lungs.
+
+Finding: component ΓÇ£regionΓÇ¥
+Zero or more components with SNOMED code 45851105: REGION (ATTRIBUTE). Like anatomy, this component has field "valueCodeableConcept" filled with a SNOMED or RadLex code. Such a concept refers to the body region of the anatomy. For example, if the anatomy is a code for the vagina, the region may be a code for the female reproductive system.
+
+Finding: component ΓÇ£lateralityΓÇ¥
+Zero or more components with code 45651917: LATERALITY (ATTRIBUTE). Each has field "valueCodeableConcept" set to a SNOMED concept pertaining to the laterality of the finding. For example, this component is filled for a finding pertaining to the right arm.
+
+Finding: component ΓÇ£change valuesΓÇ¥
+Zero or more components with code 288533004: CHANGE VALUES (QUALIFIER VALUE). Each has field "valueCodeableConcept" set to a SNOMED concept pertaining to a size change in the finding (for example, a nodule that is growing or decreasing).
+
+Finding: component ΓÇ£percentageΓÇ¥
+At most one component with code 45606679: PERCENT (PROPERTY) (QUALIFIER VALUE). It has field "valueString" set with either a value or a range consisting of a lower and upper value, separated by "-".
+
+Finding: component ΓÇ£severityΓÇ¥
+At most one component with code 272141005: SEVERITIES (QUALIFIER VALUE), indicating how severe the medical problem is. It has field "valueCodeableConcept" set with a SNOMED code from the following list:
+- 255604002: MILD (QUALIFIER VALUE)
+- 6736007: MODERATE (SEVERITY MODIFIER) (QUALIFIER VALUE)
+- 24484000: SEVERE (SEVERITY MODIFIER) (QUALIFIER VALUE)
+- 371923003: MILD TO MODERATE (QUALIFIER VALUE)
+- 371924009: MODERATE TO SEVERE (QUALIFIER VALUE)
+
+Finding: component ΓÇ£chronicityΓÇ¥
+At most one component with code 246452003: CHRONICITY (ATTRIBUTE), indicating whether the medical problem is chronic or acute. It has field "valueCodeableConcept" set with a SNOMED code from the following list:
+- 255363002: SUDDEN (QUALIFIER VALUE)
+- 90734009: CHRONIC (QUALIFIER VALUE)
+- 19939008: SUBACUTE (QUALIFIER VALUE)
+- 255212004: ACUTE-ON-CHRONIC (QUALIFIER VALUE)
+
+Finding: component ΓÇ£causeΓÇ¥
+At most one component with code 135650694: CAUSES OF HARM (QUALIFIER VALUE), indicating the cause of the medical problem. It has field "valueString" set to the strings of one or more tokens from the text, separated by ";;".
+
+Finding: component ΓÇ£qualifier valueΓÇ¥
+Zero or more components with code 362981000: QUALIFIER VALUE (QUALIFIER VALUE). This component refers to a feature of the medical problem.
+Every component has either:
+- Field "valueString" set with token strings from the text, separated by ";;"
+- Or field "valueCodeableConcept" set to a SNOMED code
+- Or no field set (in which case the meaning can be retrieved from the token extensions; this is rare)
+
+Finding: component ΓÇ£multipleΓÇ¥
+Exactly one component with code 46150521: MULTIPLE (QUALIFIER VALUE). It has field "valueBoolean" set to true or false. This component indicates the difference between, for example, one nodule (multiple is false) or several nodules (multiple is true). This component has no token extensions.
+
+Finding: component ΓÇ£sizeΓÇ¥
+Zero or more components with code 246115007: SIZE (ATTRIBUTE). Even if there's just one size for a finding, there are several components if the size has two or three dimensions, for example, "2.1 x 3.3 cm" or "1.2 x 2.2 x 1.5 cm". There's a size component for every dimension.
+Every component has field "interpretation" set to either SNOMED code 15240007: CURRENT or 9130008: PREVIOUS, depending on whether the size was measured during this visit or in the past.
+Every component has either field "valueQuantity" or "valueRange" set.
+If "valueQuantity" is set, then "valueQuantity.value" is always set. In most cases, "valueQuantity.unit" is set. It's possible that "valueQuantity.comparator" is also set, to either ">", "<", ">=", or "<=". For example, the component is set to "<=" for "the tumor is up to 2 cm".
+If "valueRange" is set, then "valueRange.low" and "valueRange.high" are set to quantities with the same data as described in the previous paragraph. This field is set for text like "The tumor is between 2.5 cm and 2.6 cm in size."
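+
+Putting the pieces above together, a much-simplified, hand-written sketch of a finding inference with a single size component could look like this; the SNOMED system URI is an assumption, and a real response contains more components plus token extensions:
+
+```json
+{
+  "kind" : "finding",
+  "finding" : {
+    "resourceType" : "Observation",
+    "status" : "unknown",
+    "code" : {
+      "coding" : [ {
+        "system" : "http://snomed.info/sct",
+        "code" : "123037004",
+        "display" : "BODY STRUCTURE (BODY STRUCTURE)"
+      } ]
+    },
+    "component" : [ {
+      "code" : {
+        "coding" : [ {
+          "system" : "http://snomed.info/sct",
+          "code" : "246115007",
+          "display" : "SIZE (ATTRIBUTE)"
+        } ]
+      },
+      "valueQuantity" : {
+        "value" : 2.1,
+        "unit" : "cm"
+      }
+    } ]
+  }
+}
+```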
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Critical Result**
+This inference is made for a new medical problem that requires attention within a specific time frame, possibly urgently.
+- kind: RadiologyInsightsInferenceType.criticalResult
+- result: CriticalResult
+
+Field "result.description" gives a description of the medical problem, for example "MALIGNANCY".
+Field "result.finding", if set, contains the same information as the "finding" field in a finding inference.
+
+Next to token extensions, there can be an extension for a section. This field contains the most specific section that the first token of the critical result is in (or to be precise, the first token that is in a section). This section is in the same format as a section for a finding.
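+
+A minimal, hand-written sketch of a critical result inference built from the fields above; "result.finding" and the section extension are omitted here:
+
+```json
+{
+  "kind" : "criticalResult",
+  "result" : {
+    "description" : "MALIGNANCY"
+  }
+}
+```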
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Follow-up Recommendation**
+
+This inference is created when the text recommends a specific medical procedure or follow-up for the patient.
+- kind: RadiologyInsightsInferenceType.FollowupRecommendation
+- effectiveDateTime: utcDateTime
+- effectivePeriod: FHIR.R4.Period
+- Findings: Array RecommendationFinding
+- isConditional: boolean
+- isOption: boolean
+- isGuideline: boolean
+- isHedging: boolean
+
+- recommendedProcedure: ProcedureRecommendation
+- Follow-up recommendation: sentences
+Next to the token extensions, there can be an extension containing sentences. This behavior is switchable.
+- Follow-up recommendation: boolean fields
+"isHedging" means that the recommendation is uncertain, for example, "a follow-up could be done". "isConditional" is for input like "If the patient continues having pain, an MRI should be performed."
+"isOption" is also for conditional input.
+"isGuideline" means that the recommendation is in a general guideline like the following:
+
+BI-RADS CATEGORIES:
+- (0) Incomplete: Needs more imaging evaluation
+- (1) Negative
+- (2) Benign
+- (3) Probably benign - Short interval follow-up suggested
+- (4) Suspicious abnormality - Biopsy should be considered
+- (5) Highly suggestive of malignancy - Appropriate action should be taken.
+- (6) Known biopsy-proven malignancy
+
+- Follow-up recommendation: effectiveDateTime and effectivePeriod
+Field "effectiveDateTime" is set when the procedure needs to be done (is recommended) at a specific point in time, for example, "next Wednesday". Field "effectivePeriod" is set if a specific period is mentioned, with a start and end datetime. For example, for "within six months", the start datetime is the date of service, and the end datetime is the day six months after that.
+- Follow-up recommendation: findings
+If set, field "findings" contains one or more findings that have to do with the recommendation. For example, a leg scan (procedure) can be recommended because of leg pain (finding).
+Every array element of field "findings" is a RecommendationFinding. Field RecommendationFinding.finding has the same information as a FindingInference.finding field.
+For field "RecommendationFinding.RecommendationFindingStatus", see the OpenAPI specification for the possible values.
+Field "RecommendationFinding.criticalFinding" is set if a critical result is associated with the finding. It then contains the same information as described for a critical result inference.
+- Follow-up recommendation: recommended procedure
+Field "recommendedProcedure" is either a GenericProcedureRecommendation or an ImagingProcedureRecommendation. (Type "procedureRecommendation" is a supertype for these two types.)
+A GenericProcedureRecommendation has the following:
+- Field "kind" has value "genericProcedureRecommendation"
+- Field "description" has either value "MANAGEMENT PROCEDURE (PROCEDURE)" or "CONSULTATION (PROCEDURE)"
+- Field "code" only contains an extension with tokens
+
+An ImagingProcedureRecommendation has the following:
+- Field "kind" has value "imagingProcedureRecommendation"
+- Field "imagingProcedures" contains an array with one element of type ImagingProcedure.
+
+This type has the following fields, the first two of which are always filled:
+- "modality": a CodeableConcept containing at most one coding with a SNOMED code.
+- "anatomy": a CodeableConcept containing at most one coding with a SNOMED code.
+- "laterality": a CodeableConcept containing at most one coding with a SNOMED code.
+- "contrast": not set.
+- "view": not set.
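+
+A hand-written sketch of a follow-up recommendation inference with an imaging procedure recommendation; the "coding" arrays are left empty here, but in a real response they contain the SNOMED codings described above:
+
+```json
+{
+  "kind" : "followupRecommendation",
+  "isConditional" : false,
+  "isOption" : false,
+  "isGuideline" : false,
+  "isHedging" : false,
+  "recommendedProcedure" : {
+    "kind" : "imagingProcedureRecommendation",
+    "imagingProcedures" : [ {
+      "modality" : { "coding" : [ ] },
+      "anatomy" : { "coding" : [ ] }
+    } ]
+  }
+}
+```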
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Follow-up Communication**
+
+This inference is created when findings or test results were communicated to a medical professional.
+- kind: RadiologyInsightsInferenceType.FollowupCommunication
+- dateTime: Array utcDateTime
+- recipient: Array MedicalProfessionalType
+- wasAcknowledged: boolean
+
+Field "wasAcknowledged" is set to true if the communication was verbal (nonverbal communication might not have reached the recipient yet and cannot be considered acknowledged). Field "dateTime" is set if the date-time of the communication is known. Field "recipient" is set if the recipient(s) are known. See the OpenAPI spec for its possible values.
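+
+A minimal, hand-written sketch of a follow-up communication inference; the date-time value is made up for illustration, and the "recipient" values (defined in the OpenAPI spec) are omitted:
+
+```json
+{
+  "kind" : "followupCommunication",
+  "dateTime" : [ "2023-10-26T14:30:00Z" ],
+  "recipient" : [ ],
+  "wasAcknowledged" : true
+}
+```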
+
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Radiology Procedure**
+
+This inference is for the ordered radiology procedure(s).
+- kind: RadiologyInsightsInferenceType.RadiologyProcedure
+- procedureCodes: Array FHIR.R4.CodeableConcept
+- imagingProcedures: Array ImagingProcedure
+- orderedProcedure: OrderedProcedure
+
+Field "imagingProcedures" contains one or more instances of an imaging procedure, as documented for the follow-up recommendations.
+Field "procedureCodes", if set, contains LOINC codes.
+Field "orderedProcedure" contains the description(s) and the code(s) of the ordered procedure(s) as given by the client. The descriptions are in field "orderedProcedure.description", separated by ";;". The codes are in "orderedProcedure.code.coding". In every coding in the array, only field "coding" is set.
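+
+A hand-written sketch of a radiology procedure inference; the LOINC system URI and the ordered procedure description are illustrative assumptions, and the "coding" arrays of the imaging procedure are left empty:
+
+```json
+{
+  "kind" : "radiologyProcedure",
+  "procedureCodes" : [ {
+    "coding" : [ {
+      "system" : "http://loinc.org",
+      "code" : "24558-9",
+      "display" : "US Abdomen"
+    } ]
+  } ],
+  "imagingProcedures" : [ {
+    "modality" : { "coding" : [ ] },
+    "anatomy" : { "coding" : [ ] }
+  } ],
+  "orderedProcedure" : {
+    "description" : "US ABDOMEN",
+    "code" : { "coding" : [ ] }
+  }
+}
+```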
+
+
+<details><summary>Examples request/response json</summary>
+</details>
+++
+## Next steps
+
+To get better insights into the request and responses, read more on the following page:
+
+>[!div class="nextstepaction"]
+> [Model configuration](model-configuration.md)
azure-health-insights Model Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/model-configuration.md
+
+ Title: Radiology Insights model configuration
+
+description: This article provides Radiology Insights model configuration information.
+++++ Last updated : 12/06/2023+++
+# Radiology Insights model configuration
+
+To interact with the Radiology Insights model, you can provide several model configuration parameters that modify the outcome of the responses.
+
+> [!IMPORTANT]
+> Model configuration is applied to ALL the patients within a request.
+
+```json
+ "configuration": {
+ "inferenceOptions": {
+ "followupRecommendationOptions": {
+ "includeRecommendationsWithNoSpecifiedModality": false,
+ "includeRecommendationsInReferences": false,
+ "provideFocusedSentenceEvidence": false
+ },
+ "findingOptions": {
+ "provideFocusedSentenceEvidence": false
+ }
+ },
+ "locale": "en-US",
+ "verbose": false,
+ "includeEvidence": true
+ }
+```
+
+## Case finding
+
+Through the model configuration, the API allows you to seek evidence from the provided clinical documents as part of the inferences.
+
+**Include Evidence** |**Behavior**
+- |--
+true | Evidence is returned as part of the inferences
+false | No Evidence is returned
+
+## Inference Options
+
+**FindingOptions**
+- provideFocusedSentenceEvidence
+  - type: boolean
+  - description: Provide a single focused sentence as evidence for the finding, default is false.
+
+**FollowupRecommendationOptions**
+- includeRecommendationsWithNoSpecifiedModality
+ - type: boolean
+ - description: Include/Exclude follow-up recommendations with no specific radiologic modality, default is false.
++
+- includeRecommendationsInReferences
+ - type: boolean
+ - description: Include/Exclude follow-up recommendations in references to a guideline or article, default is false.
+
+- provideFocusedSentenceEvidence
+ - type: boolean
+ - description: Provide a single focused sentence as evidence for the recommendation, default is false.
+
+When includeEvidence is false, no evidence is returned.
+
+This setting overrules includeRecommendationsWithNoSpecifiedModality and provideFocusedSentenceEvidence, and no evidence is shown.
+
+When includeEvidence is true, the values of the two other configurations determine whether the evidence of the inference or a single focused sentence is given as evidence.
+
+## Examples
++
+**Example 1**
+
+CDARecommendation_GuidelineFalseUnspecTrueLimited
+
+The includeRecommendationsWithNoSpecifiedModality is true, includeRecommendationsInReferences is false, provideFocusedSentenceEvidence for recommendations is true and includeEvidence is true.
+
+As a result, the model includes evidence for all inferences.
+- The model checks for follow-up recommendations with a specified modality.
+- The model checks for follow-up recommendations with no specific radiologic modality.
+- The model provides a single focused sentence as evidence for the recommendation.
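+
+A sketch of the configuration described in this example; "findingOptions" isn't mentioned in the description and is shown here with its default value:
+
+```json
+"configuration" : {
+  "inferenceOptions" : {
+    "followupRecommendationOptions" : {
+      "includeRecommendationsWithNoSpecifiedModality" : true,
+      "includeRecommendationsInReferences" : false,
+      "provideFocusedSentenceEvidence" : true
+    },
+    "findingOptions" : {
+      "provideFocusedSentenceEvidence" : false
+    }
+  },
+  "locale" : "en-US",
+  "verbose" : false,
+  "includeEvidence" : true
+}
+```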
+
+<details><summary>Examples request/response json</summary>
+</details>
+++
+**Example 2**
+
+CDARecommendation_GuidelineTrueUnspecFalseLimited
+
+The includeRecommendationsWithNoSpecifiedModality is false, includeRecommendationsInReferences is true, provideFocusedSentenceEvidence for findings is true and includeEvidence is true.
+
+As a result, the model includes evidence for all inferences.
+- The model checks for follow-up recommendations with a specified modality.
+- The model checks for a recommendation in a guideline.
+- The model provides a single focused sentence as evidence for the finding.
++
+<details><summary>Examples request/response json</summary>
+</details>
+++
+## Next steps
+
+Refer to the following page to get better insights into the request and responses:
+
+>[!div class="nextstepaction"]
+> [Inference information](inferences.md)
azure-health-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/overview.md
+
+ Title: What is Radiology Insights (Preview)
+
+description: Enable healthcare organizations to process radiology documents and add various inferences.
+++++ Last updated : 12/6/2023++++
+# What is Radiology Insights (Preview)?
+
+Radiology Insights is a model that aims to provide quality checks as feedback on errors and inconsistencies (mismatches).
+The model ensures that critical findings are identified and communicated using the full context of the report. Follow-up recommendations and clinical findings with measurements (sizes) documented by the radiologist are also identified.
+
+> [!IMPORTANT]
+> The Radiology Insights model is a capability provided "AS IS" and "WITH ALL FAULTS". The Radiology Insights model isn't intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability isn't designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of the Radiology Insights model. The customer is responsible for ensuring compliance with those license terms, including any geographic or other applicable restrictions.
+
+## Radiology Insights features
+
+To remain competitive and successful, healthcare organizations and radiology teams must have visibility into trends and outcomes. The focus is on radiology operational excellence, performance, and quality.
+The Radiology Insights model extracts valuable information from radiology documents for a radiologist.
+
+**Identifying Mismatches**: A radiologist is provided with possible mismatches. These are identified by the model by comparing what the radiologist has documented in the radiology report and the information that was present in the metadata of the report.
+
+Mismatches can be identified for sex, age and body site laterality. Mismatches identify potential discrepancies between the dictated text and the provided metadata. They also identify potential inconsistencies within the dictated/written text. Inconsistencies are limited to gender, age, laterality and type of imaging.
+
+This enables the radiologist to rectify any potential inconsistencies during reporting. The system isn't aware of the image the radiologist is reporting on.
+
+This model does not provide any clinical judgment of the radiologist's interpretation of the image. The radiologist is responsible for the diagnosis and treatment of the patient and the correct documentation thereof.
+
+**Providing Clinical Findings**: The model extracts as structured data two types of clinical findings: critical findings and actionable findings. Only clinical findings that are documented in the report are extracted. Clinical findings produced by the model aren't deduced from pieces of information in the report nor from the image. These findings merely serve as a potential reminder for the radiologist to communicate with the provider.
+
+The model produces two categories of clinical findings, Actionable Finding and Critical Result, and are based on the clinical finding, explicitly stated in the report, and criteria formulated by ACR (American College of Radiology). The model extracts all findings explicitly documented by the radiologist. The extracted findings may be used to alert a radiologist of possible clinical findings that need to be clearly communicated and acted on in a timely fashion by a healthcare professional. Customers may also utilize the extracted findings to populate downstream or related systems (such as EHRs or autoschedule functions).
+
+**Communicating Follow-up Recommendations**: A radiologist uncovers findings for which in some cases a follow-up is recommended. The documented recommendation is extracted and normalized by the model. It can be used for communication to a healthcare professional (physician).
+Follow-up recommendations aren't generated, deduced or proposed. The model merely extracts follow-up recommendation statements documented explicitly by the radiologist. Follow-up recommendations are normalized by coding to SNOMED.
+
+**Reporting Measurements**: A radiologist documents clinical findings with measurements. The model extracts clinically relevant information pertaining to the finding. The model extracts measurements the radiologist explicitly stated in the report.
+
+The model is simply searching for measurements reviewed by the radiologist. This info is extracted from the relevant text-based record and structured. The extracted and structured measurement data may be used to identify trends in measurements for a particular patient over time. Alternatively, a customer could search a set of patients based on the measurement data extracted by the model.
+
+**Reports on Productivity and Key Quality Metrics**: The Radiology Insights model extracted information can be used to generate reports and support analytics for a team of radiologists.
+
+Based on the extracted information, dashboards and retrospective analyses can provide insight on productivity and key quality metrics.
+The insights can be used to guide improvement efforts, minimize errors, and improve report quality and consistency.
+
+The RI model isn't creating dashboards but delivers extracted information. The information can be aggregated by a user for research and administrative purposes. The model is stateless.
+
+## Language support
+
+The service currently supports the English language.
+
+## Limits and quotas
+
+For the Public Preview, you can select the Free F0 SKU. The official pricing will be released after Public Preview.
+
+## Next steps
+
+Get started using the Radiology Insights model:
+
+>[!div class="nextstepaction"]
+> [Deploy the service via the portal](../deploy-portal.md)
azure-health-insights Support And Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/support-and-help.md
+
+ Title: Radiology Insights support and help options
+
+description: How to obtain help and support for questions and problems when you create applications that use the Radiology Insights model
+++++ Last updated : 12/12/2023++++
+# Radiology Insights model support and help options
+
+Are you just starting to explore the functionality of the Radiology Insights model? Perhaps you're implementing a new feature in your application. Or after using the service, do you have suggestions on how to improve it? Here are options for where you can get support, stay up-to-date, give feedback, and report bugs for Azure AI Health Insights.
+
+## Create an Azure support request
+
+Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal.
+
+* [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)
+* [Azure portal for the United States government](https://portal.azure.us)
++
+## Post a question on Microsoft Q&A
+
+For quick and reliable answers on your technical product questions from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our expert community, engage with us on [Microsoft Q&A](/answers/products/azure?product=all), Azure's preferred destination for community support.
azure-health-insights Transparency Note https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/radiology-insights/transparency-note.md
+
+ Title: Transparency Note for Radiology Insights
+description: Transparency Note for Radiology Insights
++++ Last updated : 06/12/2023+++
+# Transparency Note for Radiology Insights (Preview)
+
+## What is a Transparency Note?
+
+An AI system includes technology and the people who will use it, the people who are affected by it, and the environment in which it's deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, what its capabilities and limitations are, and how to achieve the best performance. Microsoft's Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system.
+Microsoft’s Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the Microsoft AI principles.
+
+## The basics of Radiology Insights
+
+### Introduction
+
+Radiology Insights (RI) is a model that aims to provide quality checks as feedback on errors and inconsistencies (mismatches) and helps identify and communicate critical findings using the full context of the report. Follow-up recommendations and clinical findings with measurements (sizes) documented by the radiologist are also identified.
+
+- Radiology Insights is a built-in AI software model, delivered within the Project Health Insights Azure AI service.
+- Radiology Insights doesn't provide external references. As a Health Insights model, Radiology Insights provides inferences for the provided input, to be used as a reference for a deeper understanding of the model's conclusions.
+
+The Radiology Insights feature of Azure Health Insights uses natural language processing techniques to process unstructured medical radiology documents. It adds several types of inferences that help the user to effectively monitor, understand, and improve financial and clinical outcomes in a radiology workflow context.
+The types of inferences currently supported by the system are: AgeMismatch, SexMismatch, LateralityDiscrepancy, CompleteOrderDiscrepancy, LimitedOrderDiscrepancy, Finding, CriticalResult, FollowupRecommendation, RadiologyProcedure, Communication.
+These inferences can be used to support clinical analytics or to provide real-time assistance during the document creation process.
+
+- RI enables you to slice and dice the radiology workflow data and create insights that matter, leading to actionable information.
+
+- RI enables you to analyze the past and improve the future by generating meaningful insights that reveal strengths and pinpoint areas needing intervention.
+
+- RI enables you to create quality checks and automated, in-line alerts for mismatches and possible critical findings.
+
+- RI improves follow-up recommendation consistency with AI-driven, automated guidance support and quality checks that drive evidence-based clinical decisions.
+
+Radiology Insights can receive unstructured text in English as part of its current offering.
+
+Radiology Insights uses TA4H for NER, for the extraction of relations between identified entities, to surface assertions such as negation and conditionality, and to link detected entities to common vocabularies.
+
+### Key terms
+
+|Term | Definition |
+|--||
+|Document| The input of the RI model is a Radiology Clinical document, which next to the narrative information also contains meta-data containing patient info and procedure order specifications.|
+|Inference| The output of the RI model is a list of inferences or annotations added to the document processed.|
+|AgeMismatch| Annotation triggered when there's a discrepancy between age information in meta-data and narrative text.|
+|SexMismatch| Annotation triggered when there's a discrepancy between sex information in meta-data and narrative text (includes patient references, sex specific findings and sex specific body parts).|
+|LateralityDiscrepancy| Annotation triggered when there's a discrepancy between laterality information in meta-data and narrative text or between findings and impression section in report text.
+|CompleteOrderDiscrepancy| Annotation triggered when report text doesn't contain all relevant body parts according to information in the meta-data that a complete study is ordered.
+|LimitedOrderDiscrepancy| Annotation triggered when limited selection of body parts according to the procedure order present in meta-data should be checked, but report text includes all relevant body parts.
+|Finding| Annotation that identifies and highlights an assembly of clinical information pertaining to a clinically relevant notion found in the report text.
+|CriticalResult| Annotation that identifies and highlights findings in report text that should be communicated within a certain time limit according to regulatory compliance.
+|FollowupRecommendation| Annotation that identifies and highlights one or more recommendations in the report text and provides a normalization of each recommendation to a set of structured data fields.
+|RadiologyProcedure| Normalization of procedure order information present in meta-data using Loinc/Radlex codes.
+|Communication| Annotation that identifies and highlights statements in the report text noting that the findings are strictly or nonstrictly communicated with the recipient.
+
+## Capabilities
+
+### System behavior
+
+The Radiology Insights model adds several types of inferences/annotations to the original radiology clinical document. A document can trigger one or more annotations. Several instances of the same annotation in one document are possible.
+
+- AgeMismatch
+- SexMismatch
+- LateralityDiscrepancy
+- CompleteOrderDiscrepancy
+- LimitedOrderDiscrepancy
+- Finding
+- CriticalResult
+- FollowupRecommendation
+- RadiologyProcedure
+- Communication
++
+Example of a Clinical Radiology document with inferences:
+
+[![Screenshot of a radiology document with a Mismatch and Follow-up Recommendation inference.](../media/radiology-insights/radiology-doc-with-inferences.png)](../media/radiology-insights/radiology-doc-with-inferences.png#lightbox)
+
+### Functional description of the inferences in scope and examples
+
+**Age Mismatch**
+
+Age mismatches are identified based on a comparison of the available patient age information within the patient's demographic meta-data and the report text. Conflicting age information is tagged in the text.
+
+**Sex Mismatch**
+
+Sex mismatch identification is based on a comparison of the patient sex information within the patient's demographic meta-data on the one hand, and patient references (female/male, he/she/his/her), gender-specific findings, and gender-specific body parts in the text on the other.
+Opposite gender terms are tagged in the report text.
+
+**Laterality Discrepancy**
+
+A laterality, defined as "Left" (Lt, lft) or "Right" (rt, rght), along with an Anatomy (body parts) in the Procedure Description of the meta-data Procedure Order, is used to create laterality mismatches in the report.
+No mismatches are flagged on past content.
+If only a Laterality and no Anatomy is available in the Procedure Description, all opposite laterality in the text is tagged. For example, "left views" in the Procedure Description lists all "right" words in the report text.
+
+**CompleteOrder Discrepancy**
+
+Completeness mismatches can be made if the ordered procedure is an ultrasound for the ABDOMEN, RETROPERITONEAL, PELVIS, or US BREAST.
+A completeness mismatch is made if either the order is complete and the text isn't, or vice versa.
+
+**LimitedOrder Discrepancy**
+
+Limited-order mismatches can be made if the ordered procedure is an ultrasound for the ABDOMEN, RETROPERITONEAL, PELVIS, or US BREAST.
+A limited-order mismatch is made if the order is limited but the report text covers all body parts (and measurements) of the corresponding complete order.
+
+**Finding**
+
+A Finding is an NLU-based assembly of clinical information pertaining to a clinically relevant notion found in medical records. It's created as such that it's application-independent.
+A Finding inference consists of different fields, all containing pieces to assemble a complete overview of what the Finding is.
+A Finding can consist of the following fields:
+Clinical Indicator, Anatomy, Laterality, Info about Size, Acuity, Severity, Cause, Status, Multiple check, Region, Features, Timing
+
+**Critical Result**
+
+Identifies and highlights potential critical results dictated in a report.
+Identifies and highlights potential ACR Actionable Findings dictated in a report.
+Critical results are only identified in the report text (not in the meta-data).
+The terms are based on Mass Coalition for the Prevention of Medical Errors:
+<http://www.macoalition.org/Initiatives/docs/CTRstarterSet.xls>.
+
+**FollowupRecommendation**
+
+This inference surfaces a potential visit that needs to be scheduled. Each recommendation contains one modality and one body part. In addition, it contains a time, a laterality, one or multiple findings and an indication that a conditional phrase is present (true or false).
+
+**RadiologyProcedure**
+
+Radiology Insights extracts information such as modality, body part, laterality, view, and contrast from the procedure order. Ordered procedures are normalized using LOINC codes from the LOINC/RSNA Radiology Playbook, which is developed and maintained by the LOINC/RadLex Committee:
+<http://playbook.radlex.org/playbook/SearchRadlexAction>.
+
+**Communication**
+
+RI captures language in the text, typically a verb indicating communication in combination with a proper name (typically a first and last name) or a reference to a clinician or nurse. There can be several such recipients.
+Communication to nonmedical staff (secretaries, clerks, and so on) isn't tagged as communication unless the proper name of this person is mentioned.
+Language identified as past communication (for example, in history sections) or future communication (for example, "will be communicated") isn't tagged as communication.
++
+## Use cases
+
+Healthcare organizations and radiology teams must have visibility into trends and outcomes specific to radiology operations and performance, with constant attention to quality.
+The Radiology Insights model extracts valuable information from radiology documents for a radiologist.
+
+The scope of each of these use cases is always the current document the radiologist is dictating. Neither image analysis nor patient record information is involved. The meta-data provides administrative context for the current report and is limited to patient age, patient sex, and the procedure that was ordered (for example, CT of the abdomen, MRI of the brain, and so on).
+
+Microsoft is providing this functionality as an API with the model that allows for the information in scope to be identified or extracted. The customer would incorporate the model into their own or third-party radiology reporting software and would determine the user interface for the information. Customers could be an ISV or a health system developing or modifying radiology reporting software for use within the health system.
+
+Thus, the specific use cases by customers and how the information would be presented or used by a radiologist may vary slightly from that described, but the descriptions illustrate the intended purpose of the API functionality.
+
+**Use Case 1 – Identifying Mismatches**: A radiologist is provided with possible mismatches that are identified by the model between what the radiologist has documented in the radiology report and the information that was present in the meta-data of the report. Mismatches can be identified for sex, age, and body site laterality. Mismatches identify potential discrepancies between the dictated text and the provided meta-data. They also identify potential inconsistencies within the dictated/written text. Inconsistencies are limited to gender, age, laterality, and type of imaging. This is only to allow the radiologist to rectify any potential inconsistencies during reporting. The system isn't aware of the image the radiologist is reporting on. In no way does this model provide any clinical judgment of the radiologist's interpretation of the image. The radiologist is responsible for the diagnosis and treatment of the patient and the correct documentation thereof.
+
+**Use Case 2 – Providing Clinical Findings**: The model extracts as structured data two types of clinical findings: critical findings and actionable findings. Only clinical findings that are explicitly documented in the report by the radiologist are extracted by the model. Clinical findings produced by the model aren't deduced from pieces of information in the report nor from the image. These merely serve as a potential reminder for the radiologist to communicate with the provider.
+The model produces two categories of clinical findings, Actionable Finding and Critical Result, which are based on the clinical finding explicitly stated in the report and criteria formulated by ACR (American College of Radiology). The model will always extract all findings explicitly documented by the radiologist. The extracted findings may be used to alert a radiologist of possible clinical findings that need to be clearly communicated and acted on in a timely fashion by a healthcare professional. Customers may also utilize the extracted findings to populate downstream or related systems (such as EHRs or autoschedule functions).
+
+**Use Case 3 – Communicating Follow-up Recommendations**: A radiologist uncovers findings for which in some cases a follow-up is recommended. The documented recommendation is extracted and normalized by the model for communication to a healthcare professional (physician).
+Follow-up recommendations aren't generated, deduced or proposed. The model merely extracts follow-up recommendation statements documented explicitly by the radiologist. Follow-up recommendations are normalized by coding to SNOMED.
+
+**Use Case 4 – Reporting Measurements**: A radiologist documents clinical findings with measurements. The model extracts clinically relevant information pertaining to the finding. The model extracts measurements the radiologist explicitly stated in the report. The model is searching for measurements that have already been taken and reviewed by the radiologist. It extracts these measurements from the relevant text-based record and structures them. The extracted and structured measurement data may be used to see trends in measurements for a particular patient over time. A customer could search a set of patients based on the measurement data extracted by the model.
+
+**Use Case 5 - Reports on Productivity and Key Quality Metrics**: The information extracted by the Radiology Insights model (in use cases 1 to 4) can be used to generate reports and support analytics for a team of radiologists. Based on the extracted information, dashboards and retrospective analyses can provide updates on productivity and key quality metrics to guide improvement efforts, minimize errors, and improve report quality and consistency.
+The RI model isn't creating dashboards but delivers extracted information, not deduced, that could be aggregated by a user for research and administrative purposes. The model is stateless.
+
+### Considerations when choosing other use cases
+
+Radiology Insights is a valuable tool to extract knowledge from unstructured medical text and support the radiology documentation workflow. However, given the sensitive nature of health-related data, it's important to consider your use cases carefully. In all cases, a human should be making decisions assisted by the information the system returns, and in all cases, you should have a way to review the source data and correct errors. Here are some considerations when choosing a use case:
+
+- Avoid scenarios that use this service as a medical device, to provide clinical support, or as a diagnostic tool to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions without human intervention. A qualified medical professional should always do due diligence, verify source data that might influence patient care decisions and make decisions.
+
+- Avoid scenarios related to automatically granting or denying medical services or health insurance without human intervention. Because decisions that affect coverage levels are impactful, source data should always be verified in these scenarios.
+
+- Avoid scenarios that use personal health information for a purpose not permitted by patient consent or applicable law. Health information has special protections regarding privacy and consent. Make sure that all data you use has patient consent for the way you use the data in your system or you're otherwise compliant with applicable law as it relates to the use of health information.
+
+- Carefully consider using detected inferences to update patient records without human intervention. Make sure that there's always a way to report, trace, and correct any errors to avoid propagating incorrect data to other systems. Ensure that any updates to patient records are reviewed and approved by qualified professionals.
+
+- Carefully consider using detected inferences in patient billing without human intervention. Make sure that providers and patients always have a way to report, trace, and correct data that generates incorrect billing.
+
+- Radiology Insights isn't intended to be used for administrative functions.
+
+### Limitations
+
+The specific characteristics of the input radiology document are crucial to get actionable, accurate output from the RI model. Some of the items playing an important role in this are:
+
+- Languages: Currently RI capabilities are enabled for English text only.
+- Unknown words: radiology documents sometimes contain unknown abbreviations/words, out-of-context homonyms, or spelling mistakes.
+- Input meta-data: for certain types of inferences, RI expects that input information is available in the document or in the meta-data of the document.
+- Templates and formatting: RI is developed using a real-world, representative set of documents, but it's possible that specific use cases and/or document templates can cause challenges for the RI logic to be accurate. As an example, nested tables or complicated structures can cause suboptimal parsing.
+- Vocabulary & descriptions: RI is developed and tested on real-world documents. However, natural language is rich, and descriptions of certain clinical facts can vary over time, possibly impacting the output of the logic.
+
+### System performance
+
+The performance of the system can be assessed by computing statistics based on true positive, true negative, false positive, and false negative instances. To do so, a representative set of documents has to be built and annotated with the expected outcomes. The output of RI can then be compared with the desired output to determine the accuracy numbers.
+
+The main reasons for Radiology Insights to trigger False Positive / False Negative output are:
+
+- Input document not containing all necessary meta information
+- Input document format and formatting (Section headings, Punctuation, ...)
+- Non-English text (partial)
+- Unknown words (abbreviations, misspellings, …)
+- Issues with parsing complex formatting (nested tables, …)
+
+## Evaluation of Radiology Insights
+
+### Evaluation methods
+
+Radiology Insights logic is developed and evaluated using a large set of real-world clinical radiology documents. A training set of 5,000+ documents annotated by human experts is used to implement and refine the logic triggering the RI inferences. Part of this set is randomly sampled from a corpus provided by a US medical center and is focused mostly on adult patients.
+
+The set used provides almost equal representation of US-based male and female patients, and adequate representation of every age group. It should be noted that no further analysis of the training data representativeness (for example, geographic, demographic, or ethnographic representation) is done since the data doesn't include that type of meta-data. The training set and other evaluation sets used are constructed making sure that all types of inferences are present for different types of patient characteristics (age, sex).
+Accuracy or regression of the logic is tested using unit and functional tests covering the complete logic scope. Generalization of RI models is assessed by using left-out sets of documents sharing the same characteristics as the training set.
+
+Targeted minimum performance levels for each inference across the complete population are evaluated, tracked, and reviewed with subject matter experts.
+All underlying core NLP & NLU components are separately checked and reviewed using specific test sets.
+
+### Evaluation results
+
+Evaluation metrics used are precision, recall, and F1 scoring when manual golden truth annotations are present.
+Regression testing is done via discrepancy analysis and human expert feedback cycles.
+It was observed that the inferences and the medical information surfaced do add value in the intended use cases targeted, and have a positive effect on the radiology workflow.
+
+## Evaluating and integrating Radiology Insights for your use
+When you're getting ready to deploy Radiology Insights, the following activities help set you up for success:
+
+- Understand what it can do: Fully assess the capabilities of RI to understand its capabilities and limitations. Understand how it performs in your scenario and context.
+
+- Test with real, diverse data: Understand how RI performs in your scenario by thoroughly testing it by using real-life conditions and data that reflect the diversity in your users, geography, and deployment contexts. Small datasets, synthetic data, and tests that don't reflect your end-to-end scenario are unlikely to sufficiently represent your production performance.
+
+- Respect an individual's right to privacy: Only collect or use data and information from individuals for lawful and justifiable purposes. Use only the data and information that you have consent to use or are legally permitted to use.
+
+- Legal review: Obtain appropriate legal review of your solution, particularly if you use it in sensitive or high-risk applications. Understand what restrictions you might need to work within and any risks that need to be mitigated prior to use. It's your responsibility to mitigate such risks and resolve any issues that might come up.
+
+- System review: If you plan to integrate and responsibly use an AI-powered product or feature into an existing system for software or customer or organizational processes, take time to understand how each part of your system is affected. Consider how your AI solution aligns with Microsoft Responsible AI principles.
+
+- Human in the loop: Keep a human in the loop and include human oversight as a consistent pattern area to explore. This means constant human oversight of the AI-powered product or feature and ensuring humans make any decisions that are based on the model's output. To prevent harm and to manage how the AI model performs, ensure that humans have a way to intervene in the solution in real time.
+
+- Security: Ensure that your solution is secure and that it has adequate controls to preserve the integrity of your content and prevent unauthorized access.
+
+- Customer feedback loop: Provide a feedback channel that users and individuals can use to report issues with the service after deployment. After you deploy an AI-powered product or feature, it requires ongoing monitoring and improvement. Have a plan and be ready to implement feedback and suggestions for improvement.
azure-health-insights Request Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/request-info.md
- Title: Azure AI Health Insights request info
-description: this article describes the required properties to interact with Azure AI Health Insights
----- Previously updated : 02/17/2023---
-# Azure AI Health Insights request info
-
-This page describes the request models and parameters that are used to interact with Azure AI Health Insights service.
-
-## Request
-The generic part of Azure AI Health Insights request, common to all models.
-
-Name |Required|Type |Description
|--||--
-`patients`|yes |Patient[]|The list of patients, including their clinical information and data.
--
-## Patient
-A patient record, including their clinical information and data.
-
-Name|Required|Type |Description
--|--||-
-`id` |yes |string |A given identifier for the patient. Has to be unique across all patients in a single request.
-`info`|no |PatientInfo |Patient structured information, including demographics and known structured clinical information.
-`data`|no |PatientDocument|Patient unstructured clinical data, given as documents.
---
-## PatientInfo
-Patient structured information, including demographics and known structured clinical information.
-
-Name |Required|Type |Description
-|--|-|--
-`gender` |no |string |[ female, male, unspecified ]
-`birthDate` |no |string |The patient's date of birth.
-`clinicalInfo`|no |ClinicalCodeElement|A piece of clinical information, expressed as a code in a clinical coding system.
-
-## ClinicalCodeElement
-A piece of clinical information, expressed as a code in a clinical coding system.
-
-Name |Required|Type |Description
-|--||-
-`system`|yes |string|The clinical coding system, for example ICD-10, SNOMED-CT, UMLS.
-`code` |yes |string|The code within the given clinical coding system.
-`name` |no |string|The name of this coded concept in the coding system.
-`value` |no |string|A value associated with the code within the given clinical coding system.
--
-## PatientDocument
-A clinical unstructured document related to a patient.
-
-Name |Required|Type |Description
-|--||--
-`type ` |yes |string |[ note, fhirBundle, dicom, genomicSequencing ]
-`clinicalType` |no |string |[ consultation, dischargeSummary, historyAndPhysical, procedure, progress, imaging, laboratory, pathology ]
-`id` |yes |string |A given identifier for the document. Has to be unique across all documents for a single patient.
-`language` |no |string |A 2 letter ISO 639-1 representation of the language of the document.
-`createdDateTime`|no |string |The date and time when the document was created.
-`content` |yes |DocumentContent|The content of the patient document.
-
-## DocumentContent
-The content of the patient document.
-
-Name |Required|Type |Description
--|--||-
-`sourceType`|yes |string|The type of the content's source.<br>If the source type is 'inline', the content is given as a string (for instance, text).<br>If the source type is 'reference', the content is given as a URI.[ inline, reference ]
-`value` |yes |string|The content of the document, given either inline (as a string) or as a reference (URI).
-
-## Next steps
-
-To get started using the service, you can
-
->[!div class="nextstepaction"]
-> [Deploy the service via the portal](deploy-portal.md)
azure-health-insights Response Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/response-info.md
- Title: Azure AI Health Insights response info
-description: this article describes the response from the service
----- Previously updated : 02/17/2023---
-# Azure AI Health Insights response info
-
-This page describes the response models and parameters that are returned by Azure AI Health Insights service.
--
-## Response
-The generic part of Azure AI Health Insights response, common to all models.
-
-Name |Required|Type |Description
-|--||
-`jobId` |yes |string|A processing job identifier.
-`createdDateTime` |yes |string|The date and time when the processing job was created.
-`expirationDateTime`|yes |string|The date and time when the processing job is set to expire.
-`lastUpdateDateTime`|yes |string|The date and time when the processing job was last updated.
-`status ` |yes |string|The status of the processing job. [ notStarted, running, succeeded, failed, partiallyCompleted ]
-`errors` |no |Error|An array of errors, if any errors occurred during the processing job.
-
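As a minimal sketch (assuming the raw response body has already been retrieved into a `$responseJson` string), the following PowerShell fragment reads the generic job fields listed above and branches on `status`:

``` PowerShell
# Sketch only: $responseJson is assumed to hold the JSON response body of a processing job.
$job = $responseJson | ConvertFrom-Json

Write-Host "Job $($job.jobId) is $($job.status); results expire at $($job.expirationDateTime)."

switch ($job.status) {
    "succeeded"          { Write-Host "Results are ready." }
    "partiallyCompleted" { Write-Host "Some results are available; check the errors collection." }
    "failed"             { $job.errors | ForEach-Object { Write-Warning "$($_.code): $($_.message)" } }
    default              { Write-Host "Job is still $($job.status); poll again later." }
}
```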
-## Error
-
-Name |Required|Type |Description
--|--|--|--
-`code` |yes |string |Error code
-`message` |yes |string |A human-readable error message.
-`target` |no |string |Target of the particular error (for example, the name of the property in error).
-`details` |no |collection|A list of related errors that occurred during the request.
-`innererror`|no |object |An object containing more specific information about the error.
-
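Continuing the same assumption that `$job` holds a deserialized response, this short sketch walks the `errors` collection, including the nested `details` and `innererror` members:

``` PowerShell
# Sketch only: print each error with its optional target, related details, and innererror object.
foreach ($err in $job.errors) {
    Write-Warning ("{0}: {1} (target: {2})" -f $err.code, $err.message, $err.target)
    foreach ($detail in $err.details) {
        Write-Warning ("  related {0}: {1}" -f $detail.code, $detail.message)
    }
    if ($err.innererror) {
        Write-Warning ("  innererror: {0}" -f ($err.innererror | ConvertTo-Json -Compress))
    }
}
```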
-## Next steps
-
-To get started using the service, you can
-
->[!div class="nextstepaction"]
-> [Deploy the service via the portal](deploy-portal.md)
azure-health-insights Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/get-started.md
Once deployment is complete, you use the Azure portal to navigate to the newly c
## Submit a request and get results To send an API request, you need your Azure AI services account endpoint and key.
-![Screenshot of the Keys and Endpoints for the Trial Matcher.](../media/keys-and-endpoints.png)
+
+[![Screenshot of the Keys and Endpoints for the Trial Matcher.](../media/keys-and-endpoints.png)](../media/keys-and-endpoints.png#lightbox)
> [!IMPORTANT]
-> The Trial Matcher is an asynchronous API. Trial Matcher prediction is performed upon receipt of the API request and the results are returned asynchronously. The API results are available for 1 hour from the time the request was ingested and is indicated in the response. After the time period, the results are purged and are no longer available for retrieval.
+> The Trial Matcher is an asynchronous API. Trial Matcher prediction is performed upon receipt of the API request, and the results are returned asynchronously. The API results are available for 24 hours from the time the request was ingested, as indicated in the response. After that period, the results are purged and are no longer available for retrieval.
### Example Request
Ocp-Apim-Subscription-Key: {your-cognitive-services-api-key}
```
+You can also find a full view of the [request parameters](/rest/api/cognitiveservices/healthinsights/trial-matcher/create-job).
The response includes the operation-location in the response header. The value looks similar to the following URL: ```https://eastus.api.cognitive.microsoft.com/healthinsights/trialmatcher/jobs/b58f3776-c6cb-4b19-a5a7-248a0d9481ff?api_version=2022-01-01-preview```
An example response:
} ```
+You can also find a full view of the [response parameters](/rest/api/cognitiveservices/healthinsights/trial-matcher/get-job).
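As a rough end-to-end sketch (not an official sample), the following PowerShell fragment submits a Trial Matcher job and then polls the `Operation-Location` URL until the job finishes. The endpoint, the `api-version` value, and the `$body` payload are placeholders based on the URLs shown above; substitute the values from the request reference.

``` PowerShell
# Sketch only: endpoint and api-version are assumptions; use the values from the request reference.
$endpoint = "https://eastus.api.cognitive.microsoft.com"
$headers  = @{ "Ocp-Apim-Subscription-Key" = "{your-cognitive-services-api-key}" }

# Submit the job; $body is the JSON request payload from the example request.
$submit = Invoke-WebRequest -Method Post `
    -Uri "$endpoint/healthinsights/trialmatcher/jobs?api-version=2023-03-01-preview" `
    -Headers $headers -ContentType "application/json" -Body $body

# The job URL is returned asynchronously in the Operation-Location response header.
$jobUrl = [string]$submit.Headers["Operation-Location"]

do {
    Start-Sleep -Seconds 5
    $job = Invoke-RestMethod -Method Get -Uri $jobUrl -Headers $headers
} while ($job.status -in @("notStarted", "running"))

$job.status
```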
## Data limits
azure-health-insights Inferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/inferences.md
+ # Trial Matcher inference information The result of the Trial Matcher model includes a list of inferences made regarding the patient. For each trial that was queried for the patient, the model returns an indication of whether the patient appears eligible or ineligible for the trial. If the model concluded the patient is ineligible for a trial, it also provides a piece of evidence to support its conclusion (unless the ```evidence``` flag was set to false).
+> [!NOTE]
+> The examples below are based on API version: 2023-03-01-preview.
+ ## Example model result ```json "inferences":[
azure-health-insights Model Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/model-configuration.md
Last updated 02/02/2023
++ # Trial Matcher model configuration The Trial Matcher includes a built-in Knowledge graph, which uses trials taken from [clinicaltrials.gov](https://clinicaltrials.gov/), and is being updated periodically.
The Trial Matcher includes a built-in Knowledge graph, which uses trials taken f
When you're matching patients to trials, you can define a list of filters to query a subset of clinical trials. Each filter can be defined based on ```trial conditions```, ```types```, ```recruitment statuses```, ```sponsors```, ```phases```, ```purposes```, ```facility names```, ```locations```, or ```trial IDs```. - Specifying multiple values for the same filter category results in a trial set that is a union of the two sets.
+> [!NOTE]
+> The examples below are based on API version: 2023-03-01-preview.
In the following configuration, the model queries trials that are in recruitment status ```recruiting``` or ```not yet recruiting```.
To provide a custom trial, the input to the Trial Matcher service should include
"Id":"CustomTrial1", "EligibilityCriteriaText":"INCLUSION CRITERIA:\n\n 1. Patients diagnosed with Diabetes\n\n2. patients diagnosed with cancer\n\nEXCLUSION CRITERIA:\n\n1. patients with RET gene alteration\n\n 2. patients taking Aspirin\n\n3. patients treated with Chemotherapy\n\n", "Demographics":{
- "AcceptedGenders":[
- "Female"
- ],
- "AcceptedAgeRange":{
+ "AcceptedSex":"female",
+ "acceptedAgeRange":{
"MinimumAge":{ "Unit":"Years", "Value":0
azure-health-insights Patient Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/patient-info.md
+
+ # Trial Matcher patient info Trial Matcher uses patient information to match relevant patient(s) with the clinical trial(s). You can provide the information in four different ways:
Trial Matcher uses patient information to match relevant patient(s) with the cli
- gradual Matching (question and answer) - JSON key/value
+> [!NOTE]
+> The examples below are based on API version: 2023-03-01-preview.
+ ## Unstructured clinical note Patient data can be provided to the Trial Matcher as an unstructured clinical note.
The Trial Matcher performs a prior step of language understanding to analyze the
When providing patient data in clinical notes, use ```note``` value for ```Patient.PatientDocument.type```. Currently, Trial Matcher only supports one clinical note per patient. ++ The following example shows how to provide patient information as an unstructured clinical note: ```json
Entity type concepts are concepts that are grouped by common entity types, such
When entity type concepts are sent by customers to the Trial Matcher as part of the patient's clinical info, customers are expected to concatenate the entity type string to the value, separated with a semicolon. + Example concept from neededClinicalInfo API response: ```json {
azure-health-insights Trial Matcher Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-health-insights/trial-matcher/trial-matcher-modes.md
Trial Matcher provides two main modes of operation to users of the service: a **
On the diagram, you can see how patients or clinical trials can be found through the two different modes. ![Diagram that shows the Trial Matcher operation modes.](../media/trial-matcher/overview.png) -
+[ ![Diagram that shows the Trial Matcher operation modes.](../media/trial-matcher/overview.png)](../media/trial-matcher/overview.png#lightbox)
## Patient centric
azure-monitor Azure Monitor Agent Mma Removal Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-mma-removal-tool.md
Title: Azure Monitor Agent MMA legacy agent removal tool
-description: This article describes a PowerShell script used to remove MMA agent from systems that users have migrated to AMA.
+ Title: MMA Discovery and Removal Utility
+description: This article describes a PowerShell script to remove the legacy agent from systems that have migrated to the Azure Monitor Agent.
Last updated 01/09/2024
-# Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from Log Analytics Agent to Azure Monitor Agent and track the status of the migration in my account.
+# Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from the Log Analytics Agent to the Azure Monitor Agent and track the status of the migration in my account.
-# MMA Discovery and Removal Tool (Preview)
-After you migrate your machines to AMA, you need to remove the MMA agent to avoid duplication of logs. AzTS MMA Discovery and Removal Utility can centrally remove MMA extension from Azure Virtual Machine (VMs), Azure Virtual Machine Scale Sets and Azure Arc Servers from a tenant.
-The utility works in two steps
-1. Discovery: First the utility creates an inventory of all machines that have the MMA agents installed. We recommend that no new VMs, Virtual Machine Scale Sets or Azure Arc Servers with MMA extension are created while the utility is running.
-2. Removal - Second the utility selects machines with both MMA and AMA and removes the MMA extension. You can disable this step and run after validating the list of machines. There's an option remove from machines that only have the MMA agent, but we recommended that you first migrate all dependencies to AMA and then remove MMA.
+# MMA Discovery and Removal Utility
+
+After you migrate your machines to the Azure Monitor Agent (AMA), you need to remove the Log Analytics Agent (also called the Microsoft Management Agent or MMA) to avoid duplication of logs. The Azure Tenant Security Solution (AzTS) MMA Discovery and Removal Utility can centrally remove the MMA extension from Azure virtual machines (VMs), Azure virtual machine scale sets, and Azure Arc servers from a tenant.
+
+The utility works in two steps:
+
+1. *Discovery*: The utility creates an inventory of all machines that have the MMA installed. We recommend that you don't create any new VMs, virtual machine scale sets, or Azure Arc servers with the MMA extension while the utility is running.
+
+2. *Removal*: The utility selects machines that have both the MMA and the AMA and removes the MMA extension. You can disable this step and run it after you validate the list of machines. There's an option to remove the extension from machines that have only the MMA, but we recommend that you first migrate all dependencies to the AMA and then remove the MMA.
## Prerequisites
-You do all the setup steps in a [Visual Studio Code](https://code.visualstudio.com/) with the [PowerShell Extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell).
--
-## Download Deployment package
- The package contains:
-- Bicep templates, which contain resource configuration details that you create as part of setup. -- Deployment set up scripts, which provides the cmdlet to run installation. -- Download deployment package zip from [here](https://github.com/azsk/AzTS-docs/raw/main/TemplateFiles/AzTSMMARemovalUtilityDeploymentFiles.zip) to your local machine. -- Extract zip to local folder location.-- Unblock the files with this script.-
- ``` PowerShell
- Get-ChildItem -Path "<Extracted folder path>" -Recurse | Unblock-File
- ```
-
-## Set up the tool
-
-### [Single Tenant](#tab/Single)
-
-You perform set up in two steps:
-1. Go to deployment folder and load consolidated setup script. You must have **Owner** access on the subscription.
-
- ``` PowerShell
- CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
- . ".\MMARemovalUtilitySetupConsolidated.ps1"
- ```
-
-2. The Install-AzTSMMARemovalUtilitySolutionConsolidated does the following operations:
- - Installs required Az modules.
- - Set up remediation user-assigned managed identity.
- - Prompts and collects onboarding details for usage telemetry collection based on user preference.
- - Creates or updates the RG.
- - Creates or updates the resources with MIs assigned.
- - Creates or updates the monitoring dashboard.
- - Configures target scopes.
-
-You must log in to Azure Account using the following PowerShell command.
-``` PowerShell
-$TenantId = "<TenantId>"
-Connect-AzAccount -Tenant $TenantId
-```
-Run the setup script
-``` PowerShell
-$SetupInstallation = Install-AzTSMMARemovalUtilitySolutionConsolidated `
- -RemediationIdentityHostSubId <MIHostingSubId> `
- -RemediationIdentityHostRGName <MIHostingRGName> `
- -RemediationIdentityName <MIName> `
- -TargetSubscriptionIds @("<SubId1>","<SubId2>","<SubId3>") `
- -TargetManagementGroupNames @("<MGName1>","<MGName2>","<MGName3>") `
- -TenantScope `
- -SubscriptionId <HostingSubId> `
- -HostRGName <HostingRGName> `
- -Location <Location> `
- -AzureEnvironmentName <AzureEnvironmentName>
-```
+Do all the setup steps in [Visual Studio Code](https://code.visualstudio.com/) with the [PowerShell extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell). You need:
-Parameters
+- Windows 10 or later, or Windows Server 2019 or later.
+- PowerShell 5.0 or later. Check the version by running `$PSVersionTable`.
+- PowerShell. The language must be set to `FullLanguage` mode. Check the mode by running `$ExecutionContext.SessionState.LanguageMode` in PowerShell. For more information, see the [PowerShell reference](/powershell/module/microsoft.powershell.core/about/about_language_modes?source=recommendations).
+- Bicep. The setup scripts use Bicep to automate the installation. Check the installation by running `bicep --version`. For more information, see [Install Bicep tools](/azure/azure-resource-manager/bicep/install#azure-powershell). A sketch that combines the PowerShell, language-mode, and Bicep checks appears after this list.
+- A [user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview) that has **Reader**, **Virtual Machine Contributor**, and **Azure Arc ScVmm VM Contributor** access on target scopes.
+- A new resource group to contain all the Azure resources that the setup automation creates automatically.
+- Appropriate permission on the configured scopes. To grant the remediation user-assigned managed identity the previously mentioned roles on the target scopes, you must have **User Access Administrator** or **Owner** permission. For example, if you're configuring the setup for a particular subscription, you must have the **User Access Administrator** role assignment on that subscription so that the script can grant the permissions to the remediation user-assigned managed identity.
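The following is a minimal sketch that runs the version, language-mode, and Bicep checks from the preceding list in one pass; the commands are the same ones named in the prerequisites.

``` PowerShell
# Sketch only: run the prerequisite checks from the list above.
$PSVersionTable.PSVersion                        # expect 5.0 or later
$ExecutionContext.SessionState.LanguageMode      # expect FullLanguage
bicep --version                                  # confirms the Bicep CLI is installed

if ($PSVersionTable.PSVersion.Major -lt 5) {
    Write-Warning "PowerShell 5.0 or later is required."
}
if ($ExecutionContext.SessionState.LanguageMode -ne "FullLanguage") {
    Write-Warning "PowerShell must run in FullLanguage mode."
}
```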
-|Param Name | Description | Required |
-|:-|:-|:-:|
-|RemediationIdentityHostSubId| Subscription ID to create remediation resources | Yes |
-|RemediationIdentityHostRGName| New ResourceGroup name to create remediation. Defaults to 'AzTS-MMARemovalUtility-RG'| No |
-|RemediationIdentityName| Name of the remediation MI| Yes |
-|TargetSubscriptionIds| List of target subscription ID(s) to run on | No |
-|TargetManagementGroupNames| List of target management group name(s) to run on | No|
-|TenantScope| Activate tenant scope and assigns roles using your tenant id| No|
-|SubscriptionId| Subscription ID where setup is installed| Yes|
-|HostRGName| New resource group name where remediation MI is created. Default value is 'AzTS-MMARemovalUtility-Host-RG'| No|
-|Location| Location DC where setup is created. Default value is 'EastUS2'| No|
-|AzureEnvironmentName| Azure environment where solution is to be installed: AzureCloud, AzureGovernmentCloud. Default value is 'AzureCloud'| No|
-
-### [MultiTenant](#tab/MultiTenant)
-
-In this section, we walk you through the steps for setting up multitenant AzTS MMA Removal Utility. This set up may take up to 30 minutes and has 9 steps
-
-1. Load setup script
-Point the current path to the folder containing the extracted deployment package and run the setup script.
-
- ``` PowerShell
- CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
- . ".\MMARemovalUtilitySetup.ps1"
+## Download the deployment package
+
+The deployment package contains:
+
+- Bicep templates, which contain resource configuration details that you create as part of setup.
+- Deployment setup scripts, which provide the cmdlet to run the installation.
+
+To install the package:
+
+1. Go to the [AzTS-docs GitHub repository](https://github.com/azsk/AzTS-docs/tree/main/TemplateFiles). Download the deployment package file, *AzTSMMARemovalUtilityDeploymentFiles.zip*, to your local machine.
+
+1. Extract the .zip file to your local folder location.
+
+1. Unblock the files by using this script:
+
+ ``` PowerShell
+ Get-ChildItem -Path "<Extracted folder path>" -Recurse | Unblock-File
+ ```
+
+## Set up the utility
+
+### [Single tenant](#tab/single-tenant)
+
+1. Go to the deployment folder and load the consolidated setup script. You must have **Owner** access on the subscription.
+
+ ``` PowerShell
+ CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
+ . ".\MMARemovalUtilitySetupConsolidated.ps1"
+ ```
+
+1. Sign in to the Azure account by using the following PowerShell command:
+
+ ``` PowerShell
+ $TenantId = "<TenantId>"
+ Connect-AzAccount -Tenant $TenantId
+ ```
+
+1. Run the setup script to perform the following operations:
+
+ - Install required Az modules.
+ - Set up the remediation user-assigned managed identity.
+ - Prompt and collect onboarding details for usage telemetry collection based on user preference.
+ - Create or update the resource group.
+ - Create or update the resources with assigned managed identities.
+ - Create or update the monitoring dashboard.
+ - Configure target scopes.
+
+ ``` PowerShell
+ $SetupInstallation = Install-AzTSMMARemovalUtilitySolutionConsolidated `
+ -RemediationIdentityHostSubId <MIHostingSubId> `
+ -RemediationIdentityHostRGName <MIHostingRGName> `
+ -RemediationIdentityName <MIName> `
+ -TargetSubscriptionIds @("<SubId1>","<SubId2>","<SubId3>") `
+ -TargetManagementGroupNames @("<MGName1>","<MGName2>","<MGName3>") `
+ -TenantScope `
+ -SubscriptionId <HostingSubId> `
+ -HostRGName <HostingRGName> `
+ -Location <Location> `
+ -AzureEnvironmentName <AzureEnvironmentName>
+ ```
+
+ The script contains these parameters:
+
+ |Parameter name | Description | Required |
+ |:-|:-|:-:|
+ |`RemediationIdentityHostSubId`| Subscription ID to create remediation resources. | Yes |
+ |`RemediationIdentityHostRGName`| New resource group name to create remediation. Defaults to `AzTS-MMARemovalUtility-RG`.| No |
+ |`RemediationIdentityName`| Name of the remediation managed identity.| Yes |
+ |`TargetSubscriptionIds`| List of target subscription IDs to run on. | No |
+ |`TargetManagementGroupNames`| List of target management group names to run on. | No|
+ |`TenantScope`| Tenant scope for assigning roles via your tenant ID.| No|
+ |`SubscriptionId`| ID of the subscription where the setup is installed.| Yes|
+ |`HostRGName`| Name of the new resource group where the remediation managed identity is created. Default value is `AzTS-MMARemovalUtility-Host-RG`.| No|
+|`Location`| Azure region (datacenter) where the setup is created. Default value is `EastUS2`.| No|
+ |`AzureEnvironmentName`| Azure environment where the solution is installed: `AzureCloud` or `AzureGovernmentCloud`. Default value is `AzureCloud`.| No|
+
+### [Multitenant](#tab/multitenant)
+
+This section walks you through the steps for setting up the multitenant AzTS MMA Discovery and Removal Utility. This setup might take up to 30 minutes.
+
+#### Load the setup script
+
+Point the current path to the folder that contains the extracted deployment package and run the setup script:
+
+``` PowerShell
+CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
+. ".\MMARemovalUtilitySetup.ps1"
```
-2. Installing required Az modules.
-Az modules contain cmdlets to deploy Azure resources, which are used to create resources. Install the required Az PowerShell Modules using this command. For more details of Az Modules, refer [link](/powershell/azure/install-az-ps). You must point current path to the extracted folder location.
+#### Install required Az modules
+
+Az PowerShell modules contain cmdlets to deploy Azure resources. Install the required Az modules by using the following command. For more information about Az modules, see [How to install Azure PowerShell](/powershell/azure/install-az-ps). You must point the current path to the extracted folder location.
``` PowerShell Set-Prerequisites ```
-3. Set up multitenant identity
-The Microsoft Entra ID Application identity is used to associate the MEI Application using service principal. You perform the following operations. You must log in to the Microsoft Entra ID account where you want to install the Removal Utility setup using the PowerShell command.
- - Creates a new multitenant MEI application if not provided with pre-existing MEI application objectId.
- - Creates password credentials for the MEI application.
+#### Set up multitenant identity
+
+In this step, you set up a Microsoft Entra application identity by using a service principal. You must sign in to the Microsoft Entra account where you want to install the MMA Discovery and Removal Utility setup by using the PowerShell command.
+
+You perform the following operations:
+
+- Create a multitenant Microsoft Entra application if you don't provide the object ID of a preexisting Microsoft Entra application.
+- Create password credentials for the Microsoft Entra application.
``` PowerShell Disconnect-AzAccount
$Identity.ObjectId
$Identity.Secret ```
-Parameters
+The script contains these parameters:
-|Param Name| Description | Required |
+|Parameter name| Description | Required |
|:-|:-|:-:|
-| DisplayName | Display Name of the Remediation Identity| Yes |
-| ObjectId | Object Id of the Remediation Identity | No |
-| AdditionalOwnerUPNs | User Principal Names (UPNs) of the owners for the App to be created | No |
+| `DisplayName` | Display name of the remediation identity.| Yes |
+| `ObjectId` | Object ID of the remediation identity. | No |
+| `AdditionalOwnerUPNs` | User principal names (UPNs) of the owners for the app to be created. | No |
+
+#### Set up storage
-4. Set up secrets storage
-In this step you create secrets storage. You must have owner access on the subscription to create a new RG. You perform the following operations.
- - Creates or updates the resource group for Key Vault.
- - Creates or updates the Key Vault.
- - Store the secret.
+In this step, you set up storage for secrets. You must have **Owner** access on the subscription to create a resource group.
+
+You perform the following operations:
+
+- Create or update the resource group for a key vault.
+- Create or update the key vault.
+- Store the secret.
``` PowerShell $KeyVault = Set-AzTSMMARemovalUtilitySolutionSecretStorage `
$KeyVault.Outputs.secretURI.Value
$KeyVault.Outputs.logAnalyticsResourceId.Value ```
-Parameters
+The script contains these parameters:
-|Param Name|Description|Required?
+|Parameter name|Description|Required|
|:-|:-|:-|
-| SubscriptionId | Subscription ID where keyvault is created.| Yes |
-| ResourceGroupName | Resource group name where Key Vault is created. Should be in a different RG from the set up RG | Yes |
-|Location| Location DC where Key Vault is created. For better performance, we recommend creating all the resources related to set up to be in one location. Default value is 'EastUS2'| No |
-|KeyVaultName| Name of the Key Vault that is created.| Yes |
-|AADAppPasswordCredential| Removal Utility MEI application password credentials| Yes |
-
-5. Set up Installation
-This step install the MMA Removal Utility, which discovers and removes MMA agents installed on Virtual Machines. You must have owner access to the subscription where the setup is created. We recommend that you use a new resource group for the tool. You perform the following operations.
- - Prompts and collects onboarding details for usage telemetry collection based on user preference.
- - Creates the RG if it doesn't exist.
- - Creates or updates the resources with MIs.
- - Creates or updates the monitoring dashboard.
+| `SubscriptionId` | Subscription ID where the key vault is created.| Yes |
+| `ResourceGroupName` | Name of the resource group where the key vault is created. It should be a different resource group from the setup resource group. | Yes |
+|`Location`| Azure region (datacenter) where the key vault is created. For better performance, we recommend creating all the resources related to setup in one location. Default value is `EastUS2`.| No |
+|`KeyVaultName`| Name of the key vault that's created.| Yes |
+|`AADAppPasswordCredential`| Microsoft Entra application password credentials for the MMA Discovery and Removal Utility.| Yes |
+
+#### Set up installation
+
+In this step, you install the MMA Discovery and Removal Utility. You must have **Owner** access to the subscription where the setup is created. We recommend that you use a new resource group for the utility.
+
+You perform the following operations:
+
+- Prompt and collect onboarding details for usage telemetry collection based on user preference.
+- Create the resource group if it doesn't exist.
+- Create or update the resources with managed identities.
+- Create or update the monitoring dashboard.
``` PowerShell $Solution = Install-AzTSMMARemovalUtilitySolution `
$Solution = Install-AzTSMMARemovalUtilitySolution `
$Solution.Outputs.internalMIObjectId.Value ```
-Parameters
+The script contains these parameters:
-| Param Name | Description | Required |
+| Parameter name | Description | Required |
|:-|:-|:-|
-| SubscriptionId | Subscription ID where setup is created | Yes |
-| HostRGName | Resource group name where setup is created Default value is 'AzTS-MMARemovalUtility-Host-RG'| No |
-| Location | Location DC where setup is created. For better performance, we recommend hosting the MI and Removal Utility in the same location. Default value is 'EastUS2'| No |
-| SupportMultiTenant | Switch to support multitenant set up | No |
-| IdentityApplicationId | MEI application Id.| Yes |
-|I dentitySecretUri | MEI application secret uri| No |
+| `SubscriptionId` | ID of the subscription where the setup is created. | Yes |
+| `HostRGName` | Name of the resource group where the setup is created. Default value is `AzTS-MMARemovalUtility-Host-RG`.| No |
+| `Location` | Azure region (datacenter) where the setup is created. For better performance, we recommend hosting the managed identity and the MMA Discovery and Removal Utility in the same location. Default value is `EastUS2`.| No |
+| `SupportMultiTenant` | Switch to support multitenant setup. | No |
+| `IdentityApplicationId` | Microsoft Entra application ID.| Yes |
+| `IdentitySecretUri` | Microsoft Entra application secret URI.| No |
+
+#### Grant an internal remediation identity access to the key vault
-6. Grant internal remediation identity with access to Key Vault
-In this step a user assigned managed ident is created to enable function apps to read the Key Vault for authentication. You must have Owner access to the RG.
+In this step, you create a user-assigned managed identity to enable function apps to read the key vault for authentication. You must have **Owner** access to the resource group.
``` PowerShell Grant-AzTSMMARemediationIdentityAccessOnKeyVault `
Grant-AzTSMMARemediationIdentityAccessOnKeyVault `
-DeployMonitoringAlert ```
-Parameters
+The script contains these parameters:
-| Param Name | Description | Required |
+| Parameter name | Description | Required |
|:-|:-|:-:|
-|SubscriptionId| Subscription ID where setup is created | Yes |
-|ResourceId| Resource Id of existing key vault | Yes |
-|UserAssignedIdentityObjectId| Object ID of your managed identity | Yes |
-|SendAlertsToEmailIds| User email Ids to whom alerts should be sent| No, Yes if DeployMonitoringAlert switch is enabled |
-| SecretUri | Key Vault SecretUri of the Removal Utility App's credentials | No, Yes if DeployMonitoringAlert switch is enabled |
-| LAWorkspaceResourceId | ResourceId of the LA Workspace associated with key vault| No, Yes if DeployMonitoringAlert switch is enabled.|
-| DeployMonitoringAlert | Create alerts on top of Key Vault auditing logs | No, Yes if DeployMonitoringAlert switch is enabled |
-
-7. Set up runbook for managing key vault IP ranges
-This step creates a secure Key Vault with public network access disabled. IP Ranges for function apps must be allowed access to the Key Vault. You must have owner access to the RG. You perform the following operations:
- - Creates or updates the automation account.
- - Grants access for automation account using system-assigned managed identity on Key Vault.
- - Set up the runbook with script to fetch the IP ranges published by Azure every week.
- - Runs the runbook one-time at the time of set up and schedule task to run every week.
-
-```
+|`SubscriptionId`| ID of the subscription where the setup is created. | Yes |
+|`ResourceId`| Resource ID of the existing key vault. | Yes |
+|`UserAssignedIdentityObjectId`| Object ID of your managed identity. | Yes |
+|`SendAlertsToEmailIds`| User email IDs to whom alerts should be sent.| No; yes if the `DeployMonitoringAlert` switch is enabled |
+| `SecretUri` | Key vault secret URI of the MMA Discovery and Removal Utility app's credentials. | No; yes if the `DeployMonitoringAlert` switch is enabled |
+| `LAWorkspaceResourceId` | Resource ID of the Log Analytics workspace associated with the key vault.| No; yes if the `DeployMonitoringAlert` switch is enabled.|
+| `DeployMonitoringAlert` | Creation of alerts on top of the key vault's auditing logs. | No; yes if the `DeployMonitoringAlert` switch is enabled |
+
+#### Set up a runbook for managing key vault IP ranges
+
+In this step, you create a secure key vault with public network access disabled. IP ranges for function apps must be allowed access to the key vault. You must have **Owner** access to the resource group.
+
+You perform the following operations:
+
+- Create or update the automation account.
+- Grant access for the automation account by using a system-assigned managed identity on the key vault.
+- Set up the runbook with a script to fetch the IP ranges that Azure publishes every week.
+- Run the runbook one time at the time of setup, and schedule a task to run every week.
+
+``` PowerShell
Set-AzTSMMARemovalUtilityRunbook ` -SubscriptionId <HostingSubId> ` -ResourceGroupName <HostingRGName> `
Set-AzTSMMARemovalUtilityRunbook `
-KeyVaultResourceId $KeyVault.Outputs.keyVaultResourceId.Value ```
-Parameters
+The script contains these parameters:
-|Param Name |Description | Required|
+|Parameter name |Description | Required|
|:-|:-|:-|
-|SubscriptionId| Subscription ID where the automation account and key vault are present.| Yes|
-|ResourceGroupName| Name of resource group where the automation account and key vault are | Yes|
-|Location| Location where your automation account is created. For better performance, we recommend creating all the resources related to setup in the same location. Default value is 'EastUS2'| No|
-|FunctionAppUsageRegion| Location of dynamic ip addresses that are allowed on keyvault. Default location is EastUS2| Yes|
-|KeyVaultResourceId| Resource ID of the keyvault for ip addresses that are allowed.| Yes|
+|`SubscriptionId`| ID of the subscription that includes the automation account and key vault.| Yes|
+|`ResourceGroupName`| Name of resource group that contains the automation account and key vault. | Yes|
+|`Location`| Location where your automation account is created. For better performance, we recommend creating all the resources related to setup in the same location. Default value is `EastUS2`.| No|
+|`FunctionAppUsageRegion`| Location of dynamic IP addresses that are allowed on the key vault. Default location is `EastUS2`.| Yes|
+|`KeyVaultResourceId`| Resource ID of the key vault for allowed IP addresses.| Yes|
+
+#### Set up SPNs and grant required roles for each tenant
-8. Set up SPN and grant required roles for each tenant
-In this step you create SPNs for each tenant and grant permission on each tenant. Set up requires Reader, Virtual Machine Contributor, and Azure Arc ScVmm VM contributor access on your scopes. Scopes Configured can be a Tenant/ManagementGroup(s)/Subscription(s) or both ManagementGroup(s) and Subscription(s).
-For each tenant, perform the steps and make sure you have enough permissions on the other tenant for creating SPNs. You must have **User Access Administrator (UAA) or Owner** on the configured scopes. For example, to run setup on subscription 'X' you have to have UAA role assignment on subscription 'X' to grant the SPN with the required permissions.
+In this step, you create service principal names (SPNs) for each tenant and grant permission on each tenant. Setup requires **Reader**, **Virtual Machine Contributor**, and **Azure Arc ScVmm VM Contributor** access on your scopes. Configured scopes can be tenant, management group, or subscription, or they can be both management group and subscription.
+
+For each tenant, perform the steps and make sure you have enough permissions on the other tenant to create SPNs. You must have **User Access Administrator** or **Owner** permission on the configured scopes. For example, to run the setup on a particular subscription, you must have a **User Access Administrator** role assignment on that subscription to grant the SPN the required permissions.
``` PowerShell $TenantId = "<TenantId>"
Grant-AzSKAzureRoleToMultiTenantIdentitySPN -AADIdentityObjectId $SPN.ObjectId `
-TargetManagementGroupNames @("<MGName1>","<MGName2>","<MGName3>") ```
-Parameters
-For Set-AzSKTenantSecuritySolutionMultiTenantIdentitySPN,
+The script contains these parameters for `Set-AzSKTenantSecuritySolutionMultiTenantIdentitySPN`:
-|Param Name | Description | Required |
+|Parameter name | Description | Required |
|:-|:-|:-:|
-|AppId| Your application Id that is created| Yes |
+|`AppId`| Your created application ID.| Yes |
-For Grant-AzSKAzureRoleToMultiTenantIdentitySPN,
+The script contains these parameters for `Grant-AzSKAzureRoleToMultiTenantIdentitySPN`:
-|Param Name | Description | Required|
+|Parameter name | Description | Required|
|:-|:-|:-:|
-| AADIdentityObjectId | Your identity object| Yes|
-| TargetSubscriptionIds| Your list of target subscription ID(s) to run set up on | No |
-| TargetManagementGroupNames | Your list of target management group name(s) to run set up on | No|
+| `AADIdentityObjectId` | Your identity object.| Yes|
+| `TargetSubscriptionIds`| Your list of target subscription IDs to run the setup on. | No |
+| `TargetManagementGroupNames` | Your list of target management group names to run the setup on. | No|
+
+#### Configure target scopes
-9. Configure target scopes
-You can configure target scopes using the `Set-AzTSMMARemovalUtilitySolutionScopes`
+You can configure target scopes by using `Set-AzTSMMARemovalUtilitySolutionScopes`:
``` PowerShell $ConfiguredTargetScopes = Set-AzTSMMARemovalUtilitySolutionScopes `
$ConfiguredTargetScopes = Set-AzTSMMARemovalUtilitySolutionScopes `
-ResourceGroupName <HostingRGName> ` -ScopesFilePath <ScopesFilePath> ```
-Parameters
-|Param Name|Description|Required|
+The script contains these parameters:
+
+|Parameter name|Description|Required|
|:-|:-|:-:|
-|SubscriptionId| Your subscription ID where setup is installed | Yes |
-|ResourceGroupName| Your resource group name where setup is installed| Yes|
-|ScopesFilePath| File path with target scope configurations. See scope configuration| Yes |
+|`SubscriptionId`| ID of your subscription where the setup is installed. | Yes |
+|`ResourceGroupName`| Name of your resource group where the setup is installed.| Yes|
+|`ScopesFilePath`| File path with target scope configurations.| Yes |
-Scope configuration file is a CSV file with a header row and three columns
+The scope configuration file is a CSV file with a header row and three columns:
| ScopeType | ScopeId | TenantId | |:|:|:|
-| Subscription | /subscriptions/abb5301a-22a4-41f9-9e5f-99badff261f8 | 72f988bf-86f1-41af-91ab-2d7cd011db47 |
-| Subscription | /subscriptions/71bdd12b-ae1d-499a-a4ea-e32d4c1d9c35 | e60f12c0-e1dc-4be1-8d86-e979a5527830 |
+| Subscription | `/subscriptions/abb5301a-22a4-41f9-9e5f-99badff261f8` | `72f988bf-86f1-41af-91ab-2d7cd011db47` |
+| Subscription | `/subscriptions/71bdd12b-ae1d-499a-a4ea-e32d4c1d9c35` | `e60f12c0-e1dc-4be1-8d86-e979a5527830` |
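As a minimal sketch, the following fragment writes a scope configuration file that mirrors the header and example rows in the table above and then passes its path to `-ScopesFilePath`. The file name is arbitrary.

``` PowerShell
# Sketch only: create the scope configuration CSV referenced by -ScopesFilePath.
@"
ScopeType,ScopeId,TenantId
Subscription,/subscriptions/abb5301a-22a4-41f9-9e5f-99badff261f8,72f988bf-86f1-41af-91ab-2d7cd011db47
Subscription,/subscriptions/71bdd12b-ae1d-499a-a4ea-e32d4c1d9c35,e60f12c0-e1dc-4be1-8d86-e979a5527830
"@ | Set-Content -Path ".\TargetScopes.csv"

# Then pass the file to the configuration cmdlet shown above:
# Set-AzTSMMARemovalUtilitySolutionScopes -SubscriptionId <HostingSubId> -ResourceGroupName <HostingRGName> -ScopesFilePath ".\TargetScopes.csv"
```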
++
-## Run The Tool
+## Run the utility
-### [Discovery](#tab/Discovery)
+### [Discovery](#tab/discovery)
``` PowerShell Update-AzTSMMARemovalUtilityDiscoveryTrigger `
Update-AzTSMMARemovalUtilityDiscoveryTrigger `
-StartExtensionDiscoveryAfterMinutes 30 ```
-Parameters
+The script contains these parameters:
-|Param Name|Description|Required?
+|Parameter name|Description|Required|
|:-|:-|:-:|
-|SubscriptionId| Subscription ID where you installed the Utility | Yes|
-|ResourceGroupName| ResourceGroup name where you installed the Utility | Yes|
-|StartScopeResolverAfterMinutes| Time in minutes to wait before running resolver | Yes (Mutually exclusive with param '-StartScopeResolverImmediatley')|
-|StartScopeResolverImmediatley | Run resolver immediately | Yes (Mutually exclusive with param '-StartScopeResolverAfterMinutes') |
-|StartExtensionDiscoveryAfterMinutes | Time in minutes to wait to run discovery (should be after resolver is done) | Yes (Mutually exclusive with param '-StartExtensionDiscoveryImmediatley')|
-|StartExtensionDiscoveryImmediatley | Run extensions discovery immediately | Yes (Mutually exclusive with param '-StartExtensionDiscoveryAfterMinutes')|
+|`SubscriptionId`| ID of the subscription where you installed the utility. | Yes|
+|`ResourceGroupName`| Name of the resource group where you installed the utility. | Yes|
+|`StartScopeResolverAfterMinutes`| Time, in minutes, to wait before running the resolver. | Yes (mutually exclusive with `-StartScopeResolverImmediately`)|
+|`StartScopeResolverImmediately` | Indicator to run the resolver immediately. | Yes (mutually exclusive with `-StartScopeResolverAfterMinutes`) |
+|`StartExtensionDiscoveryAfterMinutes` | Time, in minutes, to wait to run discovery (should be after the resolver is done). | Yes (mutually exclusive with `-StartExtensionDiscoveryImmediatley`)|
+|`StartExtensionDiscoveryImmediatley` | Indicator to run extension discovery immediately. | Yes (mutually exclusive with `-StartExtensionDiscoveryAfterMinutes`)|
-### [Removal](#tab/Removal)
+### [Removal](#tab/removal)
By default, the removal phase is disabled. We recommend that you run it after validating the inventory of machines from the discovery step.+ ``` PowerShell Update-AzTSMMARemovalUtilityRemovalTrigger ` -SubscriptionId <HostingSubId> `
Update-AzTSMMARemovalUtilityRemovalTrigger `
-RemovalCondition 'CheckForAMAPresence' ```
-Parameters
+The script contains these parameters:
-| Param Name | Description | Required?
+| Parameter name | Description | Required |
|:-|:-|:-:|
-| SubscriptionId | Subscription ID where you installed the Utility | Yes |
-| ResourceGroupName | ResourceGroup name where you installed the Utility| Yes|
-| StartAfterMinutes | Time in minutes to wait before starting removal | Yes (Mutually exclusive with param '-StartImmediately')|
-| StartImmediately | Run removal phase immediately | Yes (Mutually exclusive with param '-StartAfterMinutes') |
-| EnableRemovalPhase | Enable removal phase | Yes (Mutually exclusive with param '-DisableRemovalPhase')|
-| RemovalCondition | MMA extension should be removed when:</br>ChgeckForAMAPresence AMA extension is present </br> SkipAMAPresenceCheck in all cases whether AMA extension is present or not) | No |
-| DisableRemovalPhase | Disable removal phase | Yes (Mutually exclusive with param '-EnableRemovalPhase')|
-
-**Know issues**
-- Removal of MMA agent in Virtual Machine Scale Set(VMSS) where orchestration mode is 'Uniform' depend on its upgrade policy. We recommend that you manually upgrade the instance if the policy is set to 'Manual.' -- If you get the error message, "The deployment MMARemovalenvironmentsetup-20233029T103026 failed with error(s). Showing 1 out of 1 error(s). Status Message: (Code:BadRequest) - We observed intermittent issue with App service deployment." Rerun the installation command with same parameter values. Command should proceed without any error in next attempt. -- Extension removal progress tile on Monitoring dashboards shows some failures - Progress tile groups failures by error code, some known error code, reason and next steps to resolve are listed: -
-| Error Code | Description/Reason | Next steps
-|:-|:-|:-|
-| AuthorizationFailed | Remediation Identity doesn't have permission to perform 'Extension delete' operation on VM(s), VMSS, Azure Arc Servers.| Grant 'VM Contributor' role to Remediation Identity on VM(s) and Grant 'Azure Arc ScVmm VM Contributor' role to Remediation Identity on VMSS and rerun removal phase.|
-| OperationNotAllowed | Resource(s) are in a de-allocated state or a Lock is applied on the resource(s) | Turn on failed resource(s) and/or Remove Lock and rerun removal phase |
-
-The utility collects error details in the Log Analytics workspace that was used during set up. Go to Log Analytics workspace > Select Logs and run following query:
+| `SubscriptionId` | ID of the subscription where you installed the utility. | Yes |
+| `ResourceGroupName` | Name of the resource group where you installed the utility.| Yes|
+| `StartAfterMinutes` | Time, in minutes, to wait before starting removal. | Yes (mutually exclusive with `-StartImmediately`)|
+| `StartImmediately` | Indicator to run the removal phase immediately. | Yes (mutually exclusive with `-StartAfterMinutes`) |
+| `EnableRemovalPhase` | Indicator to enable the removal phase. | Yes (mutually exclusive with `-DisableRemovalPhase`)|
+| `RemovalCondition` | Condition for removing the MMA extension: `CheckForAMAPresence` removes it only when the AMA extension is present; `SkipAMAPresenceCheck` removes it in all cases, whether the AMA extension is present or not. | No |
+| `DisableRemovalPhase` | Indicator of disabling the removal phase. | Yes (mutually exclusive with `-EnableRemovalPhase`)|
+
+Here are known issues with removal:
+
+- Removal of the MMA in a virtual machine scale set where the orchestration mode is `Uniform` depends on its upgrade policy. We recommend that you manually upgrade the instance if the policy is set to `Manual`.
+- If you get the following error message, rerun the installation command with the same parameter values:
+
+ `The deployment MMARemovalenvironmentsetup-20233029T103026 failed with error(s). Showing 1 out of 1 error(s). Status Message: (Code:BadRequest) - We observed intermittent issue with App service deployment.`
+
+ The command should proceed without any error in the next attempt.
+- If the progress tile for extension removal shows failures on monitoring dashboards, use the following information to resolve them (a role-assignment sketch follows the table):
+
+ | Error code | Description/reason | Next steps|
+ |:-|:-|:-|
+ | `AuthorizationFailed` | The remediation identity doesn't have permission to perform an extension deletion operation on VMs, virtual machine scale sets, or Azure Arc servers.| Grant the **VM Contributor** role to the remediation identity on VMs. Grant the **Azure Arc ScVmm VM Contributor** role to the remediation identity on virtual machine scale sets. Then rerun the removal phase.|
+ | `OperationNotAllowed` | Resources are in a deallocated state, or a lock is applied on the resources. | Turn on failed resources and/or remove the lock, and then rerun the removal phase. |
+
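As a hedged sketch (assuming the `Az.Resources` module and placeholder values for the identity object ID and scope), the following commands grant the roles named in the `AuthorizationFailed` row before you rerun the removal phase:

``` PowerShell
# Sketch only: the object ID and scope are placeholders; adjust the scope to your target resources.
New-AzRoleAssignment -ObjectId "<RemediationIdentityObjectId>" `
    -RoleDefinitionName "Virtual Machine Contributor" `
    -Scope "/subscriptions/<TargetSubId>"

New-AzRoleAssignment -ObjectId "<RemediationIdentityObjectId>" `
    -RoleDefinitionName "Azure Arc ScVmm VM Contributor" `
    -Scope "/subscriptions/<TargetSubId>"
```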
+The utility collects error details in the Log Analytics workspace that you used during setup. Go to the Log Analytics workspace, select **Logs**, and then run the following query:
``` KQL let timeago = timespan(7d);
InventoryProcessingStatus_CL
| project ResourceId, ProcessingStatus_s, ProcessErrorDetails_s ```
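If you prefer to run the query from PowerShell rather than the portal, the following sketch (assuming the `Az.OperationalInsights` module and a placeholder workspace ID) runs the same query; extend it with any additional filters you need.

``` PowerShell
# Sketch only: <LogAnalyticsWorkspaceId> is the workspace's customer ID (GUID).
$query = @"
let timeago = timespan(7d);
InventoryProcessingStatus_CL
| project ResourceId, ProcessingStatus_s, ProcessErrorDetails_s
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId "<LogAnalyticsWorkspaceId>" -Query $query
$result.Results | Format-Table
```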
-## [CleanUp](#tab/CleanUp)
+### [Cleanup](#tab/cleanup)
-The utility creates resources that you should clean up once you have remove MMA from your infrastructure. Execute the following steps to clean up.
- 1. Go to the folder containing the deployment package and load the cleanup script
+The MMA Discovery and Removal Utility creates resources that you should clean up after you remove the MMA from your infrastructure. Complete the following steps to clean up:
- ``` PowerShell
- CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
- . ".\MMARemovalUtilityCleanUpScript.ps1"
-```
+1. Go to the folder that contains the deployment package and load the cleanup script:
-2. Run the cleanup script
+ ``` PowerShell
+ CD "<LocalExtractedFolderPath>\AzTSMMARemovalUtilityDeploymentFiles"
+ . ".\MMARemovalUtilityCleanUpScript.ps1"
+ ```
-``` PowerShell
-Remove-AzTSMMARemovalUtilitySolutionResources `
- -SubscriptionId <HostingSubId> `
- -ResourceGroupName <HostingRGName> `
- [-DeleteResourceGroup `]
- -KeepInventoryAndProcessLogs
-```
+2. Run the cleanup script:
+
+ ``` PowerShell
+ Remove-AzTSMMARemovalUtilitySolutionResources `
+ -SubscriptionId <HostingSubId> `
+ -ResourceGroupName <HostingRGName> `
+ [-DeleteResourceGroup `]
+ -KeepInventoryAndProcessLogs
+ ```
-Parameters
+The script contains these parameters:
-|Param Name|Description|Required|
+|Parameter name|Description|Required|
|:-|:-|:-:|
-|SubscriptionId| Subscription ID that the Utility is deleting| Yes|
-|ResourceGroupName| ResourceGroup name, which is deleting| Yes|
-|DeleteResourceGroup| Boolean flag to delete entire resource group| Yes|
-|KeepInventoryAndProcessLogs| Boolean flag to exclude log analytics workspace and application insights. Can't be used with DeleteResourceGroup.| No|
+|`SubscriptionId`| ID of the subscription where the utility's resources are being deleted.| Yes|
+|`ResourceGroupName`| Name of the resource group that you're deleting.| Yes|
+|`DeleteResourceGroup`| Boolean flag to delete an entire resource group.| Yes|
+|`KeepInventoryAndProcessLogs`| Boolean flag to exclude the Log Analytics workspace and Application Insights. You can't use it with `DeleteResourceGroup`.| No|
++
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
This section lists all supported platforms and frameworks.
* [Azure Virtual Machines and Azure Virtual Machine Scale Sets](./azure-vm-vmss-apps.md) * [Azure App Service](./azure-web-apps.md) * [Azure Functions](../../azure-functions/functions-monitoring.md)
-* [Azure Spring Apps](../../spring-apps/how-to-application-insights.md)
+* [Azure Spring Apps](../../spring-apps/enterprise/how-to-application-insights.md)
* [Azure Cloud Services](./azure-web-apps-net-core.md), including both web and worker roles #### Logging frameworks
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
For more information, see [Monitoring Azure Functions with Azure Monitor Applica
## Azure Spring Apps
-For more information, see [Use Application Insights Java In-Process Agent in Azure Spring Apps](../../spring-apps/how-to-application-insights.md).
+For more information, see [Use Application Insights Java In-Process Agent in Azure Spring Apps](../../spring-apps/enterprise/how-to-application-insights.md).
## Containers
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
Autoscale supports the following services.
| Azure Stream Analytics | [Autoscale streaming units (preview)](../../stream-analytics/stream-analytics-autoscale.md) | | Azure SignalR Service (Premium tier) | [Automatically scale units of an Azure SignalR service](../../azure-signalr/signalr-howto-scale-autoscale.md) | | Azure Machine Learning workspace | [Autoscale an online endpoint](../../machine-learning/how-to-autoscale-endpoints.md) |
-| Azure Spring Apps | [Set up autoscale for applications](../../spring-apps/how-to-setup-autoscale.md) |
+| Azure Spring Apps | [Set up autoscale for applications](../../spring-apps/enterprise/how-to-setup-autoscale.md) |
| Azure Media Services | [Autoscaling in Media Services](/azure/media-services/latest/release-notes#autoscaling) | | Azure Service Bus | [Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md) | | Azure Logic Apps - Integration service environment (ISE) | [Add ISE capacity](../../logic-apps/ise-manage-integration-service-environment.md#add-ise-capacity) |
azure-monitor Container Insights Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-custom-metrics.md
Container insights collects [custom metrics](../essentials/metrics-custom-overvi
- Pin performance charts in Azure portal dashboards. - Take advantage of [metric alerts](../alerts/alerts-types.md#metric-alerts).
-> [!NOTE]
-> This article describes collection of custom metrics from Kubernetes clusters. You can also collect Prometheus metrics as described in [Collect Prometheus metrics with Container insights](container-insights-prometheus.md).
+> [!IMPORTANT]
+> These metrics will no longer be collected starting May 31, 2024 as described in [Container insights recommended alerts (custom metrics) (preview) retirement moving up to 31 May 2024](https://azure.microsoft.com/updates/container-insights-recommended-alerts-custom-metrics-preview-retirement-moving-up-to-31-may-2024). See [Enable Prometheus and Grafana](kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) to enable collection of Prometheus metrics.
## Use custom metrics Custom metrics collected by Container insights can be accessed with the same methods as custom metrics collected from other data sources, including [metrics explorer](../essentials/metrics-getting-started.md) and [metrics alerts](../alerts/alerts-types.md#metric-alerts).
azure-monitor Container Insights Data Collection Dcr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-data-collection-dcr.md
Title: Configure Container insights data collection using data collection rule description: Describes how you can configure cost optimization and other data collection for Container insights using a data collection rule. + Last updated 12/19/2023
resources
## Next steps -- See [Configure data collection in Container insights using ConfigMap](container-insights-data-collection-configmap.md) to configure data collection using ConfigMap instead of the DCR.
+- See [Configure data collection in Container insights using ConfigMap](container-insights-data-collection-configmap.md) to configure data collection using ConfigMap instead of the DCR.
azure-monitor Container Insights Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-reports.md
The **event anomaly** analyzer groups similar events together for easier analysi
### Container optimizer The **container optimizer** analyzer shows containers with excessive cpu and memory limits and requests. Each tile can represent multiple containers with the same spec. For example, if a deployment creates 100 identical pods each with a container C1 and C2, then there will be a single tile for all C1 containers and a single tile for all C2 containers. Containers with set limits and requests are color-coded in a gradient from green to red.
+> [!IMPORTANT]
+> This view doesn't include containers in the **kube-system** namespace and doesn't support Windows Server nodes.
+>
+ The number on each tile represents how far the container limits/requests are from the optimal/suggested value. The closer the number is to 0 the better it is. Each tile has a color to indicate the following: - green: well set limits and requests
azure-monitor Kubernetes Monitoring Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-disable.md
Title: Disable monitoring of your Kubernetes cluster
description: Describes how to remove Container insights and scraping of Prometheus metrics from your Kubernetes cluster. Last updated 01/23/2024-+ ms.devlang: azurecli
azure-monitor Kubernetes Monitoring Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md
Title: Enable monitoring for Azure Kubernetes Service (AKS) cluster
description: Learn how to enable Container insights and Managed Prometheus on an Azure Kubernetes Service (AKS) cluster. Last updated 11/14/2023-+
azure-monitor Prometheus Metrics Scrape Configuration Minimal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration-minimal.md
Following targets are **enabled/ON** by default - meaning you don't have to prov
- `nodeexporter` (`job=node`) - `kubelet` (`job=kubelet`) - `kube-state-metrics` (`job=kube-state-metrics`)
+- `controlplane-apiserver` (`job=controlplane-apiserver`)
+- `controlplane-etcd` (`job=controlplane-etcd`)
Following targets are available to scrape, but scraping isn't enabled (**disabled/OFF**) by default - meaning you don't have to provide any scrape job configuration for scraping these targets but they're disabled/OFF by default and you need to turn ON/enable scraping for these targets using [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under `default-scrape-settings-enabled` section - `core-dns` (`job=kube-dns`) - `kube-proxy` (`job=kube-proxy`) - `api-server` (`job=kube-apiserver`)
+- `controlplane-cluster-autoscaler` (`job=controlplane-cluster-autoscaler`)
+- `controlplane-kube-scheduler` (`job=controlplane-kube-scheduler`)
+- `controlplane-kube-controller-manager` (`job=controlplane-kube-controller-manager`)
> [!NOTE] > The default scrape frequency for all default targets and scrapes is `30 seconds`. You can override it per target using the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under `default-targets-scrape-interval-settings` section.
+> The control plane targets have a fixed scrape interval of `30 seconds` and can't be overridden.
> You can read more about four different configmaps used by metrics addon [here](prometheus-metrics-scrape-configuration.md) ## Configuration setting
The following metrics are allow-listed with `minimalingestionprofile=true` for d
- `node_time_seconds` - `node_uname_info"`
+**controlplane-apiserver**<br>
+- `apiserver_request_total`
+- `apiserver_cache_list_fetched_objects_total`
+- `apiserver_cache_list_returned_objects_total`
+- `apiserver_flowcontrol_demand_seats_average`
+- `apiserver_flowcontrol_current_limit_seats`
+- `apiserver_request_sli_duration_seconds_bucket`
+- `apiserver_request_sli_duration_seconds_count`
+- `apiserver_request_sli_duration_seconds_sum`
+- `process_start_time_seconds`
+- `apiserver_request_duration_seconds_bucket`
+- `apiserver_request_duration_seconds_count`
+- `apiserver_request_duration_seconds_sum`
+- `apiserver_storage_list_fetched_objects_total`
+- `apiserver_storage_list_returned_objects_total`
+- `apiserver_current_inflight_requests`
+
+**controlplane-etcd**<br>
+- `etcd_server_has_leader`
+- `rest_client_requests_total`
+- `etcd_mvcc_db_total_size_in_bytes`
+- `etcd_mvcc_db_total_size_in_use_in_bytes`
+- `etcd_server_slow_read_indexes_total`
+- `etcd_server_slow_apply_total`
+- `etcd_network_client_grpc_sent_bytes_total`
+- `etcd_server_heartbeat_send_failures_total`
+ ### Minimal ingestion for default OFF targets The following are metrics that are allow-listed with `minimalingestionprofile=true` for default OFF targets. These metrics are not collected by default as these targets are not scraped by default (due to being OFF by default). You can turn ON scraping for these targets using `default-scrape-settings-enabled.<target-name>=true`' using [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under `default-scrape-settings-enabled` section.
The following are metrics that are allow-listed with `minimalingestionprofile=tr
- `process_cpu_seconds_total` - `go_goroutines`
+**controlplane-cluster-autoscaler**<br>
+- `rest_client_requests_total`
+- `cluster_autoscaler_last_activity`
+- `cluster_autoscaler_cluster_safe_to_autoscale`
+- `cluster_autoscaler_scale_down_in_cooldown`
+- `cluster_autoscaler_scaled_up_nodes_total`
+- `cluster_autoscaler_unneeded_nodes_count`
+- `cluster_autoscaler_unschedulable_pods_count`
+- `cluster_autoscaler_nodes_count`
+- `cloudprovider_azure_api_request_errors`
+- `cloudprovider_azure_api_request_duration_seconds_bucket`
+- `cloudprovider_azure_api_request_duration_seconds_count`
+
+**controlplane-kube-scheduler**<br>
+- `scheduler_pending_pods`
+- `scheduler_unschedulable_pods`
+- `scheduler_pod_scheduling_attempts`
+- `scheduler_queue_incoming_pods_total`
+- `scheduler_preemption_attempts_total`
+- `scheduler_preemption_victims`
+- `scheduler_scheduling_attempt_duration_seconds`
+- `scheduler_schedule_attempts_total`
+- `scheduler_pod_scheduling_duration_seconds`
+
+**controlplane-kube-controller-manager**<br>
+- `rest_client_request_duration_seconds`
+- `rest_client_requests_total`
+- `workqueue_depth`
+ ## Next steps - [Learn more about customizing Prometheus metric scraping in Container insights](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Scrape Default https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-default.md
Following targets are **enabled/ON** by default - meaning you don't have to prov
- `nodeexporter` (`job=node`) - `kubelet` (`job=kubelet`) - `kube-state-metrics` (`job=kube-state-metrics`)
+- `controlplane-apiserver` (`job=controlplane-apiserver`)
+- `controlplane-etcd` (`job=controlplane-etcd`)
## Metrics collected from default targets
The following metrics are collected by default from each default target. All oth
- `kube_resource_labels` (ex - kube_pod_labels, kube_deployment_labels) - `kube_resource_annotations` (ex - kube_pod_annotations, kube_deployment_annotations)
+ **controlplane-apiserver (job=controlplane-apiserver)**<br>
+ - `apiserver_request_total`
+ - `apiserver_cache_list_fetched_objects_total`
+ - `apiserver_cache_list_returned_objects_total`
+ - `apiserver_flowcontrol_demand_seats_average`
+ - `apiserver_flowcontrol_current_limit_seats`
+ - `apiserver_request_sli_duration_seconds_bucket`
+ - `apiserver_request_sli_duration_seconds_count`
+ - `apiserver_request_sli_duration_seconds_sum`
+ - `process_start_time_seconds`
+ - `apiserver_request_duration_seconds_bucket`
+ - `apiserver_request_duration_seconds_count`
+ - `apiserver_request_duration_seconds_sum`
+ - `apiserver_storage_list_fetched_objects_total`
+ - `apiserver_storage_list_returned_objects_total`
+ - `apiserver_current_inflight_requests`
+
+ **controlplane-etcd (job=controlplane-etcd)**<br>
+ - `etcd_server_has_leader`
+ - `rest_client_requests_total`
+ - `etcd_mvcc_db_total_size_in_bytes`
+ - `etcd_mvcc_db_total_size_in_use_in_bytes`
+ - `etcd_server_slow_read_indexes_total`
+ - `etcd_server_slow_apply_total`
+ - `etcd_network_client_grpc_sent_bytes_total`
+ - `etcd_server_heartbeat_send_failures_total`
+ ## Default targets scraped for Windows The following Windows targets are configured to scrape but are **disabled/OFF** by default - you don't have to provide any scrape job configuration for them, but you do need to turn ON/enable scraping for these targets using the [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) under the `default-scrape-settings-enabled` section
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
If the data export rule includes an unsupported table, the configuration will su
| AACAudit | | | AACHttpRequest | | | AADB2CRequestLogs | |
+| AADCustomSecurityAttributeAuditLogs | |
| AADDomainServicesAccountLogon | | | AADDomainServicesAccountManagement | | | AADDomainServicesDirectoryServiceAccess | |
If the data export rule includes an unsupported table, the configuration will su
| ACSBillingUsage | | | ACSCallAutomationIncomingOperations | | | ACSCallAutomationMediaSummary | |
+| ACSCallClientMediaStatsTimeSeries | |
+| ACSCallClientOperations | |
+| ACSCallClosedCaptionsSummary | |
| ACSCallDiagnostics | | | ACSCallRecordingIncomingOperations | | | ACSCallRecordingSummary | |
If the data export rule includes an unsupported table, the configuration will su
| ACSEmailSendMailOperational | | | ACSEmailStatusUpdateOperational | | | ACSEmailUserEngagementOperational | |
+| ACSJobRouterIncomingOperations | |
| ACSNetworkTraversalDiagnostics | | | ACSNetworkTraversalIncomingOperations | | | ACSRoomsIncomingOperations | |
If the data export rule includes an unsupported table, the configuration will su
| AegDataPlaneRequests | | | AegDeliveryFailureLogs | | | AegPublishFailureLogs | |
+| AEWAssignmentBlobLogs | |
| AEWAuditLogs | | | AEWComputePipelinesLogs | |
+| AFSAuditLogs | |
+| AGCAccessLogs | |
| AgriFoodApplicationAuditLogs | | | AgriFoodFarmManagementLogs | | | AgriFoodFarmOperationLogs | |
If the data export rule includes an unsupported table, the configuration will su
| AgriFoodSensorManagementLogs | | | AgriFoodWeatherLogs | | | AGSGrafanaLoginEvents | |
+| AGWAccessLogs | |
+| AGWFirewallLogs | |
+| AGWPerformanceLogs | |
| AHDSDicomAuditLogs | | | AHDSDicomDiagnosticLogs | | | AHDSMedTechDiagnosticLogs | |
If the data export rule includes an unsupported table, the configuration will su
| AMSStreamingEndpointRequests | | | ANFFileAccess | | | Anomalies | |
+| AOIDatabaseQuery | |
+| AOIDigestion | |
+| AOIStorage | |
| ApiManagementGatewayLogs | | | AppAvailabilityResults | | | AppBrowserTimings | |
If the data export rule includes an unsupported table, the configuration will su
| AppServiceAntivirusScanAuditLogs | | | AppServiceAppLogs | | | AppServiceAuditLogs | |
+| AppServiceAuthenticationLogs | |
| AppServiceConsoleLogs | | | AppServiceEnvironmentPlatformLogs | | | AppServiceFileAuditLogs | |
If the data export rule includes an unsupported table, the configuration will su
| AppServiceServerlessSecurityPluginData | | | AppSystemEvents | | | AppTraces | |
+| ArcK8sAudit | |
+| ArcK8sAuditAdmin | |
+| ArcK8sControlPlane | |
| ASCAuditLogs | | | ASCDeviceEvents | | | ASimAuditEventLogs | | | ASimAuthenticationEventLogs | |
+| ASimDhcpEventLogs | |
| ASimDnsActivityLogs | |
-| ASimNetworkSessionLogs | |
+| ASimFileEventLogs | |
| ASimNetworkSessionLogs | | | ASimProcessEventLogs | |
+| ASimRegistryEventLogs | |
+| ASimUserManagementActivityLogs | |
| ASimWebSessionLogs | | | ASRJobs | | | ASRReplicatedItems | |
If the data export rule includes an unsupported table, the configuration will su
| AuditLogs | | | AutoscaleEvaluationsLog | | | AutoscaleScaleActionsLog | |
+| AVNMConnectivityConfigurationChange | |
+| AVNMIPAMPoolAllocationChange | |
| AVNMNetworkGroupMembershipChange | |
+| AVNMRuleCollectionChange | |
| AVSSyslog | | | AWSCloudTrail | | | AWSCloudWatch | |
If the data export rule includes an unsupported table, the configuration will su
| AZMSOperationalLogs | | | AZMSRunTimeAuditLogs | | | AZMSVnetConnectionEvents | |
+| AzureActivity | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. |
| AzureAssessmentRecommendation | | | AzureAttestationDiagnostics | |
+| AzureBackupOperations | |
| AzureDevOpsAuditing | |
+| AzureDiagnostics | |
| AzureLoadTestingOperation | |
+| AzureMetricsV2 | |
| BehaviorAnalytics | | | CassandraAudit | | | CassandraLogs | |
If the data export rule includes an unsupported table, the configuration will su
| ConfigurationData | Partial support. Some of the data is ingested through internal services that aren't supported in export. Currently, this portion is missing in export. | | ContainerAppConsoleLogs | | | ContainerAppSystemLogs | |
+| ContainerEvent | |
| ContainerImageInventory | |
+| ContainerInstanceLog | |
| ContainerInventory | | | ContainerLog | | | ContainerLogV2 | |
If the data export rule includes an unsupported table, the configuration will su
| DatabricksUnityCatalog | | | DatabricksWebTerminal | | | DatabricksWorkspace | |
+| DatabricksWorkspaceLogs | |
| DataTransferOperations | |
+| DataverseActivity | |
+| DCRLogErrors | |
+| DCRLogTroubleshooting | |
+| DevCenterBillingEventLogs | |
| DevCenterDiagnosticLogs | |
+| DevCenterResourceOperationLogs | |
| DeviceEvents | | | DeviceFileCertificateInfo | | | DeviceFileEvents | |
If the data export rule includes an unsupported table, the configuration will su
| DeviceTvmSoftwareVulnerabilitiesKB | | | DnsEvents | | | DnsInventory | |
+| DNSQueryLogs | |
| DSMAzureBlobStorageLogs | | | DSMDataClassificationLogs | | | DSMDataLabelingLogs | |
-| DynamicEventCollection | |
| Dynamics365Activity | | | DynamicSummary | |
+| EGNFailedMqttConnections | |
+| EGNFailedMqttPublishedMessages | |
+| EGNFailedMqttSubscriptions | |
+| EGNMqttDisconnections | |
+| EGNSuccessfulMqttConnections | |
| EmailAttachmentInfo | | | EmailEvents | | | EmailPostDeliveryEvents | | | EmailUrlInfo | | | EnrichedMicrosoft365AuditLogs | |
+| ETWEvent | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. |
| Event | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. | | ExchangeAssessmentRecommendation | | | ExchangeOnlineAssessmentRecommendation | | | FailedIngestion | | | FunctionAppLogs | | | GCPAuditLogs | |
+| GoogleCloudSCC | |
| HDInsightAmbariClusterAlerts | | | HDInsightAmbariSystemMetrics | | | HDInsightGatewayAuditLogs | |
If the data export rule includes an unsupported table, the configuration will su
| KubePVInventory | | | KubeServices | | | LAQueryLogs | |
+| LASummaryLogs | |
+| LinuxAuditLog | |
| LogicAppWorkflowRuntime | | | McasShadowItReporting | | | MCCEventLogs | |
If the data export rule includes an unsupported table, the configuration will su
| MicrosoftAzureBastionAuditLogs | | | MicrosoftDataShareReceivedSnapshotLog | | | MicrosoftDataShareSentSnapshotLog | |
+| MicrosoftDataShareShareLog | |
| MicrosoftGraphActivityLogs | | | MicrosoftHealthcareApisAuditLogs | | | MicrosoftPurviewInformationProtection | |
+| MNFDeviceUpdates | |
+| MNFSystemStateMessageUpdates | |
+| NCBMBreakGlassAuditLogs | |
+| NCBMSecurityDefenderLogs | |
+| NCBMSecurityLogs | |
+| NCBMSystemLogs | |
+| NCCKubernetesLogs | |
+| NCCVMOrchestrationLogs | |
+| NCSStorageAlerts | |
+| NCSStorageLogs | |
| NetworkAccessTraffic | |
+| NetworkMonitoring | |
+| NGXOperationLogs | |
| NSPAccessLogs | | | NTAIpDetails | | | NTANetAnalytics | |
If the data export rule includes an unsupported table, the configuration will su
| PowerBIDatasetsTenant | | | PowerBIDatasetsWorkspace | | | PowerBIReportUsageWorkspace | |
+| PowerPlatformAdminActivity | |
| PowerPlatformConnectorActivity | | | PowerPlatformDlpActivity | | | ProjectActivity | |
If the data export rule includes an unsupported table, the configuration will su
| PurviewScanStatusLogs | | | PurviewSecurityLogs | | | REDConnectionEvents | |
+| RemoteNetworkHealthLogs | |
| ResourceManagementPublicAccessLogs | | | SCCMAssessmentRecommendation | | | SCOMAssessmentRecommendation | | | SecureScoreControls | | | SecureScores | | | SecurityAlert | |
+| SecurityAttackPathData | |
| SecurityBaseline | | | SecurityBaselineSummary | | | SecurityDetection | |
If the data export rule includes an unsupported table, the configuration will su
| SecurityRegulatoryCompliance | | | SentinelAudit | | | SentinelHealth | |
+| ServiceFabricOperationalEvent | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. |
+| ServiceFabricReliableActorEvent | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. |
+| ServiceFabricReliableServiceEvent | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. |
+| SfBAssessmentRecommendation | |
| SharePointOnlineAssessmentRecommendation | | | SignalRServiceDiagnosticLogs | | | SigninLogs | |
If the data export rule includes an unsupported table, the configuration will su
| Usage | | | UserAccessAnalytics | | | UserPeerAnalytics | |
+| VCoreMongoRequests | |
| VIAudit | | | VIIndexing | |
+| VMConnection | Partial support. Some of the data is ingested through internal services that aren't supported in export. Currently, this portion is missing in export. |
| W3CIISLog | Partial support. Data arriving from the Log Analytics agent or Azure Monitor Agent is fully supported in export. Data arriving via the Diagnostics extension agent is collected through storage. This path isn't supported in export. | | WaaSDeploymentStatus | | | WaaSInsiderStatus | |
If the data export rule includes an unsupported table, the configuration will su
| WebPubSubConnectivity | | | WebPubSubHttpRequest | | | WebPubSubMessaging | |
+| Windows365AuditLogs | |
| WindowsClientAssessmentRecommendation | | | WindowsEvent | | | WindowsFirewall | |
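For reference, an export rule that sends some of the supported tables listed above to a destination can be declared in Bicep. The following is a minimal sketch, not the article's own template; the workspace name, rule name, and destination resource ID are placeholders, and the resource type casing and API versions are assumptions:

```bicep
@description('Resource ID of the destination storage account or event hub namespace (placeholder).')
param destinationResourceId string

resource workspace 'Microsoft.OperationalInsights/workspaces@2022-10-01' existing = {
  name: 'myWorkspace' // placeholder workspace name
}

resource exportRule 'Microsoft.OperationalInsights/workspaces/dataExports@2020-08-01' = {
  parent: workspace
  name: 'export-to-storage' // placeholder rule name
  properties: {
    destination: {
      resourceId: destinationResourceId
    }
    tableNames: [
      'AzureActivity'
      'SigninLogs'
    ]
    enable: true
  }
}
```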
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-monitor Workbooks Interactive Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-interactive-reports.md
Title: Create interactive reports with Azure Monitor Workbooks description: This article explains how to create interactive reports in Azure Workbooks. + Last updated 01/08/2024
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
-+ Last updated 10/02/2023
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Standard storage with cool access is supported for the following regions:
* East Asia * East US 2 * France Central
+* Germany West Central
* North Central US * North Europe * Southeast Asia * Switzerland North * Switzerland West
+* Sweden Central
* UAE North
+* UK South
* US Gov Arizona * US Gov Texas * US Gov Virginia
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-resource-manager Bicep Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-string.md
Title: Bicep functions - string
description: Describes the functions to use in a Bicep file to work with strings. Previously updated : 07/07/2023 Last updated : 01/31/2024 # String functions for Bicep
The output from the preceding example with the default values is:
`first(arg1)`
-Returns the first character of the string, or first element of the array.
+Returns the first character of the string, or the first element of the array. For an empty string, the function returns an empty string. For an empty array, it returns `null`.
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
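A minimal sketch of the behavior described above; the output names are illustrative:

```bicep
var letters = [
  'a'
  'b'
  'c'
]

output firstElement string = first(letters) // 'a'
output firstChar string = first('hello')    // 'h'
output emptyIn string = first('')           // '' (empty string in, empty string out)
// first([]) evaluates to null, so only assign it where a null value is acceptable.
```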
azure-resource-manager Msbuild Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/msbuild-bicep-file.md
Title: Use MSBuild to convert Bicep to JSON description: Use MSBuild to convert a Bicep file to Azure Resource Manager template (ARM template) JSON. Previously updated : 09/26/2022 Last updated : 01/31/2024
# Quickstart: Use MSBuild to convert Bicep to JSON
-This article describes how to use MSBuild to convert a Bicep file to Azure Resource Manager template (ARM template) JSON. The examples use MSBuild from the command line with C# project files that convert Bicep to JSON. The project files are examples that can be used in an MSBuild continuous integration (CI) pipeline.
+Learn how to use [MSBuild](/visualstudio/msbuild/msbuild) to convert Bicep files to Azure Resource Manager JSON templates (ARM templates). With NuGet package versions 0.23.x or later, MSBuild can also convert [Bicep parameter files](./parameter-files.md?tabs=Bicep) to [Azure Resource Manager parameter files](../templates/parameter-files.md). The examples use MSBuild from the command line with C# project files that do the conversion. These project files can serve as examples in an MSBuild continuous integration (CI) pipeline.
## Prerequisites
-You'll need the latest versions of the following software:
+You need the latest versions of the following software:
-- [Visual Studio](/visualstudio/install/install-visual-studio). The free community version will install .NET 6.0, .NET Core 3.1, .NET SDK, MSBuild, .NET Framework 4.8, NuGet package manager, and C# compiler. From the installer, select **Workloads** > **.NET desktop development**.-- [Visual Studio Code](https://code.visualstudio.com/) with the extensions for [Bicep](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) and [Azure Resource Manager (ARM) Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
+- [Visual Studio](/visualstudio/install/install-visual-studio) or [Visual Studio Code](./install.md#visual-studio-code-and-bicep-extension). The free Visual Studio Community edition installs .NET 6.0, .NET Core 3.1, .NET SDK, MSBuild, .NET Framework 4.8, NuGet package manager, and the C# compiler. From the installer, select **Workloads** > **.NET desktop development**. With Visual Studio Code, you also need the extensions for [Bicep](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep) and [Azure Resource Manager (ARM) Tools](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools).
- [PowerShell](/powershell/scripting/install/installing-powershell) or a command-line shell for your operating system.
-## MSBuild tasks and CLI packages
+## MSBuild tasks and Bicep packages
-If your existing continuous integration (CI) pipeline relies on [MSBuild](/visualstudio/msbuild/msbuild), you can use MSBuild tasks and CLI packages to convert Bicep files into ARM template JSON.
-
-The functionality relies on the following NuGet packages. The latest NuGet package versions match the latest Bicep CLI version.
+From your continuous integration (CI) pipeline, you can use MSBuild tasks and CLI packages to convert Bicep files and Bicep parameter files into JSON. The functionality relies on the following NuGet packages:
| Package Name | Description | | - |- |
-| [Azure.Bicep.MSBuild](https://www.nuget.org/packages/Azure.Bicep.MSBuild) | Cross-platform MSBuild task that invokes the Bicep CLI and compiles Bicep files into ARM template JSON. |
+| [Azure.Bicep.MSBuild](https://www.nuget.org/packages/Azure.Bicep.MSBuild) | Cross-platform MSBuild task that invokes the Bicep CLI and compiles Bicep files into ARM JSON templates. |
| [Azure.Bicep.CommandLine.win-x64](https://www.nuget.org/packages/Azure.Bicep.CommandLine.win-x64) | Bicep CLI for Windows. | | [Azure.Bicep.CommandLine.linux-x64](https://www.nuget.org/packages/Azure.Bicep.CommandLine.linux-x64) | Bicep CLI for Linux. | | [Azure.Bicep.CommandLine.osx-x64](https://www.nuget.org/packages/Azure.Bicep.CommandLine.osx-x64) | Bicep CLI for macOS. |
-### Azure.Bicep.MSBuild package
-
-When referenced in a project file's `PackageReference` the `Azure.Bicep.MSBuild` package imports the `Bicep` task that's used to invoke the Bicep CLI. The package converts its output into MSBuild errors and the `BicepCompile` target that's used to simplify the `Bicep` task's usage. By default the `BicepCompile` runs after the `Build` target and compiles all `@(Bicep)` items and places the output in `$(OutputPath)` with the same file name and the _.json_ extension.
-
-The following example compiles _one.bicep_ and _two.bicep_ files in the same directory as the project file and places the compiled _one.json_ and _two.json_ in the `$(OutputPath)` directory.
+You can find the latest version from these pages. For example:
-```xml
-<ItemGroup>
- <Bicep Include="one.bicep" />
- <Bicep Include="two.bicep" />
-</ItemGroup>
-```
-You can override the output path per file using the `OutputFile` metadata on `Bicep` items. The following example will recursively find all _main.bicep_ files and place the compiled _.json_ files in `$(OutputPath)` under a subdirectory with the same name in `$(OutputPath)`:
+The latest NuGet package versions match the latest [Bicep CLI](./bicep-cli.md) version.
-```xml
-<ItemGroup>
- <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
-</ItemGroup>
-```
+- **Azure.Bicep.MSBuild**
-More customizations can be performed by setting one of the following properties in your project:
+ When included in the project file's `PackageReference` items, the `Azure.Bicep.MSBuild` package imports the `Bicep` task that's used to invoke the Bicep CLI.
+
+ ```xml
+ <ItemGroup>
+ <PackageReference Include="Azure.Bicep.MSBuild" Version="0.24.24" />
+ ...
+ </ItemGroup>
-| Property Name | Default Value | Description |
-| - |- | - |
-| `BicepCompileAfterTargets` | `Build` | Used as `AfterTargets` value for the `BicepCompile` target. Change the value to override the scheduling of the `BicepCompile` target in your project. |
-| `BicepCompileDependsOn` | None | Used as `DependsOnTargets` value for the `BicepCompile` target. This property can be set to targets that you want `BicepCompile` target to depend on. |
-| `BicepCompileBeforeTargets` | None | Used as `BeforeTargets` value for the `BicepCompile` target. |
-| `BicepOutputPath` | `$(OutputPath)` | Set this property to override the default output path for the compiled ARM template. `OutputFile` metadata on `Bicep` items takes precedence over this value. |
+ ```
+
+ The package transforms the output of the Bicep CLI into MSBuild errors and imports the `BicepCompile` target to streamline the usage of the `Bicep` task. By default, the `BicepCompile` target runs after the `Build` target, compiling all `@(Bicep)` and `@(BicepParam)` items. It then places the output in `$(OutputPath)` with the same file name and a _.json_ extension.
-The `Azure.Bicep.MSBuild` requires the `BicepPath` property to be set either in order to function. You may set it by referencing the appropriate `Azure.Bicep.CommandLine.*` package for your operating system or manually by installing the Bicep CLI and setting the `BicepPath` environment variable or MSBuild property.
+ The following example shows the project file settings for compiling _main.bicep_ and _main.bicepparam_ files in the same directory as the project file, placing the compiled _main.json_ and _main.parameters.json_ in the `$(OutputPath)` directory.
-### Azure.Bicep.CommandLine packages
+ ```xml
+ <ItemGroup>
+ <Bicep Include="main.bicep" />
+ <BicepParam Include="main.bicepparam" />
+ </ItemGroup>
+ ```
-The `Azure.Bicep.CommandLine.*` packages are available for Windows, Linux, and macOS. When referenced in a project file via a `PackageReference`, the `Azure.Bicep.CommandLine.*` packages set the `BicepPath` property to the full path of the Bicep executable for the platform. The reference to this package may be omitted if Bicep CLI is installed through other means and the `BicepPath` environment variable or MSBuild property are set accordingly.
+ You can override the output path per file using the `OutputFile` metadata on `Bicep` items. The following example recursively finds all _main.bicep_ files and places the compiled _.json_ files in `$(OutputPath)` under a subdirectory with the same name in `$(OutputPath)`:
-### SDK-based examples
+ ```xml
+ <ItemGroup>
+ <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
+ <BicepParam Include="**\main.bicepparam" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).parameters.json" />
+ </ItemGroup>
+ ```
-The following examples contain a default Console App SDK-based C# project file that was modified to convert Bicep files into ARM templates. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+ More customizations can be performed by setting one of the following properties in the `PropertyGroup` of your project:
-The .NET Core 3.1 and .NET 6 examples are similar. But .NET 6 uses a different format for the _Program.cs_ file. For more information, see [.NET 6 C# console app template generates top-level statements](/dotnet/core/tutorials/top-level-templates).
+ | Property Name | Default Value | Description |
+ | - |- | - |
+ | `BicepCompileAfterTargets` | `Build` | Used as `AfterTargets` value for the `BicepCompile` target. Change the value to override the scheduling of the `BicepCompile` target in your project. |
+ | `BicepCompileDependsOn` | None | Used as `DependsOnTargets` value for the `BicepCompile` target. This property can be set to targets that you want `BicepCompile` target to depend on. |
+ | `BicepCompileBeforeTargets` | None | Used as `BeforeTargets` value for the `BicepCompile` target. |
+ | `BicepOutputPath` | `$(OutputPath)` | Set this property to override the default output path for the compiled ARM template. `OutputFile` metadata on `Bicep` items takes precedence over this value. |
-### .NET 6
+ For `Azure.Bicep.MSBuild` to work, the `BicepPath` environment variable or MSBuild property must be set. See the next bullet item for configuring `BicepPath`.
-In this example, the `RootNamespace` property contains a placeholder value. When you create a project file, the value matches your project's name.
+- **Azure.Bicep.CommandLine**
-```xml
-<Project Sdk="Microsoft.NET.Sdk">
- <PropertyGroup>
- <OutputType>Exe</OutputType>
- <TargetFramework>net6.0</TargetFramework>
- <RootNamespace>net6-sdk-project-name</RootNamespace>
- <ImplicitUsings>enable</ImplicitUsings>
- <Nullable>enable</Nullable>
- </PropertyGroup>
+ The `Azure.Bicep.CommandLine.*` packages are available for Windows, Linux, and macOS. The following example references the package for Windows.
+ ```xml
<ItemGroup> <PackageReference Include="Azure.Bicep.CommandLine.win-x64" Version="__LATEST_VERSION__" />
- <PackageReference Include="Azure.Bicep.MSBuild" Version="__LATEST_VERSION__" />
- </ItemGroup>
+ ...
+ </ItemGroup>
+ ```
- <ItemGroup>
- <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
- </ItemGroup>
-</Project>
-```
+ When referenced in a project file, the `Azure.Bicep.CommandLine.*` packages automatically set the `BicepPath` property to the full path of the Bicep executable for the platform. You can omit the reference to this package if the Bicep CLI is installed through other means. In that case, instead of referencing an `Azure.Bicep.CommandLine` package, either configure an environment variable called `BicepPath` or add `BicepPath` to the `PropertyGroup`, for example on Windows:
+
+ ```xml
+ <PropertyGroup>
+ <BicepPath>c:\users\john\.Azure\bin\bicep.exe</BicepPath>
+ ...
+ </PropertyGroup>
+ ```
-### .NET Core 3.1
+ On Linux:
-```xml
-<Project Sdk="Microsoft.NET.Sdk">
+ ```xml
<PropertyGroup>
- <OutputType>Exe</OutputType>
- <TargetFramework>netcoreapp3.1</TargetFramework>
+ <BicepPath>/usr/local/bin/bicep</BicepPath>
+ ...
</PropertyGroup>
+ ```
- <ItemGroup>
- <PackageReference Include="Azure.Bicep.CommandLine.win-x64" Version="__LATEST_VERSION__" />
- <PackageReference Include="Azure.Bicep.MSBuild" Version="__LATEST_VERSION__" />
- </ItemGroup>
+### Project file examples
- <ItemGroup>
- <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
- </ItemGroup>
-</Project>
-```
+The following examples show how to configure C# console application project files for converting Bicep files and Bicep parameter files to JSON. Replace `__LATEST_VERSION__` with the latest version of the [Bicep NuGet packages](https://www.nuget.org/packages/Azure.Bicep.Core/). See [MSBuild tasks and Bicep packages](#msbuild-tasks-and-bicep-packages) to find the latest version.
-### NoTargets SDK
+#### SDK-based example
-The following example contains a project that converts Bicep files into ARM templates using [Microsoft.Build.NoTargets](https://www.nuget.org/packages/Microsoft.Build.NoTargets). This SDK allows creation of standalone projects that compile only Bicep files. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+The .NET Core 3.1 and .NET 6 examples are similar. But .NET 6 uses a different format for the _Program.cs_ file. For more information, see [.NET 6 C# console app template generates top-level statements](/dotnet/core/tutorials/top-level-templates).
-For [Microsoft.Build.NoTargets](/dotnet/core/project-sdk/overview#project-files), specify a version like `Microsoft.Build.NoTargets/3.5.6`.
+<a id="net-6"></a>
+- **.NET 6**
+
+ ```xml
+ <Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <OutputType>Exe</OutputType>
+ <TargetFramework>net6.0</TargetFramework>
+ <RootNamespace>net6-sdk-project-name</RootNamespace>
+ <ImplicitUsings>enable</ImplicitUsings>
+ <Nullable>enable</Nullable>
+ </PropertyGroup>
+
+ <ItemGroup>
+ <PackageReference Include="Azure.Bicep.CommandLine.win-x64" Version="__LATEST_VERSION__" />
+ <PackageReference Include="Azure.Bicep.MSBuild" Version="__LATEST_VERSION__" />
+ </ItemGroup>
+
+ <ItemGroup>
+ <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
+ <BicepParam Include="**\main.bicepparam" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).parameters.json" />
+ </ItemGroup>
+ </Project>
+ ```
+
+ The `RootNamespace` property contains a placeholder value. When you create a project file, the value matches your project's name.
+
+<a id="net-core-31"></a>
+- **.NET Core 3.1**
+
+ ```xml
+ <Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <OutputType>Exe</OutputType>
+ <TargetFramework>netcoreapp3.1</TargetFramework>
+ </PropertyGroup>
+
+ <ItemGroup>
+ <PackageReference Include="Azure.Bicep.CommandLine.win-x64" Version="__LATEST_VERSION__" />
+ <PackageReference Include="Azure.Bicep.MSBuild" Version="__LATEST_VERSION__" />
+ </ItemGroup>
+
+ <ItemGroup>
+ <Bicep Include="**\main.bicep" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).json" />
+ <BicepParam Include="**\main.bicepparam" OutputFile="$(OutputPath)\%(RecursiveDir)\%(FileName).parameters.json" />
+ </ItemGroup>
+ </Project>
+ ```
+
+<a id="notargets-sdk"></a>
+#### NoTargets SDK example
+
+The [Microsoft.Build.NoTargets](https://github.com/microsoft/MSBuildSdks/blob/main/src/NoTargets/README.md) MSBuild project SDK lets project tree owners define projects that don't compile an assembly. With this SDK, you can create standalone projects that compile only Bicep files.
```xml
-<Project Sdk="Microsoft.Build.NoTargets/__LATEST_VERSION__">
+<Project Sdk="Microsoft.Build.NoTargets/__LATEST_MICROSOFT.BUILD.NOTARGETS.VERSION__">
<PropertyGroup> <TargetFramework>net48</TargetFramework> </PropertyGroup>
For [Microsoft.Build.NoTargets](/dotnet/core/project-sdk/overview#project-files)
<ItemGroup> <Bicep Include="main.bicep"/>
+ <BicepParam Include="main.bicepparam"/>
</ItemGroup> </Project> ```
-### Classic framework
+The latest `Microsoft.Build.NoTargets` version can be found at [https://www.nuget.org/packages/Microsoft.Build.NoTargets](https://www.nuget.org/packages/Microsoft.Build.NoTargets). For [Microsoft.Build.NoTargets](/dotnet/core/project-sdk/overview#project-files), specify a version like `Microsoft.Build.NoTargets/3.7.56`.
+
+```xml
+<Project Sdk="Microsoft.Build.NoTargets/3.7.56">
+ ...
+</Project>
+```
-The following example converts Bicep to JSON inside a classic project file that's not SDK-based. Only use the classic example if the previous examples don't work for you. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+<a id="classic-framework"></a>
+#### Classic framework example
-In this example, the `ProjectGuid`, `RootNamespace` and `AssemblyName` properties contain placeholder values. When you create a project file, a unique GUID is created, and the name values match your project's name.
+Use the classic example only if the previous examples don't work for you. In this example, the `ProjectGuid`, `RootNamespace` and `AssemblyName` properties contain placeholder values. When you create a project file, a unique GUID is created, and the name values match your project's name.
```xml <?xml version="1.0" encoding="utf-8"?>
In this example, the `ProjectGuid`, `RootNamespace` and `AssemblyName` propertie
<ItemGroup> <None Include="App.config" /> <Bicep Include="main.bicep" />
+ <BicepParam Include="main.bicepparam" />
</ItemGroup> <ItemGroup> <PackageReference Include="Azure.Bicep.CommandLine.win-x64">
In this example, the `ProjectGuid`, `RootNamespace` and `AssemblyName` propertie
## Convert Bicep to JSON
-The following examples show how MSBuild converts a Bicep file to JSON. Follow the instructions to create one of the project files for .NET, .NET Core 3.1, or Classic framework. Then continue to create the Bicep file and run MSBuild.
+These examples show how to use MSBuild to convert a Bicep file and a Bicep parameter file to JSON. Start by creating a project file for .NET, .NET Core 3.1, or the Classic framework. Then create the Bicep file and the Bicep parameter file before running MSBuild.
+
+### Create project
# [.NET](#tab/dotnet) Build a project in .NET with the dotnet CLI.
-1. Open Visual Studio code and select **Terminal** > **New Terminal** to start a PowerShell session.
-1. Create a directory named _bicep-msbuild-demo_ and go to the directory. This example uses _C:\bicep-msbuild-demo_.
+1. Open Visual Studio Code and select **Terminal** > **New Terminal** to start a PowerShell session.
+1. Create a directory named _msBuildDemo_ and go to the directory. This example uses _C:\msBuildDemo_.
```powershell
- New-Item -Name .\bicep-msbuild-demo -ItemType Directory
- Set-Location -Path .\bicep-msbuild-demo
+ Set-Location -Path C:\
+ New-Item -Name .\msBuildDemo -ItemType Directory
+ Set-Location -Path .\msBuildDemo
``` 1. Run the `dotnet` command to create a new console with the .NET 6 framework.
Build a project in .NET with the dotnet CLI.
dotnet new console --framework net6.0 ```
- The project file uses the same name as your directory, _bicep-msbuild-demo.csproj_. For more information about how to create a console application from Visual Studio Code, see the [tutorial](/dotnet/core/tutorials/with-visual-studio-code).
+ The command creates a project file using the same name as your directory, _msBuildDemo.csproj_. For more information about how to create a console application from Visual Studio Code, see the [tutorial](/dotnet/core/tutorials/with-visual-studio-code).
-1. Replace the contents of _bicep-msbuild-demo.csproj_ with the [.NET 6](#net-6) or [NoTargets SDK](#notargets-sdk) examples.
-1. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
+1. Open _msBuildDemo.csproj_ in an editor and replace the content with the [.NET 6](#net-6) or [NoTargets SDK](#notargets-sdk) example. Then replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages.
1. Save the file. # [.NET Core 3.1](#tab/netcore31) Build a project in .NET Core 3.1 using the dotnet CLI.
-1. Open Visual Studio code and select **Terminal** > **New Terminal** to start a PowerShell session.
-1. Create a directory named _bicep-msbuild-demo_ and go to the directory. This example uses _C:\bicep-msbuild-demo_.
+1. Open Visual Studio Code and select **Terminal** > **New Terminal** to start a PowerShell session.
+1. Create a directory named _msBuildDemo_ and go to the directory. This example uses _C:\msBuildDemo_.
```powershell
- New-Item -Name .\bicep-msbuild-demo -ItemType Directory
- Set-Location -Path .\bicep-msbuild-demo
+ Set-Location -Path C:\
+ New-Item -Name .\msBuildDemo -ItemType Directory
+ Set-Location -Path .\msBuildDemo
```
-1. Run the `dotnet` command to create a new console with the .NET 6 framework.
+1. Run the `dotnet` command to create a new console app with the .NET Core 3.1 framework.
```powershell dotnet new console --framework netcoreapp3.1 ```
- The project file is named the same as your directory, _bicep-msbuild-demo.csproj_. For more information about how to create a console application from Visual Studio Code, see the [tutorial](/dotnet/core/tutorials/with-visual-studio-code).
+ The project file is named the same as your directory, _msBuildDemo.csproj_. For more information about how to create a console application from Visual Studio Code, see the [tutorial](/dotnet/core/tutorials/with-visual-studio-code).
-1. Replace the contents of _bicep-msbuild-demo.csproj_ with the [.NET Core 3.1](#net-core-31) or [NoTargets SDK](#notargets-sdk) examples.
+1. Replace the contents of _msBuildDemo.csproj_ with the [.NET Core 3.1](#net-core-31) or [NoTargets SDK](#notargets-sdk) example.
1. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages. 1. Save the file.
To create the project file and dependencies, use Visual Studio.
1. Open Visual Studio. 1. Select **Create a new project**. 1. For the C# language, select **Console App (.NET Framework)** and select **Next**.
-1. Enter a project name. For this example, use _bicep-msbuild-demo_ for the project.
+1. Enter a project name. For this example, use _msBuildDemo_ for the project.
1. Select **Place solution and project in same directory**. 1. Select **.NET Framework 4.8**. 1. Select **Create**.
-If you know how to unload a project and reload a project, you can edit _bicep-msbuild-demo.csproj_ in Visual Studio.
+If you know how to unload and reload a project, you can edit _msBuildDemo.csproj_ in Visual Studio. Otherwise, edit the project file in Visual Studio Code.
-Otherwise, edit the project file in Visual Studio Code.
-
-1. Open Visual Studio Code and go to the _bicep-msbuild-demo_ directory.
-1. Replace _bicep-msbuild-demo.csproj_ with the [Classic framework](#classic-framework) code sample.
+1. Open Visual Studio Code and go to the _msBuildDemo_ directory.
+1. Replace _msBuildDemo.csproj_ with the [Classic framework](#classic-framework) code example.
1. Replace `__LATEST_VERSION__` with the latest version of the Bicep NuGet packages. 1. Save the file.
Otherwise, edit the project file in Visual Studio Code.
### Create Bicep file
-You'll need a Bicep file that will be converted to JSON.
-
-1. Use Visual Studio Code and create a new file.
-1. Copy the following sample and save it as _main.bicep_ in the _C:\bicep-msbuild-demo_ directory.
-
-```bicep
-@allowed([
- 'Premium_LRS'
- 'Premium_ZRS'
- 'Standard_GRS'
- 'Standard_GZRS'
- 'Standard_LRS'
- 'Standard_RAGRS'
- 'Standard_RAGZRS'
- 'Standard_ZRS'
-])
-@description('Storage account type.')
-param storageAccountType string = 'Standard_LRS'
-
-@description('Location for all resources.')
-param location string = resourceGroup().location
-
-var storageAccountName = 'storage${uniqueString(resourceGroup().id)}'
-
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-05-01' = {
- name: storageAccountName
- location: location
- sku: {
- name: storageAccountType
- }
- kind: 'StorageV2'
-}
+You need a Bicep file and a Bicep parameter file to convert to JSON.
+
+1. Create a _main.bicep_ file in the same folder as the project file (for example, the _C:\msBuildDemo_ directory) with the following content:
+
+ ```bicep
+ @allowed([
+ 'Premium_LRS'
+ 'Premium_ZRS'
+ 'Standard_GRS'
+ 'Standard_GZRS'
+ 'Standard_LRS'
+ 'Standard_RAGRS'
+ 'Standard_RAGZRS'
+ 'Standard_ZRS'
+ ])
+ @description('Storage account type.')
+ param storageAccountType string = 'Standard_LRS'
+
+ @description('Location for all resources.')
+ param location string = resourceGroup().location
+
+ @description('Prefix for the storage account name.')
+ param prefix string
+
+ var storageAccountName = '${prefix}${uniqueString(resourceGroup().id)}'
+
+ resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
+ name: storageAccountName
+ location: location
+ sku: {
+ name: storageAccountType
+ }
+ kind: 'StorageV2'
+ }
+
+ output storageAccountNameOutput string = storageAccount.name
+ ```
+
+1. Create a _main.bicepparam_ file in the _C:\msBuildDemo_ directory with the following content:
+
+ ```bicep
+ using './main.bicep'
+
+ param prefix = '{prefix}'
+ ```
+
+ Replace `{prefix}` with a string value used as a prefix for the storage account name.
-output storageAccountNameOutput string = storageAccount.name
-```
### Run MSBuild
-Run MSBuild to convert the Bicep file to JSON.
+Run MSBuild to convert the Bicep file and the Bicep parameter file to JSON.
1. Open a Visual Studio Code terminal session.
-1. In the PowerShell session, go to the _C:\bicep-msbuild-demo_ directory.
+1. In the PowerShell session, go to the folder that contains the project file. For example, the _C:\msBuildDemo_ directory.
1. Run MSBuild. ```powershell
- MSBuild.exe -restore .\bicep-msbuild-demo.csproj
+ MSBuild.exe -restore .\msBuildDemo.csproj
``` The `restore` parameter creates dependencies needed to compile the Bicep file during the initial build. The parameter is optional after the initial build.
-1. Go to the output directory and open the _main.json_ file that should look like the sample.
+ To build with the .NET CLI instead:
+
+ ```powershell
+ dotnet build .\msBuildDemo.csproj
+ ```
+
+ or
+
+ ```powershell
+ dotnet restore .\msBuildDemo.csproj
+ ```
+
+1. Go to the output directory and open the _main.json_ file that should look like the following example.
MSBuild creates an output directory based on the SDK or framework version:
Run MSBuild to convert the Bicep file to JSON.
} ```
+1. The _main.parameters.json_ file should look like:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "prefix": {
+ "value": "mystore"
+ }
+ }
+}
+```
+ If you make changes or want to rerun the build, delete the output directory so new files can be created. ## Clean up resources
-When you're finished with the files, delete the directory. For this example, delete _C:\bicep-msbuild-demo_.
+When you're finished with the files, delete the directory. For this example, delete _C:\msBuildDemo_.
```powershell
-Remove-Item -Path "C:\bicep-msbuild-demo" -Recurse
+Remove-Item -Path "C:\msBuildDemo" -Recurse
``` ## Next steps
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers for compute services are:
| Resource provider namespace | Azure service | | | - |
-| Microsoft.AppPlatform | [Azure Spring Apps](../../spring-apps/overview.md) |
+| Microsoft.AppPlatform | [Azure Spring Apps](../../spring-apps/enterprise/overview.md) |
| Microsoft.AVS | [Azure VMware Solution](../../azure-vmware/index.yml) | | Microsoft.Batch | [Batch](../../batch/index.yml) | | Microsoft.ClassicCompute | Classic deployment model virtual machine |
The resource providers for compute services are:
| Microsoft.HanaOnAzure | [SAP HANA on Azure Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md) | | Microsoft.LabServices | [Azure Lab Services](../../lab-services/index.yml) | | Microsoft.Maintenance | [Azure Maintenance](../../virtual-machines/maintenance-configurations.md) |
-| Microsoft.Microservices4Spring | [Azure Spring Apps](../../spring-apps/overview.md) |
+| Microsoft.Microservices4Spring | [Azure Spring Apps](../../spring-apps/enterprise/overview.md) |
| Microsoft.Quantum | [Azure Quantum](https://azure.microsoft.com/services/quantum/) | | Microsoft.SerialConsole - [registered by default](#registration) | [Azure Serial Console for Windows](/troubleshoot/azure/virtual-machines/serial-console-windows) | | Microsoft.ServiceFabric | [Service Fabric](../../service-fabric/index.yml) |
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following limits apply to [Azure role-based access control (Azure RBAC)](../
## Azure Spring Apps limits
-To learn more about the limits for Azure Spring Apps, see [Quotas and service plans for Azure Spring Apps](../../spring-apps/quotas.md).
+To learn more about the limits for Azure Spring Apps, see [Quotas and service plans for Azure Spring Apps](../../spring-apps/enterprise/quotas.md).
## Azure Storage limits
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Before starting your move operation, review the [checklist](./move-resource-grou
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | storageaccounts | **Yes** | **Yes** | **Yes**<br/><br/> [Move an Azure Storage account to another region](../../storage/common/storage-account-move.md) |
+> | storageaccounts | **Yes** | **Yes** | [Move an Azure Storage account to another region](../../storage/common/storage-account-move.md) |
## Microsoft.StorageCache
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-resource-manager Template Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-string.md
Title: Template functions - string
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with strings. Previously updated : 05/22/2023 Last updated : 01/31/2024 # String functions for ARM templates
The output from the preceding example with the default values is:
`first(arg1)`
-Returns the first character of the string, or first element of the array.
+Returns the first character of the string, or the first element of the array. For an empty string, the function returns an empty string. For an empty array, it returns `null`.
In Bicep, use the [first](../bicep/bicep-functions-string.md#first) function.
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md
This tutorial continues on the chat room application introduced in [Create a chat room with SignalR Service](signalr-quickstart-dotnet-core.md). Complete that quickstart first to set up your chat room.
-In this tutorial, you can discover the process of creating your own authentication method and integrate it with the Microsoft Azure SignalR Service.
+In this tutorial, you learn how to create your own authentication method and integrate it with Microsoft Azure SignalR Service.
-The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach lacks effectiveness in real-world, as it fails to prevent malicious users who might assume false identities from gaining access to sensitive data.
+The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach is ineffective in the real world because malicious users can use fake identities to access sensitive data.
-[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you can use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. After authenticating a GitHub account, the account information will be added as a cookie to be used by the web client to authenticate.
+[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. After GitHub account authentication, the account information is added as a cookie that the web client uses to authenticate.
For more information on the OAuth authentication APIs provided through GitHub, see [Basics of Authentication](https://developer.github.com/v3/guides/basics-of-authentication/).
To complete this tutorial, you must have the following prerequisites:
- An account created on [GitHub](https://github.com/) - [Git](https://git-scm.com/) - [.NET Core SDK](https://dotnet.microsoft.com/download)-- [Azure Cloud Shell](../cloud-shell/quickstart.md) configured for the bash environment.-- Download or clone the [AzureSignalR-sample](https://github.com/aspnet/AzureSignalR-samples) GitHub repository.
+- [Azure Cloud Shell](../cloud-shell/quickstart.md) configured for the bash environment
+- Download or clone the [AzureSignalR-sample](https://github.com/aspnet/AzureSignalR-samples) GitHub repository
## Create an OAuth app
In this section, you implement a `Login` API that authenticates clients using th
### Update the Hub class
-By default when a web client attempts to connect to SignalR Service, the connection is granted based on an access token that is provided internally. This access token isn't associated with an authenticated identity.
+By default, the web client connects to SignalR Service using an internal access token. This access token isn't associated with an authenticated identity.
Basically, it's anonymous access. In this section, you turn on real authentication by adding the `Authorize` attribute to the hub class, and updating the hub methods to read the username from the authenticated user's claim.
In this section, you turn on real authentication by adding the `Authorize` attri
![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png)
- You're prompted to authorize the chat app's access to your GitHub account. Select the **Authorize** button.
    You're prompted to authorize the chat app's access to your GitHub account. Select the **Authorize** button.
![Authorize OAuth App](media/signalr-concept-authenticate-oauth/signalr-authorize-oauth-app.png)
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
description: A tutorial to walk through how to use Azure Web PubSub service and
+ Last updated 01/12/2024
azure-web-pubsub Socket Io Howto Integrate Apim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socket-io-howto-integrate-apim.md
keywords: Socket.IO, Socket.IO on Azure, webapp Socket.IO, Socket.IO integration
-+ Last updated 1/11/2024
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
bastion Quickstart Developer Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-developer-sku.md
description: Learn how to deploy Bastion using the Developer SKU.
Previously updated : 01/11/2024 Last updated : 01/31/2024
In this quickstart, you'll learn how to deploy Azure Bastion using the Developer SKU. After Bastion is deployed, you can connect to virtual machines (VM) in the virtual network via Bastion using the private IP address of the VM. The VMs you connect to don't need a public IP address, client software, agent, or a special configuration. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
+The following diagram shows the architecture for Azure Bastion and the Developer SKU.
++ > [!IMPORTANT] > During Preview, Bastion Developer SKU is free of charge. Pricing details will be released at GA for a usage-based pricing model.
batch Batch Pool Cloud Service To Virtual Machine Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-cloud-service-to-virtual-machine-configuration.md
Title: Migrate Batch pool configuration from Cloud Services to Virtual Machines description: Learn how to update your pool configuration to the latest and recommended configuration Previously updated : 09/03/2021 Last updated : 01/30/2024 # Migrate Batch pool configuration from Cloud Services to Virtual Machine
Cloud Services Configuration pools don't support some of the current Batch featu
If your Batch solutions currently use 'cloudServiceConfiguration' pools, we recommend changing to 'virtualMachineConfiguration' as soon as possible. This will enable you to benefit from all Batch capabilities, such as an expanded [selection of VM series](batch-pool-vm-sizes.md), Linux VMs, [containers](batch-docker-container-workloads.md), [Azure Resource Manager virtual networks](batch-virtual-network.md), and [node disk encryption](disk-encryption.md).
+> [!IMPORTANT]
+> Azure [Batch account certificates](credential-access-key-vault.md) are deprecated and will be retired after the
+> same February 29, 2024 date as `cloudServiceConfiguration` pools. If you are using Batch account certificates,
+> [migrate your Batch account certificates to Azure Key Vault](batch-certificate-migration-guide.md) at the same
+> time as migrating your pool configuration.
+ ## Create a pool using Virtual Machine Configuration You can't switch an existing active pool that uses 'cloudServiceConfiguration' to use 'virtualMachineConfiguration'. Instead, you'll need to create new pools. Once you've created your new 'virtualMachineConfiguration' pools and replicated all of your jobs and tasks, you can delete the old 'cloudServiceConfiguration' pools that you're no longer using.
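As a rough illustration, a pool that uses 'virtualMachineConfiguration' can be declared in Bicep. The following is a minimal sketch, assuming an existing Batch account and an Ubuntu 20.04 marketplace image; the account name, pool name, VM size, image values, and API version are placeholders or assumptions:

```bicep
resource batchAccount 'Microsoft.Batch/batchAccounts@2022-10-01' existing = {
  name: 'mybatchaccount' // placeholder Batch account name
}

resource pool 'Microsoft.Batch/batchAccounts/pools@2022-10-01' = {
  parent: batchAccount
  name: 'vm-config-pool' // placeholder pool name
  properties: {
    vmSize: 'Standard_D2s_v3'
    deploymentConfiguration: {
      virtualMachineConfiguration: {
        imageReference: {
          publisher: 'canonical'
          offer: '0001-com-ubuntu-server-focal'
          sku: '20_04-lts'
          version: 'latest'
        }
        nodeAgentSkuId: 'batch.node.ubuntu 20.04'
      }
    }
    scaleSettings: {
      fixedScale: {
        targetDedicatedNodes: 1
      }
    }
  }
}
```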
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
The following are known limitations in Chaos Studio.
- Regional endpoints to allowlist are listed in [Permissions and security in Azure Chaos Studio](chaos-studio-permissions-security.md#network-security). - If you're sending telemetry data to Application Insights, the IPs in [IP addresses used by Azure Monitor](../azure-monitor/ip-addresses.md) are also required. -- **Supported VM operating systems** - If you run an experiment that makes use of the Chaos Studio agent, the virtual machine must run one of the following operating systems:-
- - Windows Server 2019, Windows Server 2016, and Windows Server 2012 R2
- - Red Hat Enterprise Linux 8, Red Hat Enterprise Linux 8.2, openSUSE Leap 15.2, CentOS 8, Debian 10 Buster (with unzip installation required), Oracle Linux 8.3, and Ubuntu Server 18.04 LTS
-- **Hardened Linux untested** - The Chaos Studio agent isn't currently tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux).-- **Supported browsers** - The Chaos Studio portal experience has only been tested on the following browsers:
- * **Windows:** Microsoft Edge, Google Chrome, and Firefox
- * **MacOS:** Safari, Google Chrome, and Firefox
+- **Version support** - Review the [Azure Chaos Studio version compatibility](chaos-studio-versions.md) page for more information on operating system, browser, and integration version compatibility.
- **Terraform** - Chaos Studio doesn't support Terraform at this time. - **PowerShell modules** - Chaos Studio doesn't have dedicated PowerShell modules at this time. For PowerShell, use our REST API - **Azure CLI** - Chaos Studio doesn't have dedicated AzCLI modules at this time. Use our REST API from AzCLI
chaos-studio Chaos Studio Quickstart Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-azure-portal.md
Get started with Azure Chaos Studio by using a virtual machine (VM) shutdown ser
## Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- A Linux VM. If you don't have a VM, [follow these steps to create one](../virtual-machines/linux/quick-create-portal.md).
+- A Linux VM running an operating system in the [Azure Chaos Studio version compatibility](chaos-studio-versions.md) list. If you don't have a VM, [follow these steps to create one](../virtual-machines/linux/quick-create-portal.md).
## Register the Chaos Studio resource provider If it's your first time using Chaos Studio, you must first register the Chaos Studio resource provider before you onboard the resources and create an experiment. You must do these steps for each subscription where you use Chaos Studio:
Create an Azure resource and ensure that it's one of the supported [fault provid
1. Search for **Virtual Machine Contributor** and select the role. Select **Next**. ![Screenshot that shows choosing the role for the VM.](images/quickstart-virtual-machine-contributor.png)+
+1. Select the **Managed identity** option.
+
1. Choose **Select members** and search for your experiment name. Select your experiment and choose **Select**. ![Screenshot that shows selecting the experiment.](images/quickstart-select-experiment-role-assignment.png)
chaos-studio Chaos Studio Tutorial Agent Based Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-cli.md
You can use these same steps to set up and run an experiment for any agent-based
## Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]-- A virtual machine. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md).
+- A virtual machine running an operating system in the [version compatibility](chaos-studio-versions.md) list. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md).
- A network setup that permits you to [SSH into your VM](../virtual-machines/ssh-keys-portal.md). - A user-assigned managed identity. If you don't have a user-assigned managed identity, you can [create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
chaos-studio Chaos Studio Tutorial Agent Based Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-portal.md
You can use these same steps to set up and run an experiment for any agent-based
## Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]-- A Linux VM. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md).
+- A Linux VM running an operating system in the [version compatibility](chaos-studio-versions.md) list. If you don't have a VM, you can [create one](../virtual-machines/linux/quick-create-portal.md).
- A network setup that permits you to [SSH into your VM](../virtual-machines/ssh-keys-portal.md). - A user-assigned managed identity *that was assigned to the target VM or virtual machine scale set*. If you don't have a user-assigned managed identity, you can [create one](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
chaos-studio Chaos Studio Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-versions.md
+
+ Title: Azure Chaos Studio compatibility
+description: Understand the compatibility of Azure Chaos Studio with operating systems and tools.
+++ Last updated : 01/26/2024++++
+# Azure Chaos Studio version compatibility
+
+The following reference shows relevant version support and compatibility for features within Chaos Studio.
+
+## Operating systems supported by the agent
+
+The Chaos Studio agent is tested for compatibility with the following operating systems on Azure virtual machines. This testing involves deploying an Azure virtual machine with the specified SKU, installing the agent as a virtual machine extension, and validating the output of the available agent-based faults.
+
+| Operating system | Chaos agent compatibility | Notes |
+|:--:|:--:|:--:|
+| Windows Server 2019 | ✓ | |
+| Windows Server 2016 | ✓ | |
+| Windows Server 2012 R2 | ✓ | |
+| Red Hat Enterprise Linux 8 | ✓ | Currently tested up to 8.9 |
+| openSUSE Leap 15.2 | ✓ | |
+| CentOS 8 | ✓ | |
+| Debian 10 Buster | ✓ | Installation of `unzip` utility required |
+| Oracle Linux 8.3 | ✓ | |
+| Ubuntu Server 18.04 LTS | ✓ | |
+
+The agent isn't currently tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux).
+
+If an operating system isn't currently listed, you may still attempt to install, use, and troubleshoot the virtual machine extension, agent, and agent-based capabilities, but Chaos Studio cannot guarantee behavior or support for an unlisted operating system.
+
+To request validation and support on more operating systems or versions, use the [Chaos Studio Feedback Community](https://aka.ms/ChaosStudioFeedback).
+
+## Chaos Mesh compatibility
+
+Faults within Azure Kubernetes Service resources currently integrate with the open-source project [Chaos Mesh](https://chaos-mesh.org/), which is part of the [Cloud Native Computing Foundation](https://www.cncf.io/projects/chaosmesh/). Review [Create a chaos experiment that uses a Chaos Mesh fault to kill AKS pods with the Azure portal](chaos-studio-tutorial-aks-portal.md) for more details on using Azure Chaos Studio with Chaos Mesh.
+
+Find Chaos Mesh's support policy and release dates here: [Supported Releases](https://chaos-mesh.org/supported-releases/).
+
+Chaos Studio currently tests with the following version combinations.
+
+| Chaos Studio fault version | Kubernetes version | Chaos Mesh version | Notes |
+|:--:|:--:|:--:|:--:|
+| 2.1 | 1.25.11 | 2.5.1 | |
+
+The *Chaos Studio fault version* column refers to the individual fault version for each AKS Chaos Mesh fault used in the experiment JSON, for example `urn:csci:microsoft:azureKubernetesServiceChaosMesh:podChaos/2.1`. If a past version of the corresponding Chaos Studio fault remains available from the Chaos Studio API (for example, `...podChaos/1.0`), it is within support.
+
+## Browser compatibility
+
+Review the Azure portal documentation on [Supported devices](../azure-portal/azure-portal-supported-browsers-devices.md) for more information on browser support.
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/get-phone-number.md
-zone_pivot_groups: acs-azcli-azp-azpnew-java-net-python-csharp-js
+zone_pivot_groups: acs-azcli-azp-java-net-python-csharp-js
# Quickstart: Get and manage phone numbers
zone_pivot_groups: acs-azcli-azp-azpnew-java-net-python-csharp-js
[!INCLUDE [Azure portal](./includes/phone-numbers-portal.md)] ::: zone-end - ::: zone pivot="programming-language-csharp" [!INCLUDE [Azure portal](./includes/phone-numbers-net.md)] ::: zone-end
communication-services Migrating To Azure Communication Services Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/migrating-to-azure-communication-services-calling.md
Title: Tutorial - Migrating from Twilio video to ACS
-description: In this tutorial, you learn how to migrate your calling product from Twilio to Azure Communication Services.
+description: Learn how to migrate a calling product from Twilio to Azure Communication Services.
# Migration Guide from Twilio Video to Azure Communication Services
-This article provides guidance on how to migrate your existing Twilio Video implementation to the [Azure Communication Services' Calling SDK](../concepts/voice-video-calling/calling-sdk-features.md) for WebJS. Twilio Video and Azure Communication Services' calling SDK for WebJS are both cloud-based platforms that enable developers to add voice and video calling features to their web applications. However, there are some key differences between them that may affect your choice of platform or require some changes to your existing code if you decide to migrate. In this article, we will compare the main features and functionalities of both platforms and provide some guidance on how to migrate your existing Twilio Video implementation to Azure Communication Services' Calling SDK for WebJS.
+This article describes how to migrate an existing Twilio Video implementation to the [Azure Communication Services' Calling SDK](../concepts/voice-video-calling/calling-sdk-features.md) for WebJS. Both Twilio Video and Azure Communication Services' Calling SDK for WebJS are cloud-based platforms that enable developers to add voice and video calling features to their web applications.
+
+However, there are some key differences between them that may affect your choice of platform or require some changes to your existing code if you decide to migrate. In this article, we compare the main features and functions of both platforms and provide some guidance on how to migrate your existing Twilio Video implementation to Azure Communication Services' Calling SDK for WebJS.
## Key features of the Azure Communication Services calling SDK -- Addressing - Azure Communication Services provides [identities](../concepts/identity-model.md) for authentication and addressing communication endpoints. These identities are used within Calling APIs, providing clients with a clear view of who is connected to a call (the roster).-- Encryption - The Calling SDK safeguards traffic by encrypting it and preventing tampering along the way.-- Device Management and Media - The SDK handles the management of audio and video devices, efficiently encodes content for transmission, and supports both screen and application sharing.-- PSTN - The SDK can initiate voice calls with the traditional Public Switched Telephone Network (PSTN), [using phone numbers acquired either in the Azure portal](../quickstarts/telephony/get-phone-number.md) or programmatically.-- Teams Meetings ΓÇô Azure Communication Services is equipped to [join Teams meetings](../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with Teams voice and its video calls.-- Notifications - Azure Communication Services provides APIs for notifying clients of incoming calls, allowing your application to listen to events (for example, incoming calls) even when your application is not running in the foreground.-- User Facing Diagnostics (UFD) - Azure Communication Services utilizes [events](../concepts/voice-video-calling/user-facing-diagnostics.md) designed to provide insights into underlying issues that could affect call quality, allowing developers to subscribe to triggers such as weak network signals or muted microphones for proactive issue awareness.-- Media Statics - Provides comprehensive insights into VoIP and video call [metrics](../concepts/voice-video-calling/media-quality-sdk.md), including call quality information, empowering developers to enhance communication experiences.-- Video Constraints - Azure Communication Services offers APIs that control [video quality among other parameters](../quickstarts/voice-video-calling/get-started-video-constraints.md) during video calls. By adjusting parameters like resolution and frame rate, the SDK supports different call situations for varied levels of video quality.
+- **Addressing** - Azure Communication Services provides [identities](../concepts/identity-model.md) for authentication and addressing communication endpoints. These identities are used within Calling APIs, providing clients with a clear view of who is connected to a call (the roster).
+- **Encryption** - The Calling SDK safeguards traffic by encrypting it and preventing tampering along the way.
+- **Device Management and Media enablement** - The SDK manages audio and video devices, efficiently encodes content for transmission, and supports both screen and application sharing.
+- **PSTN calling** - You can use the SDK to initiate voice calling using the traditional Public Switched Telephone Network (PSTN), [using phone numbers acquired either in the Azure portal](../quickstarts/telephony/get-phone-number.md) or programmatically.
+- **Teams Meetings** - Azure Communication Services is equipped to [join Teams meetings](../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with Teams voice and video calls.
+- **Notifications** - Azure Communication Services provides APIs to notify clients of incoming calls. This allows your application to listen for events (such as incoming calls) even when your application isn't running in the foreground.
+- **User Facing Diagnostics** - Azure Communication Services uses [events](../concepts/voice-video-calling/user-facing-diagnostics.md) designed to provide insights into underlying issues that could affect call quality. You can subscribe your application to triggers such as weak network signals or muted microphones for proactive issue awareness.
+- **Media Quality Statistics** - Provides comprehensive insights into VoIP and video call [metrics](../concepts/voice-video-calling/media-quality-sdk.md). Metrics include call quality information, empowering developers to enhance communication experiences.
+- **Video Constraints** - Azure Communication Services offers APIs that control [video quality among other parameters](../quickstarts/voice-video-calling/get-started-video-constraints.md) during video calls. The SDK supports different call situations for varied levels of video quality, so developers can adjust parameters like resolution and frame rate.
-**For a more detailed understanding of the capabilities of the Calling SDK for different platforms, consult** [**this document**](../concepts/voice-video-calling/calling-sdk-features.md#detailed-capabilities)**.**
+**For a more detailed understanding of the Calling SDK for different platforms, see** [**this document**](../concepts/voice-video-calling/calling-sdk-features.md#detailed-capabilities)**.**
If you're embarking on a new project from the ground up, see the [Quickstarts of the Calling SDK](../quickstarts/voice-video-calling/get-started-with-video-calling.md?pivots=platform-web). **Prerequisites:**
-1. **Azure Account:** Confirm that you have an active subscription in your Azure account. New users can create a free Azure account [here](https://azure.microsoft.com/free/).
-2. **Node.js 18:** Ensure Node.js 18 is installed on your system; download can be found right [here](https://nodejs.org/en).
-3. **Communication Services Resource:** Set up a [Communication Services Resource](../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp) via your Azure portal and note down your connection string.
-4. **Azure CLI:** You can get the Azure CLI installer from [here](/cli/azure/install-azure-cli-windows?tabs=azure-cli)..
+1. **Azure Account:** Make sure that your Azure account is active. New users can create a free account at [Microsoft Azure](https://azure.microsoft.com/free/).
+2. **Node.js 18:** Ensure Node.js 18 is installed on your system. Download from [Node.js](https://nodejs.org/en).
+3. **Communication Services Resource:** Set up a [Communication Services Resource](../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp) via your Azure portal and note your connection string.
+4. **Azure CLI:** Follow the instructions at [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows?tabs=azure-cli).
5. **User Access Token:** Generate a user access token to instantiate the call client. You can create one using the Azure CLI as follows: ```console az communication identity token issue --scope voip --connection-string "yourConnectionString" ```
-For more information, see the guide on how to [Use Azure CLI to Create and Manage Access Tokens](../quickstarts/identity/access-tokens.md?pivots=platform-azcli).
+For more information, see [Use Azure CLI to Create and Manage Access Tokens](../quickstarts/identity/access-tokens.md?pivots=platform-azcli).
For Video Calling as a Teams user: -- You also can use Teams identity. For instructions on how to generate an access token for a Teams User, [follow this guide](../quickstarts/manage-teams-identity.md?pivots=programming-language-javascript).-- Obtain the Teams thread ID for call operations using the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). Additional information on how to create a chat thread ID can be found [here](/graph/api/chat-post?preserve-view=true&tabs=javascript&view=graph-rest-1.0#example-2-create-a-group-chat).
+- You can also use Teams identity. To generate an access token for a Teams User, see [Manage teams identity](../quickstarts/manage-teams-identity.md?pivots=programming-language-javascript).
+- Obtain the Teams thread ID for call operations using the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). For information about creating a thread ID, see [Create chat - Microsoft Graph v1.0 > Example2: Create a group chat](/graph/api/chat-post?preserve-view=true&tabs=javascript&view=graph-rest-1.0#example-2-create-a-group-chat).
### UI library
-The UI Library simplifies the process of creating modern communication user interfaces using Azure Communication Services. It offers a collection of ready-to-use UI components that you can easily integrate into your application.
+The UI library simplifies the process of creating modern communication user interfaces using Azure Communication Services. It offers a collection of ready-to-use UI components that you can easily integrate into your application.
-This prebuilt set of controls facilitates the creation of aesthetically pleasing designs using [Fluent UI SDK](https://developer.microsoft.com/en-us/fluentui#/) components and the development of audio/video communication experiences. If you wish to explore more about the UI Library, check out the [overview page](../concepts/ui-library/ui-library-overview.md), where you find comprehensive information about both web and mobile platforms.
+This open source prebuilt set of controls enables you to create aesthetically pleasing designs using [Fluent UI SDK](https://developer.microsoft.com/en-us/fluentui#/) components and develop high quality audio/video communication experiences. For more information, check out the [Azure Communications Services UI Library overview](../concepts/ui-library/ui-library-overview.md). The overview includes comprehensive information about both web and mobile platforms.
### Calling support
-The Azure Communication Services Calling SDK supports the following streaming configurations:
+The Azure Communication Services calling SDK supports the following streaming configurations:
| Limit | Web | Windows/Android/iOS | ||-|--|
The Azure Communication Services Calling SDK supports the following streaming co
## Call Types in Azure Communication Services
-Azure Communication Services offers various call types. The type of call you choose impacts your signaling schema, the flow of media traffic, and your pricing model. Further details can be found [here](../concepts/voice-video-calling/about-call-types.md).
+Azure Communication Services offers various call types. The type of call you choose impacts your signaling schema, the flow of media traffic, and your pricing model. For more information, see [Voice and video concepts](../concepts/voice-video-calling/about-call-types.md).
-- Voice Over IP (VoIP) - This type of call involves one user of your application calling another over an internet or data connection. Both signaling and media traffic are routed over the internet.-- Public Switched Telephone Network (PSTN) - When your users interact with a traditional telephone number, calls are facilitated via PSTN voice calling. In order to make and receive PSTN calls, you need to introduce telephony capabilities to your Azure Communication Services resource. Here, signaling and media employ a mix of IP-based and PSTN-based technologies to connect your users.-- One-to-One Call - When one of your users connects with another through our SDKs. The call can be established via either VoIP or PSTN.-- Group Call - Involved when three or more participants connect. Any combination of VoIP and PSTN-connected users can partake in a group call. A one-to-one call can evolve into a group call by adding more participants to the call, and one of these participants can be a bot.-- Rooms Call - A Room acts as a container that manages activity between end-users of Azure Communication Services. It provides application developers with enhanced control over who can join a call, when they can meet, and how they collaborate. For a more comprehensive understanding of Rooms, please refer to the [conceptual documentation](../concepts/rooms/room-concept.md).
+- **Voice Over IP (VoIP)** - When a user of your application calls another over an internet or data connection. Both signaling and media traffic are routed over the internet.
+- **Public Switched Telephone Network (PSTN)** - When your users call a traditional telephone number, calls are facilitated via PSTN voice calling. To make and receive PSTN calls, you need to introduce telephony capabilities to your Azure Communication Services resource. Here, signaling and media employ a mix of IP-based and PSTN-based technologies to connect your users (see the sketch after this list).
+- **One-to-One Calls** - When one of your users connects with another through our SDKs. The call can be established via either VoIP or PSTN.
+- **Group Calls** - Happens when three or more participants connect in a single call. Any combination of VoIP and PSTN-connected users can be on a group call. A one-to-one call can evolve into a group call by adding more participants to the call, and one of these participants can be a bot.
+- **Rooms Call** - A Room acts as a container that manages activity between end-users of Azure Communication Services. It provides application developers with enhanced control over who can join a call, when they can meet, and how they collaborate. For a more comprehensive understanding of Rooms, see the [Rooms overview](../concepts/rooms/room-concept.md).
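To illustrate how the PSTN type above maps onto the Calling SDK, here's a minimal, hedged sketch. The phone numbers are placeholders, the resource is assumed to have an acquired phone number, and `callAgent` is assumed to be created as shown later in this article.

```javascript
// Minimal sketch of a PSTN call (placeholder numbers; assumes an existing `callAgent`).
const pstnCallee = { phoneNumber: '+14255550123' };          // number to dial (placeholder)
const alternateCallerId = { phoneNumber: '+14255550100' };   // acquired number shown as caller ID (placeholder)

const pstnCall = callAgent.startCall([pstnCallee], { alternateCallerId });
```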
## Installation
npm install @azure/communication-common npm install @azure/communication-calling
### Remove the Twilio SDK from the project
-You can remove the Twilio SDK from your project by uninstalling the package
+You can remove the Twilio SDK from your project by uninstalling the package.
```console npm uninstall twilio-video ```
-## Object model
+## Object Model
The following classes and interfaces handle some of the main features of the Azure Communication Services Calling SDK: | **Name** | **Description** | |--|-| | CallClient | The main entry point to the Calling SDK. |
-| AzureCommunicationTokenCredential | Implements the CommunicationTokenCredential interface, which is used to instantiate the CallAgent. |
-| CallAgent | Used to start and manage calls. |
-| Device Manager | Used to manage media devices. |
-| Call | Used for representing a Call. |
-| LocalVideoStream | Used for creating a local video stream for a camera device on the local system. |
-| RemoteParticipant | Used for representing a remote participant in the Call. |
-| RemoteVideoStream | Used for representing a remote video stream from a Remote Participant. |
-| LocalAudioStream | Represents a local audio stream for a local microphone device |
-| AudioOptions | Audio options, which are provided when making an outgoing call or joining a group call |
-| AudioIssue | Represents the end of call survey audio issues, example responses would be NoLocalAudio - the other participants were unable to hear me, or LowVolume - the callΓÇÖs audio volume was low |
-
-When using in a Teams implementation there are a few differences:
+| AzureCommunicationTokenCredential | Implements the `CommunicationTokenCredential` interface, which is used to instantiate the CallAgent. |
+| CallAgent | Start and manage calls. |
+| Device Manager | Manage media devices. |
+| Call | Represents a Call. |
+| LocalVideoStream | Create a local video stream for a camera device on the local system. |
+| RemoteParticipant | Represents a remote participant in the Call. |
+| RemoteVideoStream | Represents a remote video stream from a Remote Participant. |
+| LocalAudioStream | Represents a local audio stream for a local microphone device. |
+| AudioOptions | Audio options, provided to a participant when making an outgoing call or joining a group call. |
+| AudioIssue | Represents the end of call survey audio issues. Example responses might be `NoLocalAudio` - the other participants were unable to hear me, or `LowVolume` - the call audio volume was too low. |
+
+When using Azure Communication Services calling in a Teams call, there are a few differences, as sketched after this list:
- Instead of `CallAgent` - use `TeamsCallAgent` for starting and managing Teams calls. - Instead of `Call` - use `TeamsCall` for representing a Teams Call.
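As a rough sketch of those differences (assuming you already have a Teams user access token from the prerequisites), the Teams variant of the agent is created from the same `CallClient`:

```javascript
import { CallClient } from '@azure/communication-calling';
import { AzureCommunicationTokenCredential } from '@azure/communication-common';

const callClient = new CallClient();
const tokenCredential = new AzureCommunicationTokenCredential('<TEAMS_USER_ACCESS_TOKEN>');

// For Teams calls, create a TeamsCallAgent instead of a CallAgent.
// Calls it starts or joins are represented as TeamsCall instances.
const teamsCallAgent = await callClient.createTeamsCallAgent(tokenCredential);
```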
Using the `CallClient`, initialize a `CallAgent` instance. The `createCallAgent`
#### Twilio
-Twilio doesn't have a Device Manager analog, tracks are being created using the systemΓÇÖs default device. For customization, you should obtain the desired source track via:
+Twilio doesn't have a Device Manager analog. Tracks are created using the system's default device. To customize a device, obtain the desired source track via:
```javascript navigator.mediaDevices.getUserMedia() ```
callClient = new CallClient();
const callAgent = await callClient.createCallAgent(tokenCredential, {displayName: 'optional user name'}); ```
-You can use the getDeviceManager method on the CallClient instance to access deviceManager.
+You can use the `getDeviceManager` method on the `CallClient` instance to access `deviceManager`.
-```javascript
+```javascript
const deviceManager = await callClient.getDeviceManager();
// Get a list of available video devices for use. const localCameras = await deviceManager.getCameras();
twilioRoom = await twilioVideo.connect('token', { name: 'roomName', audio: false
### Azure Communication Services
-To create and start a call, use one of the APIs on `callAgent` and provide a user that you created through the Communication Services identity SDK.
+To create and start a call, use one of the `callAgent` APIs and provide a user that you created through the Communication Services identity SDK.
-Call creation and start are synchronous. The `call` instance allows you to subscribe to call events - subscribe to `stateChanged` event for value changes.
+Call creation and start are synchronous. The `call` instance enables you to subscribe to call events. Subscribe to the `stateChanged` event for value changes.
```javascript call.on('stateChanged', async () =\> { console.log(\`Call state changed: \${call.state}\`) });
-``````
+```
### Azure Communication Services 1:1 Call
-To call another Communication Services user, use the `startCall` method on `callAgent` and pass the recipient's CommunicationUserIdentifier that you [created with the Communication Services administration library](../quickstarts/identity/access-tokens.md).
+To call another Azure Communication Services user, use the `startCall` method on `callAgent` and pass the recipient's `CommunicationUserIdentifier` that you [created with the Communication Services administration library](../quickstarts/identity/access-tokens.md).
```javascript const userCallee = { communicationUserId: '\<Azure_Communication_Services_USER_ID\>' }; const oneToOneCall = callAgent.startCall([userCallee]);
const oneToOneCall = callAgent.startCall([userCallee]);
### Azure Communication Services Room Call
-To join a `room` call, you can instantiate a context object with the `roomId` property as the room identifier. To join the call, use the join method and pass the context instance.
+To join a `Room` call, you can instantiate a context object with the `roomId` property as the room identifier. To join the call, use the `join` method and pass the context instance.
```javascript const context = { roomId: '\<RoomId\>' }; const call = callAgent.join(context); ```
-A **room** offers application developers better control over who can join a call, when they meet and how they collaborate. To learn more about **rooms**, you can read the [conceptual documentation](../concepts/rooms/room-concept.md) or follow the [quick start guide](../quickstarts/rooms/join-rooms-call.md).
+A **Room** offers application developers better control over who can join a call, when they meet and how they collaborate. To learn more about **Rooms**, see the [Rooms overview](../concepts/rooms/room-concept.md), or see [Quickstart: Join a room call](../quickstarts/rooms/join-rooms-call.md).
-### Azure Communication Services group Call
+### Azure Communication Services Group Call
-To start a new group call or join an ongoing group call, use the `join` method and pass an object with a groupId property. The `groupId` value has to be a GUID.
+To start a new group call or join an ongoing group call, use the `join` method and pass an object with a `groupId` property. The `groupId` value must be a GUID.
```javascript const context = { groupId: '\<GUID\>'}; const call = callAgent.join(context);
const call = callAgent.join(context);
### Azure Communication Services Teams call
-Start a synchronous one-to-one or group call with `startCall` API on `teamsCallAgent`. You can provide `MicrosoftTeamsUserIdentifier` or `PhoneNumberIdentifier` as a parameter to define the target of the call. The method returns the `TeamsCall` instance that allows you to subscribe to call events.
+Start a synchronous one-to-one or group call using the `startCall` API on `teamsCallAgent`. You can provide `MicrosoftTeamsUserIdentifier` or `PhoneNumberIdentifier` as a parameter to define the target of the call. The method returns the `TeamsCall` instance that allows you to subscribe to call events.
```javascript const userCallee = { microsoftTeamsUserId: '\<MICROSOFT_TEAMS_USER_ID\>' }; const oneToOneCall = teamsCallAgent.startCall(userCallee);
const oneToOneCall = teamsCallAgent.startCall(userCallee);
### Twilio
-The Twilio Video SDK the Participant is being created after joining the room, and it doesn't have any information about other rooms.
+When using the Twilio Video SDK, the Participant is created after joining the room, and it doesn't have any information about other rooms.
### Azure Communication Services
callAgent.on('incomingCall', async (call) =\>{
The `incomingCall` event includes an `incomingCall` instance that you can accept or reject.
-When starting/joining/accepting a call with video on, if the specified video camera device is being used by another process or if it's disabled in the system, the call starts with video off, and a `cameraStartFailed:` true call diagnostic will be raised.
+When starting, joining, or accepting a call with *video on*, if the specified video camera device is being used by another process or if it's disabled in the system, the call starts with *video off*, and returns a `cameraStartFailed: true` call diagnostic.
```javascript const incomingCallHandler = async (args: { incomingCall: IncomingCall }) => {
const incomingCallHandler = async (args: { incomingCall: IncomingCall }) => {
// Get incoming call ID var incomingCallId = incomingCall.id
- // Get information about this Call. This API is provided as a preview for developers and may change based on feedback that we receive. Do not use this API in a production environment. To use this api please use 'beta' release of Azure Communication Services Calling Web SDK
+ // Get information about this Call.
var callInfo = incomingCall.info; // Get information about caller
callAgentInstance.on('incomingCall', incomingCallHandler);
```
-After starting a call, joining a call, or accepting a call, you can also use the callAgents' `callsUpdated` event to be notified of the new Call object and start subscribing to it.
+After starting a call, joining a call, or accepting a call, you can also use the `callAgent` `callsUpdated` event to be notified of the new `Call` object and start subscribing to it.
```javascript callAgent.on('callsUpdated', (event) => { event.added.forEach((call) => {
callAgent.on('callsUpdated', (event) => {
}); ```
-For Azure Communication Services Teams implementation, check how to [Receive a Teams Incoming Call](../how-tos/cte-calling-sdk/manage-calls.md#receive-a-teams-incoming-call).
+For Azure Communication Services Teams implementation, see how to [Receive a Teams Incoming Call](../how-tos/cte-calling-sdk/manage-calls.md#receive-a-teams-incoming-call).
## Adding participants to call
call.remoteParticipants; // [remoteParticipant, remoteParticipant....]
**Add participant:**
-To add a participant to a call, you can use `addParticipant`. Provide one of the Identifier types. It synchronously returns the remoteParticipant instance.
+To add a participant to a call, you can use `addParticipant`. Provide one of the Identifier types. It synchronously returns the `remoteParticipant` instance.
The `remoteParticipantsUpdated` event from Call is raised when a participant is successfully added to the call. ```javascript
const remoteParticipant = call.addParticipant(userIdentifier);
**Remove participant:**
-To remove a participant from a call, you can invoke `removeParticipant`. You have to pass one of the Identifier types. This method resolves asynchronously after the participant is removed from the call. The participant is also removed from the `remoteParticipants` collection.
+To remove a participant from a call, use `removeParticipant`. You need to pass one of the Identifier types. This method resolves asynchronously after the participant is removed from the call. The participant is also removed from the `remoteParticipants` collection.
```javascript const userIdentifier = { communicationUserId: '<Azure_Communication_Services_USER_ID>' }; await call.removeParticipant(userIdentifier);
const videoTrack = await twilioVideo.createLocalVideoTrack({ constraints });
const videoTrackPublication = await localParticipant.publishTrack(videoTrack, { options }); ```
-Camera is enabled by default, however it can be disabled and enabled back if necessary:
+The camera is enabled by default. It can be disabled and enabled back if necessary:
```javascript videoTrack.disable(); ```
-Or
+Or:
```javascript videoTrack.enable(); ```
-Later created video track should be attached locally:
+If you create a video track later, attach it locally:
```javascript const videoElement = videoTrack.attach(); const localVideoContainer = document.getElementById( localVideoContainerId ); localVideoContainer.appendChild(videoElement);- ```
-Twilio Tracks rely on default input devices and reflect the changes in defaults. However, to change an input device, the previous Video Track should be unpublished:
+Twilio Tracks rely on default input devices and reflect the changes in defaults. To change an input device, you need to unpublish the previous Video Track:
```javascript localParticipant.unpublishTrack(videoTrack); ```
-And a new Video Track with the correct constraints should be created.
+Then create a new Video Track with the correct constraints.
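For instance, a minimal sketch of switching to a specific camera might look like the following. It assumes `twilioVideo` is the imported `twilio-video` module, `localParticipant` is your connected participant, and `newDeviceId` and `localVideoContainerId` are hypothetical values from your app.

```javascript
// Create a replacement video track pinned to a specific camera (hypothetical device ID).
const newVideoTrack = await twilioVideo.createLocalVideoTrack({
  deviceId: { exact: newDeviceId }
});

// Publish the new track and attach it locally, the same way as the original track.
await localParticipant.publishTrack(newVideoTrack);
const newVideoElement = newVideoTrack.attach();
document.getElementById(localVideoContainerId).appendChild(newVideoElement);
```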
#### Azure Communication Services
-To start a video while on a call, you have to enumerate cameras using the getCameras method on the `deviceManager` object. Then create a new instance of `LocalVideoStream` with the desired camera and then pass the `LocalVideoStream` object into the `startVideo` method of an existing call object:
+To start a video while on a call, you need to enumerate cameras using the `getCameras` method on the `deviceManager` object. Then create a new instance of `LocalVideoStream` with the desired camera and pass the `LocalVideoStream` object into the `startVideo` method of an existing call object:
```javascript const deviceManager = await callClient.getDeviceManager();
const localVideoStream = new LocalVideoStream(camera);
await call.startVideo(localVideoStream); ```
-After you successfully start sending video, a LocalVideoStream instance of type Video is added to the localVideoStreams collection on a call instance.
+After you successfully start sending video, a `LocalVideoStream` instance of type Video is added to the `localVideoStreams` collection on a call instance.
```javascript const localVideoStream = call.localVideoStreams.find( (stream) =\> { return stream.mediaStreamType === 'Video'} ); ```
-To stop local video while on a call, pass the localVideoStream instance that's being used for video:
+To stop local video while on a call, pass the `localVideoStream` instance that's being used for video:
```javascript await call.stopVideo(localVideoStream); ```
-You can switch to a different camera device while a video is sending by invoking switchSource on a localVideoStream instance:
+You can switch to a different camera device while a video is sending by calling `switchSource` on a `localVideoStream` instance:
```javascript const cameras = await callClient.getDeviceManager().getCameras();
localVideoStream.switchSource(camera);
If the specified video device is being used by another process, or if it's disabled in the system: -- While in a call, if your video is off and you start video using call.startVideo(), this method throws a `SourceUnavailableError` and `cameraStartFailed` will be set to true.-- A call to the `localVideoStream.switchSource()` method causes `cameraStartFailed` to be set to true. Our [Call Diagnostics guide](../concepts/voice-video-calling/call-diagnostics.md) provides additional information on how to diagnose call related issues.
+- While in a call, if your video is off and you start video using `call.startVideo()`, this method returns a `SourceUnavailableError` and `cameraStartFailed` will be set to true.
+- A call to the `localVideoStream.switchSource()` method causes `cameraStartFailed` to be set to true. See the [Call Diagnostics guide](../concepts/voice-video-calling/call-diagnostics.md) for more information about how to diagnose call-related issues.
-To verify if the local video is on or off you can use `isLocalVideoStarted` API, which returns true or false:
+To verify whether the local video is *on* or *off* you can use the `isLocalVideoStarted` API, which returns true or false:
```javascript call.isLocalVideoStarted; ```
-To listen for changes to the local video, you can subscribe and unsubscribe to the `isLocalVideoStartedChanged` event
+To listen for changes to the local video, you can subscribe and unsubscribe to the `isLocalVideoStartedChanged` event:
```javascript // Subscribe to local video event
call.off('isLocalVideoStartedChanged', () => {
```
-### Rendering a remote user video
+### Rendering a remote user's video
#### Twilio
-As soon as a Remote Participant publishes a Video Track, it needs to be attached. `trackSubscribed` event on Room or Remote Participant allows you to detect when the track can be attached:
+As soon as a Remote Participant publishes a Video Track, it needs to be attached. The `trackSubscribed` event on Room or Remote Participant enables you to detect when the track can be attached:
```javascript twilioRoom.on('participantConneted', (participant) => {
const remoteVideoStream: RemoteVideoStream = call.remoteParticipants[0].videoStr
const streamType: MediaStreamType = remoteVideoStream.mediaStreamType; ```
-To render `RemoteVideoStream`, you have to subscribe to its `isAvailableChanged` event. If the `isAvailable` property changes to true, a remote participant is sending a stream. After that happens, create a new instance of `VideoStreamRenderer`, and then create a new `VideoStreamRendererView` instance by using the asynchronous createView method. You can then attach `view.target` to any UI element.
+To render `RemoteVideoStream`, you need to subscribe to its `isAvailableChanged` event. If the `isAvailable` property changes to true, a remote participant is sending a stream. After that happens, create a new instance of `VideoStreamRenderer`, and then create a new `VideoStreamRendererView` instance by using the asynchronous `createView` method. You can then attach `view.target` to any UI element.
-Whenever availability of a remote stream changes, you can destroy the whole `VideoStreamRenderer` or a specific `VideoStreamRendererView`. If you do decide to keep them it will result in displaying a blank video frame.
+Whenever availability of a remote stream changes, you can destroy the whole `VideoStreamRenderer` or a specific `VideoStreamRendererView`. If you decide to keep them, a blank video frame is displayed.
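A minimal sketch of that flow follows, assuming `remoteVideoStream` is a `RemoteVideoStream` from a remote participant and `remoteVideoContainer` is a hypothetical element ID in your page:

```javascript
import { VideoStreamRenderer } from '@azure/communication-calling';

let renderer;

remoteVideoStream.on('isAvailableChanged', async () => {
  if (remoteVideoStream.isAvailable) {
    // The remote participant started sending video: create a renderer and attach its view.
    renderer = new VideoStreamRenderer(remoteVideoStream);
    const view = await renderer.createView();
    document.getElementById('remoteVideoContainer').appendChild(view.target);
  } else if (renderer) {
    // The stream is no longer available: dispose of the renderer instead of keeping a blank frame.
    renderer.dispose();
    renderer = undefined;
  }
});
```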
```javascript // Reference to the html's div where we would display a grid of all remote video streams from all participants.
subscribeToRemoteVideoStream = async (remoteVideoStream) => {
console.log(`Remote video stream size changed: new height: ${remoteVideoStream.size.height}, new width: ${remoteVideoStream.size.width}`); }); }- ```
-Subscribe to the remote participant's videoStreamsUpdated event to be notified when the remote participant adds new video streams and removes video streams.
+Subscribe to the remote participant's `videoStreamsUpdated` event to be notified when the remote participant adds new video streams and removes video streams.
```javascript remoteParticipant.on('videoStreamsUpdated', e => {
remoteParticipant.on('videoStreamsUpdated', e => {
// Unsubscribe from remote participant's video streams }); });- ``` ### Virtual background #### Twilio
-To use Virtual Background, Twilio helper library should be installed:
+To use Virtual Background, install the Twilio helper library:
```console npm install @twilio/video-processors ```
-New Processor instance should be created and loaded:
+Create and load a new `Processor` instance:
```javascript import { GaussianBlurBackgroundProcessor } from '@twilio/video-processors';
const blurProcessor = new GaussianBlurBackgroundProcessor({ assetsPath: virtualB
await blurProcessor.loadModel(); ```
-As soon as the model is loaded the background can be added to the video track via addProcessor method:
-```javascript
-videoTrack.addProcessor(processor, { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' });
-```
+As soon as the model is loaded, you can add the background to the video track using the `addProcessor` method:
+
+```javascript
+videoTrack.addProcessor(processor, { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' });
+```
#### Azure Communication Services
if (backgroundBlurSupported) {
} ```
-For background replacement with an image you need to provide the URL of the image you want as the background to this effect. Currently supported image formats are: png, jpg, jpeg, tiff, bmp, and current supported aspect ratio is 16:9
+For background replacement with an image you need to provide the URL of the image you want as the background to this effect. Supported image formats are: PNG, JPG, JPEG, TIFF, and BMP. The supported aspect ratio is 16:9.
```javascript const backgroundImage = 'https://linkToImageFile';
if (backgroundReplacementSupported) {
} ```
-Changing the image for this effect can be done by passing it via the configured method:
+Change the image for this effect by passing it via the configured method:
```javascript const newBackgroundImage = 'https://linkToNewImageFile';
await backgroundReplacementEffect.configure({
}); ```
-Switching effects can be done using the same method on the video effects feature API:
+To switch effects, use the same method on the video effects feature API:
```javascript // Switch to background blur
await videoEffectsFeatureApi.startEffects(backgroundBlurEffect);
await videoEffectsFeatureApi.startEffects(backgroundReplacementEffect); ```
-At any time if you want to check what effects are active, you can use the `activeEffects` property. The `activeEffects` property returns an array with the names of the currently active effects and returns an empty array if there are no affects active.
+At any time, if you want to check which effects are active, use the `activeEffects` property. The `activeEffects` property returns an array with the names of the currently active effects and returns an empty array if there are no effects active.
```javascript
-// Using the video effects feature API
+// Using the video effects feature api
const currentActiveEffects = videoEffectsFeatureApi.activeEffects; ```
const audioTrack = await twilioVideo.createLocalAudioTrack({ constraints });
const audioTrackPublication = await localParticipant.publishTrack(audioTrack, { options }); ```
-Microphone is enabled by default, however it can be disabled and enabled back if necessary:
+The microphone is enabled by default. You can disable and enable it back as needed:
```javascript audioTrack.disable(); ```
Or
audioTrack.enable(); ```
-Created Audio Track should be attached by Local Participant the same way as Video Track:
+Any created Audio Track should be attached by the Local Participant the same way as a Video Track:
```javascript const audioElement = audioTrack.attach();
twilioRoom.on('participantConnected', (participant) => {
}); ```
-Or
+Or:
```javascript twilioRoom.on('trackSubscribed', (track, publication, participant) => {
twilioRoom.on('trackSubscribed', (track, publication, participant) => {
```
-It is impossible to mute incoming audio in Twilio Video SDK.
+It isn't possible to mute incoming audio in the Twilio Video SDK.
#### Azure Communication Services
await call.unmuteIncomingAudio();
```
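In contrast with Twilio, the receiving side can be muted. As a brief, hedged sketch (assuming an active `call` instance), the counterpart to the `unmuteIncomingAudio` call above is:

```javascript
// Mute the incoming (remote) audio for the local user only; other participants are unaffected.
await call.muteIncomingAudio();
```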
-### Detecting Dominant speaker
+### Detecting dominant speaker
#### Twilio
-To detect the loudest Participant in the Room, Dominant Speaker API can be used. It can be enabled in the connection options when joining the Group Room with at least 2 participants:
+To detect the loudest Participant in the Room, use the Dominant Speaker API. You can enable it in the connection options when joining the Group Room with at least 2 participants:
```javascript twilioRoom = await twilioVideo.connect('token', { name: 'roomName',
dominantSpeaker: true
}); ```
-When the loudest speaker in the Room will change, the dominantSpeakerChanged event is emitted:
+When the loudest speaker in the Room changes, the `dominantSpeakerChanged` event is emitted:
```javascript twilioRoom.on('dominantSpeakerChanged', (participant) => {
twilioRoom.on('dominantSpeakerChanged', (participant) => {
#### Azure Communication Services
-Dominant speakers for a call are an extended feature of the core Call API and allows you to obtain a list of the active speakers in the call. This is a ranked list, where the first element in the list represents the last active speaker on the call and so on.
+Dominant speakers for a call are an extended feature of the core Call API. It enables you to obtain a list of the active speakers in the call. This is a ranked list, where the first element in the list represents the last active speaker on the call and so on.
In order to obtain the dominant speakers in a call, you first need to obtain the call dominant speakers feature API object: ```javascript
Next you can obtain the list of the dominant speakers by calling `dominantSpeake
let dominantSpeakers: DominantSpeakersInfo = callDominantSpeakersApi.dominantSpeakers; ```
-Also, you can subscribe to the `dominantSpeakersChanged` event to know when the dominant speakers list has changed.
+You can also subscribe to the `dominantSpeakersChanged` event to know when the dominant speakers list changes.
+ ```javascript const dominantSpeakersChangedHandler = () => {
callDominantSpeakersApi.on('dominantSpeakersChanged', dominantSpeakersChangedHan
## Enabling screen sharing ### Twilio
-To share the screen in Twilio Video, source track should be obtained via navigator.mediaDevices
+To share the screen in Twilio Video, obtain the source track via `navigator.mediaDevices`:
Chromium-based browsers: ```javascript
const stream = await navigator.mediaDevices.getUserMedia({ mediaSource: 'screen'
const track = stream.getTracks()[0]; ```
-Obtain the screen share track can then be published and managed the same way as casual Video Track (see the ΓÇ£VideoΓÇ¥ section).
+Obtain the screen share track, then publish and manage it the same way as a regular Video Track (see the "Video" section).
### Azure Communication Services
-To start screen sharing while on a call, you can use asynchronous API `startScreenSharing`:
+To start screen sharing while on a call, you can use the asynchronous API `startScreenSharing`:
```javascript await call.startScreenSharing(); ```
-After successfully starting to sending screen sharing, a `LocalVideoStream` instance of type `ScreenSharing` is created and is added to the `localVideoStreams` collection on the call instance.
+After you successfully start screen sharing, a `LocalVideoStream` instance of type `ScreenSharing` is created and added to the `localVideoStreams` collection on the call instance.
```javascript const localVideoStream = call.localVideoStreams.find( (stream) => { return stream.mediaStreamType === 'ScreenSharing'} ); ```
-To stop screen sharing while on a call, you can use asynchronous API `stopScreenSharing`:
+To stop screen sharing while on a call, you can use the asynchronous API `stopScreenSharing`:
```javascript await call.stopScreenSharing(); ```
-To verify if screen sharing is on or off, you can use `isScreenSharingOn` API, which returns true or false:
+To verify whether screen sharing is on or off, you can use the `isScreenSharingOn` API, which returns true or false:
```javascript call.isScreenSharingOn; ```
-To listen for changes to the screen share, you can subscribe and unsubscribe to the `isScreenSharingOnChanged` event
+To listen for changes to the screen share, subscribe and unsubscribe to the `isScreenSharingOnChanged` event:
```javascript // Subscribe to screen share event
call.off('isScreenSharingOnChanged', () => {
### Twilio
-To collect real-time media stats, the getStats method can be used.
+To collect real-time media stats, use the `getStats` method.
```javascript const stats = twilioRoom.getStats(); ``` ### Azure Communication Services
-Media quality statistics is an extended feature of the core Call API. You first need to obtain the mediaStatsFeature API object:
+Media quality statistics is an extended feature of the core Call API. You first need to obtain the `mediaStatsFeature` API object:
```javascript const mediaStatsFeature = call.feature(Features.MediaStats);
const mediaStatsFeature = call.feature(Features.MediaStats);
To receive the media statistics data, you can subscribe to the `sampleReported` event or the `summaryReported` event: -- `sampleReported` event triggers every second. It's suitable as a data source for UI display or your own data pipeline.-- `summaryReported` event contains the aggregated values of the data over intervals, which is useful when you just need a summary.
+- `sampleReported` event triggers every second. Suitable as a data source for UI display or your own data pipeline.
+- `summaryReported` event contains the aggregated values of the data over intervals. Useful when you just need a summary.
-If you want control over the interval of the summmaryReported event, you need to define `mediaStatsCollectorOptions` of type `MediaStatsCollectorOptions`. Otherwise, the SDK uses default values.
+If you want control over the interval of the `summaryReported` event, you need to define `mediaStatsCollectorOptions` of type `MediaStatsCollectorOptions`. Otherwise, the SDK uses default values.
```javascript const mediaStatsCollectorOptions: SDK.MediaStatsCollectorOptions = { aggregationInterval: 10,
mediaStatsCollector.on('summaryReported', (summary) => {
}); ```
-In case you don't need to use the media statistics collector, you can call dispose method of `mediaStatsCollector`.
+If you don't need to use the media statistics collector, you can call the dispose method of `mediaStatsCollector`.
```javascript mediaStatsCollector.dispose(); ```
-It's not necessary to call the dispose method of `mediaStatsCollector` every time the call ends, as the collectors are reclaimed internally when the call ends.
+You don't need to call the dispose method of `mediaStatsCollector` every time a call ends. The collectors are reclaimed internally when the call ends.
-You can learn more about media quality statistics [here](../concepts/voice-video-calling/media-quality-sdk.md?pivots=platform-web).
+For more information, see [Media quality statistics](../concepts/voice-video-calling/media-quality-sdk.md?pivots=platform-web).
## Diagnostics ### Twilio
-To test connectivity, Twilio offers Preflight API - a test call is performed to identify signaling and media connectivity issues.
+To test connectivity, Twilio offers the Preflight API, which performs a test call to identify signaling and media connectivity issues.
-To launch the test, an access token is required:
+An access token is required to launch the test:
```javascript const preflightTest = twilioVideo.runPreflight(token);
preflightTest.on('failed', (error, report) => {
preflightTest.on('completed', (report) => { console.log(`Preflight test report: ${report}`); });- ```
-Another way to identify network issues during the call is Network Quality API, which monitors Participant's network and provides quality metrics. It can be enabled in the connection options when joining the Group Room:
+Another way to identify network issues during the call is by using the Network Quality API, which monitors a Participant's network and provides quality metrics. You can enable it in the connection options when a participant joins the Group Room:
```javascript twilioRoom = await twilioVideo.connect('token', {
twilioRoom = await twilioVideo.connect('token', {
}); ```
-When the network quality for Participant changes, a `networkQualityLevelChanged` event will be emitted:
+When the network quality for Participant changes, it generates a `networkQualityLevelChanged` event:
```javascript participant.on(networkQualityLevelChanged, (networkQualityLevel, networkQualityStats) => { // Processing Network Quality stats
participant.on(networkQualityLevelChanged, (networkQualityLevel, networkQualityS
``` ### Azure Communication Services
-Azure Communication Services provides a feature called `"User Facing Diagnostics" (UFD)` that can be used to examine various properties of a call to determine what the issue might be. User Facing Diagnostics are events that are fired off that could indicate due to some underlying issue (poor network, the user has their microphone muted) that a user might have a poor experience.
+Azure Communication Services provides a feature called User Facing Diagnostics (UFD) that you can use to examine various properties of a call and identify the issue. User Facing Diagnostics events are raised when an underlying issue (such as a poor network or a muted microphone) could give the user a poor call experience.
-User-facing diagnostics is an extended feature of the core Call API and allows you to diagnose an active call.
+User-facing diagnostics is an extended feature of the core Call API and enables you to diagnose an active call.
```javascript const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics); ```
-Subscribe to the diagnosticChanged event to monitor when any user-facing diagnostic changes:
+Subscribe to the `diagnosticChanged` event to monitor when any user-facing diagnostic changes:
```javascript
/**
 * Each diagnostic has the following data:
 * - diagnostic: the name of the diagnostic that changed
 * - value: DiagnosticQuality or DiagnosticFlag
 * - valueType: 'DiagnosticQuality' or 'DiagnosticFlag'
 */
const diagnosticChangedListener = (diagnosticInfo: NetworkDiagnosticChangedEventArgs | MediaDiagnosticChangedEventArgs) => {
    console.log(`Diagnostic changed: ${diagnosticInfo.diagnostic}, value: ${diagnosticInfo.value}`);
};

userFacingDiagnostics.network.on('diagnosticChanged', diagnosticChangedListener);
userFacingDiagnostics.media.on('diagnosticChanged', diagnosticChangedListener);
```
-You can learn more about User Facing Diagnostics and the different diagnostic values available in [this article](../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
+To learn more about User Facing Diagnostics and the different diagnostic values available, see [User Facing Diagnostics](../concepts/voice-video-calling/user-facing-diagnostics.md?pivots=platform-web).
-ACS also provides a pre-call diagnostics API. To Access the Pre-Call API, you need to initialize a `callClient`, and provision an Azure Communication Services access token. There you can access the `PreCallDiagnostics` feature and the `startTest` method.
+Azure Communication Services also provides a precall diagnostics API. To access the Pre-Call API, you need to initialize a `callClient` and provision an Azure Communication Services access token. Then you can access the `PreCallDiagnostics` feature and the `startTest` method.
```javascript
import { CallClient, Features } from "@azure/communication-calling";
import { AzureCommunicationTokenCredential } from "@azure/communication-common";

const callClient = new CallClient();
const tokenCredential = new AzureCommunicationTokenCredential("INSERT ACCESS TOKEN");
const preCallDiagnosticsResult = await callClient.feature(Features.PreCallDiagnostics).startTest(tokenCredential);
```
-The Pre-Call API returns a full diagnostic of the device including details like device permissions, availability and compatibility, call quality stats and in-call diagnostics. The results are returned as a PreCallDiagnosticsResult object.
+The Pre-Call API returns a full diagnostic of the device including details like device permissions, availability and compatibility, call quality stats and in-call diagnostics. The results are returned as a `PreCallDiagnosticsResult` object.
```javascript
export declare type PreCallDiagnosticsResult = {
    // Device permissions, availability and compatibility, call quality stats, and in-call diagnostics
};
```
-You can learn more about ensuring precall readiness [here](../concepts/voice-video-calling/pre-call-diagnostics.md).
-
+You can learn more about ensuring precall readiness in [Pre-Call diagnostics](../concepts/voice-video-calling/pre-call-diagnostics.md).
## Event listeners
-### Twilio
+Twilio
```javascript
twilioRoom.on('participantConnected', (participant) => {
    // Handle the newly connected remote participant
});
```
container-apps Azure Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-overview.md
Previously updated : 12/05/2023 Last updated : 01/30/2024
ARM64 based clusters aren't supported at this time.
- Fix to image pull secret retrieval issues - Update placement of Envoy to distribute across available nodes where possible - When container apps fail to provision as a result of revision conflicts, set the provisioning state to failed
-
+
+### Container Apps extension v1.30.6 (January 2024)
+
+ - Update KEDA to v2.12
+ - Update Envoy SC image to v1.0.4
+ - Update Dapr image to v1.11.6
+ - Added default response timeout for Envoy routes to 1800 seconds
+ - Changed Fluent bit default log level to warn
+ - Delay deletion of job pods to ensure log emission
+ - Fixed issue for job pod deletion for failed job executions
+ - Ensure jobs in suspended state also have failed pods deleted
+ - Update to not resolve HTTPOptions for TCP applications
+ - Allow applications to listen on HTTP or HTTPS
+ - Add ability to suspend jobs
+ - Fixed issue where KEDA scaler was failing to create job after stopped job execution
+ - Add startingDeadlineSeconds to Container App Job in case of cluster reboot
+ - Removed heavy logging in Envoy access log server
+ - Updated Monitoring Configuration version for Azure Container Apps on Azure Arc enabled Kubernetes
+
## Next steps [Create a Container Apps connected environment (Preview)](azure-arc-enable-cluster.md)
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
You can get started building your first container app [using the quickstarts](ge
[Azure Functions](../azure-functions/functions-overview.md) is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of your functions on events and bind to other data sources. When building FaaS-style functions, Azure Functions is the ideal option. The Azure Functions programming model is available as a base container image, making it portable to other container based compute platforms allowing teams to reuse code as environment requirements change. ### Azure Spring Apps
-[Azure Spring Apps](../spring-apps/overview.md) is a fully managed service for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
+[Azure Spring Apps](../spring-apps/enterprise/overview.md) is a fully managed service for Spring developers. If you want to run Spring Boot, Spring Cloud or any other Spring applications on Azure, Azure Spring Apps is an ideal option. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more.
### Azure Red Hat OpenShift [Azure Red Hat OpenShift](../openshift/intro-openshift.md) is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
The following tables describe how to configure a collection of NSG allow rules.
# [Consumption only environment](#tab/consumption-only)
+>[!Note]
+> When using Consumption only environments, all [outbound ports required by Azure Kubernetes Service](/azure/aks/outbound-rules-control-egress#required-outbound-network-rules-and-fqdns-for-aks-clusters) are also required for your container app.
+
| Protocol | Source | Source ports | Destination | Destination ports | Description |
|--|--|--|--|--|--|
| TCP | Your container app's subnet<sup>1</sup> | \* | Your Container Registry | Your container registry's port | This is required to communicate with your container registry. For example, when using ACR, you need `AzureContainerRegistry` and `AzureActiveDirectory` for the destination, and the port will be your container registry's port unless using private endpoints.<sup>2</sup> |
container-apps Ingress How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-how-to.md
Disable ingress for your container app by omitting the `ingress` configuration p
::: zone-end
-## <a name="use-additional-tcp-ports"></a>Use additional TCP ports (preview)
+## <a name="use-additional-tcp-ports"></a>Use additional TCP ports
You can expose additional TCP ports from your application. To learn more, see the [ingress concept article](ingress-overview.md#additional-tcp-ports).
-> [Note]
-> To use this preview feature, you must have the container apps CLI extension. Run `az extension add -n containerapp` in order to install the latest version of the container apps CLI extension.
+> [!NOTE]
+> To use this feature, you must have the container apps CLI extension. Run `az extension add -n containerapp` in order to install the latest version of the container apps CLI extension.
::: zone pivot="azure-cli"
container-apps Ingress Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-overview.md
With TCP ingress enabled, your container app:
- Is accessible to other container apps in the same environment via its name (defined by the `name` property in the Container Apps resource) and exposed port number. - Is accessible externally via its fully qualified domain name (FQDN) and exposed port number if the ingress is set to "external".
-## <a name="additional-tcp-ports"></a>Additional TCP ports (preview)
+## <a name="additional-tcp-ports"></a>Additional TCP ports
-In addition to the main HTTP/TCP port for your container apps, you might expose additional TCP ports to enable applications that accept TCP connections on multiple ports. This feature is in preview.
+In addition to the main HTTP/TCP port for your container apps, you might expose additional TCP ports to enable applications that accept TCP connections on multiple ports.
> [!NOTE]
-> As the feature is in preview, make sure you are using the latest preview version of the container apps CLI extension.
+> This feature requires using the latest preview version of the container apps CLI extension.
The following apply to additional TCP ports: - Additional TCP ports can only be external if the app itself is set as external and the container app is using a custom VNet.
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 01/22/2024 Last updated : 01/30/2024 # Azure Policy built-in definitions for Azure Container Instances
container-registry Container Registry Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-streaming.md
description: "Artifact streaming is a feature in Azure Container Registry to enh
+ Last updated 12/14/2023- #customer intent: As a developer, I want artifact streaming capabilities so that I can efficiently deliver and serve containerized applications to end-users in real-time.
Follow the steps to create artifact streaming in the [Azure portal](https://port
[az-acr-artifact-streaming-operation-cancel]: /cli/azure/acr/artifact-streaming/operation#az-acr-artifact-streaming-operation-cancel [az-acr-artifact-streaming-operation-show]: /cli/azure/acr/artifact-streaming/operation#az-acr-artifact-streaming-operation-show [az-acr-artifact-streaming-update]: /cli/azure/acr/artifact-streaming#az-acr-artifact-streaming-update-
container-registry Container Registry Quickstart Task Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-quickstart-task-cli.md
az group create --name myResourceGroup --location eastus
## Create a container registry
-Create a container registry using the [az acr create][az-acr-create] command. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the following example, *myContainerRegistry008* is used. Update this to a unique value.
+Create a container registry using the [az acr create][az-acr-create] command. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the following example, *mycontainerregistry008* is used. Update this to a unique value.
```azurecli-interactive az acr create --resource-group myResourceGroup \
- --name myContainerRegistry008 --sku Basic
+ --name mycontainerregistry008 --sku Basic
``` This example creates a *Basic* registry, a cost-optimized option for developers learning about Azure Container Registry. For details on available service tiers, see [Container registry service tiers][container-registry-skus].
Run the [az acr build][az-acr-build] command, which builds the image and, after
```azurecli-interactive az acr build --image sample/hello-world:v1 \
- --registry myContainerRegistry008 \
+ --registry mycontainerregistry008 \
--file Dockerfile . ```
Now quickly run the image you built and pushed to your registry. Here you use [a
The following example uses $Registry to specify the endpoint of the registry where you run the command: ```azurecli-interactive
-az acr run --registry myContainerRegistry008 \
+az acr run --registry mycontainerregistry008 \
--cmd '$Registry/sample/hello-world:v1' ```
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
copilot Get Monitoring Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/get-monitoring-information.md
Title: Get information about Azure Monitor logs using Microsoft Copilot for Azure (preview) description: Learn about scenarios where Microsoft Copilot for Azure (preview) can provide information about Azure Monitor metrics and logs. Previously updated : 11/15/2023 Last updated : 01/30/2024
You can ask Microsoft Copilot for Azure (preview) questions about logs collected by [Azure Monitor](/azure/azure-monitor/).
-When asked about logs for a particular resource, Microsoft Copilot for Azure (preview) generates an example KQL expression and allows you to further explore the data in Azure Monitor logs. This capability is available for all customers using Log Analytics, and can be used in the context of a particular Azure Kubernetes Service cluster that is using Azure Monitor logs.
+When asked about logs for a particular resource, Microsoft Copilot for Azure (preview) generates an example KQL expression and allows you to further explore the data in Azure Monitor logs. This capability is available for all customers using Log Analytics, and can be used in the context of a particular Azure Kubernetes Service (AKS) cluster that uses Azure Monitor logs.
-When you ask Microsoft Copilot for Azure (preview) about logs, it automatically pulls context when possible, based on the current conversation or on the page you're viewing in the Azure portal. If the context isn't clear, you'll be prompted to specify the resource for which you want information.
+To get details about your container logs, start on the **Logs** page for your AKS cluster.
[!INCLUDE [scenario-note](includes/scenario-note.md)]
When you ask Microsoft Copilot for Azure (preview) about logs, it automatically
## Sample prompts
-Here are a few examples of the kinds of prompts you can use to get information about Azure Monitor logs. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information.
+Here are a few examples of the kinds of prompts you can use to get information about Azure Monitor logs for an AKS cluster. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information.
- "Are there any errors in container logs?" - "Show logs for the last day of pod <provide_pod_name> under namespace <provide_namespace>"
Here are a few examples of the kinds of prompts you can use to get information a
## Next steps - Explore [capabilities](capabilities.md) of Microsoft Copilot for Azure (preview).-- Learn more about [Azure Monitor](/azure/azure-monitor/).
+- Learn more about [Azure Monitor](/azure/azure-monitor/) and [how to use it with AKS clusters](/azure/aks/monitor-aks).
- [Request access](https://aka.ms/MSCopilotforAzurePreview) to Microsoft Copilot for Azure (preview).
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
az cosmosdb sql container merge \
For **shared throughput databases**, start the merge by using `az cosmosdb sql database merge`.
-```azurecli
-az cosmosdb sql database merge \
- --account-name '<cosmos-account-name>'
- --name '<cosmos-database-name>'
- --resource-group '<resource-group-name>'
+```azurecli-interactive
+az cosmosdb sql database merge `
+ --resource-group "<resource-group-name>" `
+ --name "<database-name>" `
+ --account-name "<cosmos-db-account-name>"
```
-```http
-POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/sqlDatabases/{databaseName}/partitionMerge?api-version=2023-11-15-preview
-```
+```azurecli-interactive
+databaseId=$(az cosmosdb sql database show `
+ --resource-group "<resource-group-name>" `
+ --name "<database-name>" `
+ --account-name "<cosmos-db-account-name>" `
+ --query "id" `
+ --output "tsv"
+)
+
+endpoint="https://management.azure.com$databaseId/partitionMerge?api-version=2023-11-15-preview"
+
+az rest `
+ --method "POST" `
+ --url $endpoint `
+ --body "{}"
#### [API for MongoDB](#tab/mongodb/azure-powershell)
az cosmosdb mongodb collection merge \
-For **shared-throughput databases**, start the merge by using [`az cosmosdb mongodb database merge`](/cli/azure/cosmosdb/mongodb/database?view=azure-cli-latest).
+For **shared-throughput databases**, start the merge by using [`az cosmosdb mongodb database merge`](/cli/azure/cosmosdb/mongodb/database).
-```azurecli
+```azurecli-interactive
az cosmosdb mongodb database merge \ --account-name '<cosmos-account-name>' --name '<cosmos-database-name>'
az cosmosdb mongodb database merge \
```
-```http
+```http-interactive
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/mongodbDatabases/{databaseName}/partitionMerge?api-version=2023-11-15-preview ```
cosmos-db Change Feed Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-modes.md
Latest version mode is a persistent record of changes made to items from creates
## All versions and deletes change feed mode (preview)
-All versions and deletes mode (preview) is a persistent record of all changes to items from create, update, and delete operations. You get a record of each change to items in the order that it occurred, including intermediate changes to an item between change feed reads. For example, if an item is created and then updated before you read the change feed, both the create and the update versions of the item appear in the change feed. To read from the change feed in all versions and deletes mode, you must have [continuous backups](../continuous-backup-restore-introduction.md) configured for your Azure Cosmos DB account. Turning on continuous backups creates the all versions and deletes change feed. You can only read changes that occurred within the continuous backup period when using this change feed mode. This mode is only compatible with Azure Cosmos DB for NoSQL accounts. Learn more about how to [sign up for the preview](#get-started).
+All versions and deletes mode (preview) is a persistent record of all changes to items from create, update, and delete operations. You get a record of each change to items in the order that it occurred, including intermediate changes to an item between change feed reads. For example, if an item is created and then updated before you read the change feed, both the create and the update versions of the item appear in the change feed. To read from the change feed in all versions and deletes mode, you must have [continuous backups](../continuous-backup-restore-introduction.md) configured for your Azure Cosmos DB account. Turning on continuous backups creates the all versions and deletes change feed. You can only read changes that occurred within the continuous backup period when using this change feed mode. This mode is only compatible with Azure Cosmos DB for NoSQL accounts. Learn more about how to [sign up for the preview](?tabs=all-versions-and-deletes#get-started).
## Change feed use cases
cosmos-db Client Metrics Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/client-metrics-java.md
+ Last updated 12/14/2023
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
ms.devlang: csharp+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-go.md
ms.devlang: golang+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md
ms.devlang: java+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-nodejs.md
ms.devlang: javascript+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
ms.devlang: python+ Last updated 01/08/2024 zone_pivot_groups: azure-cosmos-db-quickstart-env
cosmos-db Tutorial Spark Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-spark-connector.md
+ Last updated 01/17/2024 zone_pivot_groups: programming-languages-spark-all-minus-sql-r-csharp
cosmos-db Tutorial Springboot Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-springboot-azure-kubernetes-service.md
# Tutorial - Spring Boot Application with Azure Cosmos DB for NoSQL and Azure Kubernetes Service [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
+> [!NOTE]
+> For Spring Boot applications, we recommend using Azure Spring Apps. However, you can still use Azure Kubernetes Service as a destination.
+
In this tutorial, you will set up and deploy a Spring Boot application that exposes REST APIs to perform CRUD operations on data in Azure Cosmos DB (API for NoSQL account). You will package the application as a Docker image, push it to Azure Container Registry, deploy it to Azure Kubernetes Service, and test the application.

## Pre-requisites
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
cosmos-db Reference Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-terraform.md
+ Last updated 01/02/2024
Terraform provides documentation for all supported Azure Cosmos DB for PostgreSQ
## Next steps * See [the latest documentation for Terraform's Azure provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs).
-* Learn to [use Azure CLI authentication in Terraform](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/azure_cli).
+* Learn to [use Azure CLI authentication in Terraform](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/azure_cli).
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Follow these steps to get started:
# Installs Node and the npm packages saved in your package.json file in the build
- - task: NodeTool@0
+ - task: UseNode@1
inputs:
- versionSpec: '14.x'
+ version: '18.x'
displayName: 'Install Node.js' - task: Npm@1
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-overview.md
For other scenarios than binary file copy, copy activity rerun starts from the b
While copying data from source to sink, in scenarios like data lake migration, you can also choose to preserve the metadata and ACLs along with data using copy activity. See [Preserve metadata](copy-activity-preserve-metadata.md) for details.
+## Add metadata tags to a file-based sink
+When the sink is based on Azure Storage (Azure Data Lake Storage or Azure Blob Storage), you can opt to add metadata to the files. This metadata appears as key-value pairs in the file properties.
+For all file-based sink types, you can add metadata that uses dynamic content, such as pipeline parameters, system variables, functions, and variables.
+In addition, for a binary file-based sink, you can add the Last Modified datetime of the source file by using the keyword $$LASTMODIFIED, as well as custom values, as metadata on the sink file.
+ ## Schema and data type mapping See [Schema and data type mapping](copy-activity-schema-and-type-mapping.md) for information about how the Copy activity maps your source data to your sink.
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md
Previously updated : 12/13/2023 Last updated : 01/29/2024
All the linked service types are supported for parameterization.
- Generic HTTP - Generic REST - Google AdWords
+- Google BigQuery
- Informix
+- MariaDB
- Microsoft Access - MySQL - OData
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 01/22/2024 Last updated : 01/30/2024 # Azure Policy built-in definitions for Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
Unsupported resources include:
* Azure API Management in deployment modes other than the supported modes. * PaaS services (multi-tenant) including Azure App Service Environment for Power Apps. * Protected resources that include public IPs created from public IP address prefix.
+* NAT Gateway.
[!INCLUDE [ddos-waf-recommendation](../../includes/ddos-waf-recommendation.md)]
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 01/22/2024 Last updated : 01/30/2024
defender-for-cloud Concept Gcp Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-gcp-connector.md
Title: Defender for Cloud's GCP connector description: Learn how the GCP connector works on Microsoft Defender for Cloud.- Last updated 06/29/2023
The GCP connector allows for continuous monitoring of Google Cloud resources for
The authentication process between Microsoft Defender for Cloud and GCP is a federated authentication process.
-When you onboard to Defender for Cloud, the GCloud template is used to create the following resources as part of the authentication process:
+When you onboard to Defender for Cloud, the GCloud template is used to create the following resources as part of the authentication process:
- Workload identity pool and providers
From here, you can decide which resources you want to protect based on the secur
### Configure access
-Once you've selected the plans, you want to enable and the resources you want to protect you have to configure access between Defender for Cloud and your GCP project.
+After you select the plans you want to enable and the resources you want to protect, you have to configure access between Defender for Cloud and your GCP project.
In this step, you can find the GCloud script that needs to be run on the GCP project that is going to be onboarded. The GCloud script is generated based on the plans you selected to onboard.
From here, you can decide which resources you want to protect based on the secur
### Configure access
-Once you've selected the plans, you want to enable and the resources you want to protect you have to configure access between Defender for Cloud and your GCP project.
+After you select the plans you want to enable and the resources you want to protect, you have to configure access between Defender for Cloud and your GCP project.
When you onboard an organization, there's a section that includes management project details. Similar to other GCP projects, the organization is also considered a project and is utilized by Defender for Cloud to create all of the required resources needed to connect the organization to Defender for Cloud. In the management project details section, you have the choice of: -- Dedicating a management project for Defender for Cloud to include in the GCloud script.
+- Dedicating a management project for Defender for Cloud to include in the GCloud script.
- Provide the details of an already existing project to be used as the management project with Defender for Cloud.
-You need to decide what is your best option for your organization's architecture. We recommend creating a dedicated project for Defender for Cloud.
+You need to decide what is your best option for your organization's architecture. We recommend creating a dedicated project for Defender for Cloud.
The GCloud script is generated based on the plans you selected to onboard. The script creates all of the required resources on your GCP environment so that Defender for Cloud can operate and provide the following security benefits: - Workload identity pool - Workload identity provider for each plan - Custom role to grant Defender for Cloud access to discover and get the project under the onboarded organization-- A service account for each plan
+- A service account for each plan
- A service account for the autoprovisioning service - Organization level policy bindings for each service account - API enablement(s) at the management project level.
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
When you enable the agentless discovery for Kubernetes extension, the following
Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature). - **Discover**: Using the system assigned identity, Defender for Cloud performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS.-- **Bind**: Upon discovery of an AKS cluster, Defender for Cloud performs an AKS bind operation between the created identity and the Kubernetes role *Microsoft.Security/pricings/microsoft-defender-operator*. The role is visible via API and gives Defender for Cloud data plane read permission inside the cluster.
+- **Bind**: Upon discovery of an AKS cluster, Defender for Cloud performs an AKS bind operation by creating a `ClusterRoleBinding` between the created identity and the Kubernetes `ClusterRole` *aks:trustedaccessrole:defender-containers:microsoft-defender-operator*. The `ClusterRole` is visible via API and gives Defender for Cloud data plane read permission inside the cluster.
## [**On-premises / IaaS (Arc)**](#tab/defender-for-container-arch-arc)
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
To complete this quickstart, you need:
|--|--| | Release state: | General Availability. | | Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing). |
-| Required permissions: | **Account Administrator** with permissions to sign in to the Azure portal. <br> **Contributor** to create a connector on the Azure subscription. <br> **Project Collection Administrator** on the Azure DevOps Organization. <br> **Basic or Basic + Test Plans Access Level** in Azure DevOps. <br> **Third-party application access via OAuth**, which must be set to `On` on the Azure DevOps Organization. [Learn more about OAuth and how to enable it in your organizations](/azure/devops/organizations/accounts/change-application-access-policies).|
+| Required permissions: | **Account Administrator** with permissions to sign in to the Azure portal. <br> **Contributor** to create a connector on the Azure subscription. <br> **Project Collection Administrator** on the Azure DevOps Organization. <br> **Basic or Basic + Test Plans Access Level** on the Azure DevOps Organization. <br> _Please ensure you have BOTH Project Collection Administrator permissions and Basic Access Level for all Azure DevOps organizations you wish to onboard. Stakeholder Access Level is not sufficient._ <br> **Third-party application access via OAuth**, which must be set to `On` on the Azure DevOps Organization. [Learn more about OAuth and how to enable it in your organizations](/azure/devops/organizations/accounts/change-application-access-policies).|
| Regions and availability: | Refer to the [support and prerequisites](devops-support.md) section for region support and feature availability. | | Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Microsoft Azure operated by 21Vianet) |
defender-for-cloud Subassessment Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/subassessment-rest-api.md
Azure Resource Graph (ARG) provides a REST API that can be used to programmatically access vulnerability assessment results for both Azure registry and runtime vulnerabilities recommendations. Learn more about [ARG references and query examples](/azure/governance/resource-graph/overview).
-Azure and AWS container registry vulnerabilities sub-assessments are published to ARG as part of the security resources. Learn more about [security sub-assessments](/azure/governance/resource-graph/samples/samples-by-category?tabs=azure-cli#list-container-registry-vulnerability-assessment-results).
+Azure, AWS, and GCP container registry vulnerabilities sub-assessments are published to ARG as part of the security resources. Learn more about [security sub-assessments](/azure/governance/resource-graph/samples/samples-by-category?tabs=azure-cli#list-container-registry-vulnerability-assessment-results).
## ARG query examples To pull specific sub assessments, you need the assessment key.
-* For Azure container vulnerability assessment powered by MDVM the key is `c0b7cfc6-3172-465a-b378-53c7ff2cc0d5`.
-* For AWS container vulnerability assessment powered by MDVM the key is `c27441ae-775c-45be-8ffa-655de37362ce`.
+* For Azure container vulnerability assessment powered by MDVM, the key is `c0b7cfc6-3172-465a-b378-53c7ff2cc0d5`.
+* For AWS container vulnerability assessment powered by MDVM, the key is `c27441ae-775c-45be-8ffa-655de37362ce`.
+* For GCP container vulnerability assessment powered by MDVM, the key is `5cc3a2c1-8397-456f-8792-fe9d0d4c9145`.
The following is a generic security sub assessment query that you can use as a starting point for building your own queries. This query pulls the first sub assessment generated in the last hour. ```kql
securityresources
] ```
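For example, a minimal sketch of such a query for GCP container registry subassessments might look like the following (the regular expression and the `timeGenerated` filter are assumptions based on the schema shown in this article; the key is the GCP assessment key listed above):

```kql
securityresources
| where type == "microsoft.security/assessments/subassessments"
// Pull the assessment key out of the resource ID and keep only GCP container registry vulnerability results
| extend assessmentKey = extract(@"assessments/([^/]+)/subassessments", 1, id)
| where assessmentKey == "5cc3a2c1-8397-456f-8792-fe9d0d4c9145"
// Keep only subassessments generated in the last hour, then take the first one
| where todatetime(properties.timeGenerated) > ago(1h)
| take 1
```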
+### Query result - GCP sub-assessment
+```json
+[
+ {
+ "id": "/subscriptions/{SubscriptionId}/resourceGroups/{ResourceGroup}/providers/ microsoft.security/ securityconnectors/{SecurityConnectorName}/securityentitydata/gar-gcp-repository-{RepositoryName}-{Region}/providers/Microsoft.Security/assessments/5cc3a2c1-8397-456f-8792-fe9d0d4c9145/subassessments/{SubAssessmentId}",
+ "name": "{SubAssessmentId}",
+ "type": "microsoft.security/assessments/subassessments",
+ "tenantId": "{TenantId}",
+ "kind": "",
+ "location": "global",
+ "resourceGroup": "{ResourceGroup}",
+ "subscriptionId": "{SubscriptionId}",
+ "managedBy": "",
+ "sku": null,
+ "plan": null,
+ "properties": {
+ "description": "This vulnerability affects the following vendors: Alpine, Debian, Libtiff, Suse, Ubuntu. To view more details about this vulnerability please visit the vendor website.",
+ "resourceDetails": {
+ "id": "us-central1-docker.pkg.dev/detection-stg-manual-tests-2/hital/nginx@sha256:09e210fe1e7f54647344d278a8d0dee8a4f59f275b72280e8b5a7c18c560057f",
+ "source": "Gcp",
+ "resourceType": "repository",
+ "nativeCloudUniqueIdentifier": "projects/detection-stg-manual-tests-2/locations/us-central1/repositories/hital/dockerImages/nginx@sha256:09e210fe1e7f54647344d278a8d0dee8a4f59f275b72280e8b5a7c18c560057f",
+ "resourceProvider": "gar",
+ "resourceName": "detection-stg-manual-tests-2/hital/nginx",
+ "hierarchyId": "788875449976",
+ "connectorId": "40139bd8-5bae-e3e0-c640-2a45cdcd2d0c",
+ "region": "us-central1"
+ },
+ "displayName": "CVE-2017-11613",
+ "additionalData": {
+ "assessedResourceType": "GcpContainerRegistryVulnerability",
+ "vulnerabilityDetails": {
+ "severity": "Low",
+ "lastModifiedDate": "2023-12-09T00:00:00.0000000Z",
+ "exploitabilityAssessment": {
+ "exploitStepsPublished": false,
+ "exploitStepsVerified": false,
+ "exploitUris": [],
+ "isInExploitKit": false,
+ "types": [
+ "PrivilegeEscalation"
+ ]
+ },
+ "publishedDate": "2017-07-26T00:00:00.0000000Z",
+ "workarounds": [],
+ "references": [
+ {
+ "title": "CVE-2017-11613",
+ "link": "https://nvd.nist.gov/vuln/detail/CVE-2017-11613"
+ },
+ {
+ "title": "129463",
+ "link": "https://exchange.xforce.ibmcloud.com/vulnerabilities/129463"
+ },
+ {
+ "title": "CVE-2017-11613_oval:com.ubuntu.trusty:def:36061000000",
+ "link": "https://security-metadata.canonical.com/oval/com.ubuntu.trusty.usn.oval.xml.bz2"
+ },
+ {
+ "title": "CVE-2017-11613_oval:org.debian:def:85994619016140765823174295608399452222",
+ "link": "https://www.debian.org/security/oval/oval-definitions-stretch.xml"
+ },
+ {
+ "title": "oval:org.opensuse.security:def:201711613",
+ "link": "https://ftp.suse.com/pub/projects/security/oval/suse.linux.enterprise.server.15.xml.gz"
+ },
+ {
+ "title": "CVE-2017-11613-cpe:2.3:a:alpine:tiff:*:*:*:*:*:alpine_3.9:*:*-3.9",
+ "link": "https://security.alpinelinux.org/vuln/CVE-2017-11613"
+ }
+ ],
+ "weaknesses": {
+ "cwe": [
+ {
+ "id": "CWE-20"
+ }
+ ]
+ },
+ "cvss": {
+ "2.0": null,
+ "3.0": {
+ "cvssVectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:L/E:U/RL:U/RC:R",
+ "base": 3.3
+ }
+ },
+ "cveId": "CVE-2017-11613",
+ "cpe": {
+ "version": "*",
+ "language": "*",
+ "vendor": "debian",
+ "softwareEdition": "*",
+ "targetSoftware": "debian_9",
+ "targetHardware": "*",
+ "product": "tiff",
+ "edition": "*",
+ "update": "*",
+ "other": "*",
+ "part": "Applications",
+ "uri": "cpe:2.3:a:debian:tiff:*:*:*:*:*:debian_9:*:*"
+ }
+ },
+ "cvssV30Score": 3.3,
+ "artifactDetails": {
+ "lastPushedToRegistryUTC": "2023-12-11T08:33:13.0000000Z",
+ "repositoryName": "detection-stg-manual-tests-2/hital/nginx",
+ "registryHost": "us-central1-docker.pkg.dev",
+ "artifactType": "ContainerImage",
+ "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
+ "digest": "sha256:09e210fe1e7f54647344d278a8d0dee8a4f59f275b72280e8b5a7c18c560057f",
+ "tags": [
+ "1.12"
+ ]
+ },
+ "softwareDetails": {
+ "version": "4.0.8-2+deb9u2",
+ "language": "",
+ "fixedVersion": "4.0.8-2+deb9u4",
+ "vendor": "debian",
+ "category": "OS",
+ "osDetails": {
+ "osPlatform": "linux",
+ "osVersion": "debian_9"
+ },
+ "packageName": "tiff",
+ "fixReference": {
+ "description": "DSA-4349-1: tiff security update 2018 November 30",
+ "id": "DSA-4349-1",
+ "releaseDate": "2018-11-30T22:41:54.0000000Z",
+ "url": "https://security-tracker.debian.org/tracker/DSA-4349-1"
+ },
+ "fixStatus": "FixAvailable",
+ "evidence": [
+ "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^tiff:.* -e .*:tiff: | cut -f 1 -d ':' | xargs dpkg-query -s",
+ "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^tiff:.* -e .*:tiff: | cut -f 1 -d ':' | xargs dpkg-query -s"
+ ]
+ }
+ },
+ "timeGenerated": "2023-12-11T10:25:43.8751687Z",
+ "remediation": "Create new image with updated package tiff with version 4.0.8-2+deb9u4 or higher.",
+ "id": "CVE-2017-11613",
+ "status": {
+ "severity": "Low",
+ "code": "Unhealthy"
+ }
+ },
+ "tags": null,
+ "identity": null,
+ "zones": null,
+ "extendedLocation": null,
+ "assessmentKey": "5cc3a2c1-8397-456f-8792-fe9d0d4c9145",
+ "timeGenerated": "2023-12-11T10:25:43.8751687Z"
+ }
+]
+```
+ ## Definitions | Name | Description |
Other context fields for Azure container registry vulnerability assessment
| **Name** | **Type** | **Description** | | -- | -- | -- |
-| assessedResourceType | string: <br> AzureContainerRegistryVulnerability<br> AwsContainerRegistryVulnerability | Subassessment resource type |
+| assessedResourceType | string: <br> AzureContainerRegistryVulnerability<br> AwsContainerRegistryVulnerability <br> GcpContainerRegistryVulnerability | Subassessment resource type |
| cvssV30Score | Numeric | CVSS V3 Score | | vulnerabilityDetails | VulnerabilityDetails | | | artifactDetails | ArtifactDetails | |
Details of the Azure resource that was assessed
| ID | string | Azure resource ID of the assessed resource | | source | string: Azure | The platform where the assessed resource resides |
-### ResourceDetails - AWS
+### ResourceDetails - AWS / GCP
-Details of the AWS resource that was assessed
+Details of the AWS/GCP resource that was assessed
| **Name** | **Type** | **Description** | | | | | | id | string | Azure resource ID of the assessed resource |
-| source | string: Aws | The platform where the assessed resource resides |
+| source | string: Aws/Gcp | The platform where the assessed resource resides |
| connectorId | string | Connector ID | | region | string | Region | | nativeCloudUniqueIdentifier | string | Native Cloud's Resource ID of the Assessed resource in |
-| resourceProvider | string: ecr | The assessed resource provider |
+| resourceProvider | string: ecr/gar/gcr | The assessed resource provider |
| resourceType | string | The assessed resource type | | resourceName | string | The assessed resource name |
-| hierarchyId | string | Account ID (Aws) |
+| hierarchyId | string | Account ID (Aws) / Project ID (Gcp) |
### SubAssessmentStatus
Programmatic code for the status of the assessment
| **Name** | **Type** | **Description**| | | | | | Healthy | string | The resource is healthy |
-| NotApplicable | string | Assessment for this resource did not happen |
+| NotApplicable | string | Assessment for this resource didn't happen |
| Unhealthy | string | The resource has a security issue that needs to be addressed | ### SecuritySubAssessment
Security subassessment on a resource
| properties.id | string | Vulnerability ID | | properties.impact | string | Description of the impact of this subassessment | | properties.remediation | string | Information on how to remediate this subassessment |
-| properties.resourceDetails | ResourceDetails: <br> [Azure Resource Details](/azure/defender-for-cloud/subassessment-rest-api#resourcedetailsazure) <br> [AWS Resource Details](/azure/defender-for-cloud/subassessment-rest-api#resourcedetailsaws) | Details of the resource that was assessed |
+| properties.resourceDetails | ResourceDetails: <br> [Azure Resource Details](/azure/defender-for-cloud/subassessment-rest-api#resourcedetailsazure) <br> [AWS/GCP Resource Details](/azure/defender-for-cloud/subassessment-rest-api#resourcedetailsaws--gcp) | Details of the resource that was assessed |
| properties.status | [SubAssessmentStatus](/azure/defender-for-cloud/subassessment-rest-api#subassessmentstatus) | Status of the subassessment | | properties.timeGenerated | string | The date and time the subassessment was generated | | type | string | Resource type |
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Previously updated : 06/18/2023 Last updated : 01/24/2024 # Microsoft Defender for Cloud troubleshooting guide
Last updated 06/18/2023
This guide is for IT professionals, information security analysts, and cloud administrators whose organizations need to troubleshoot problems related to Microsoft Defender for Cloud. > [!TIP]
-> When you're facing a problem or need advice from our support team, the **Diagnose and solve problems** section of the Azure portal is good place to look for solutions.
+> When you're facing a problem or need advice from our support team, the **Diagnose and solve problems** section of the Azure portal is a good place to look for solutions.
>
-> :::image type="content" source="media/release-notes/solve-problems.png" alt-text="Screenshot of the Azure portal that shows the page for diagnosing and solving problems in Defender for Cloud.":::
+> :::image type="content" source="media/release-notes/solve-problems.png" alt-text="Screenshot of the Azure portal that shows the page for diagnosing and solving problems in Defender for Cloud." lightbox="media/release-notes/solve-problems.png":::
## Use the audit log to investigate problems
Just like Azure Monitor, Defender for Cloud uses the Log Analytics agent to coll
Open the services management console (*services.msc*) to make sure that the Log Analytics agent service is running. To see which version of the agent you have, open Task Manager. On the **Processes** tab, locate the Log Analytics agent service, right-click it, and then select **Properties**. On the **Details** tab, look for the file version. ### Check installation scenarios for the Log Analytics agent
If you experience problems with loading the workload protection dashboard, make
If you can't onboard your Azure DevOps organization, try the following troubleshooting tips:
+- Make sure you're using a non-preview version of the [Azure portal](https://portal.azure.com); the authorize step doesn't work in the Azure preview portal.
+ - It's important to know which account you're signed in to when you authorize the access, because that will be the account that the system uses for onboarding. Your account can be associated with the same email address but also associated with different tenants. Make sure that you select the right account/tenant combination. If you need to change the combination: 1. On your [Azure DevOps profile page](https://app.vssps.visualstudio.com/profile/view), use the dropdown menu to select another account.
- :::image type="content" source="./media/troubleshooting-guide/authorize-select-tenant.png" alt-text="Screenshot of the Azure DevOps profile page that's used to select an account.":::
+ :::image type="content" source="./media/troubleshooting-guide/authorize-select-tenant.png" alt-text="Screenshot of the Azure DevOps profile page that's used to select an account." lightbox="media/troubleshooting-guide/authorize-select-tenant.png":::
1. After you select the correct account/tenant combination, go to **Environment settings** in Defender for Cloud and edit your Azure DevOps connector. Reauthorize the connector to update it with the correct account/tenant combination. You should then see the correct list of organizations on the dropdown menu.
You can also find troubleshooting information for Defender for Cloud at the [Def
If you need more assistance, you can open a new support request on the Azure portal. On the **Help + support** page, select **Create a support request**. ## See also
dev-box How To Configure Stop Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-stop-schedule.md
description: Learn how to configure an auto-stop schedule to automatically shut down dev boxes in a pool at a specified time and save on costs. + Last updated 01/10/2024
dev-box Monitor Dev Box Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/monitor-dev-box-reference.md
+
+ Title: Monitoring Microsoft Dev Box data reference
+
+description: Important reference material needed when you monitor Dev Box. Schema reference for dev center diagnostic logs. Review the included Azure Storage and Azure Monitor Logs properties.
++++++ Last updated : 01/30/2023++
+# Monitoring Microsoft Dev Box data reference
+
+This article provides a reference for log and metric data collected for a Microsoft Dev Box dev center. You can use the collected data to analyze the performance and availability of resources within your dev center. For details about how to collect and analyze monitoring data for your dev center, see [Monitoring Microsoft Dev Box](monitor-dev-box.md).
+
+## Resource logs
+
+The following table lists the properties of resource logs in a Microsoft Dev Box dev center. The resource logs are collected into Azure Monitor Logs or Azure Storage. In Azure Monitor, logs are collected in the **DevCenterDiagnosticLogs** table under the resource provider name of `MICROSOFT.DEVCENTER`.
+
+| Azure Storage field or property | Azure Monitor Logs property | Description |
+| | | |
+| **time** | **TimeGenerated** | The date and time (UTC) when the operation occurred. |
+| **resourceId** | **ResourceId** | The dev center resource for which logs are enabled. |
+| **operationName** | **OperationName** | Name of the operation. If the event represents an Azure role-based access control (RBAC) operation, specify the Azure RBAC operation name (for example, `Microsoft.DevCenter/projects/users/devboxes/write`). This name is typically modeled in the form of an Azure Resource Manager operation, even if it's not a documented Resource Manager operation: (`Microsoft.<providerName>/<resourceType>/<subtype>/<Write/Read/Delete/Action>`). |
+| **identity** | **CallerIdentity** | The OID of the caller of the event. |
+| **TargetResourceId** | **ResourceId** | The subresource that pertains to the request. Depending on the operation performed, this value might point to a `devbox` or `environment`. |
+| **resultSignature** | **ResponseCode** | The HTTP status code returned for the operation. |
+| **resultType** | **OperationResult** | Indicates whether the operation failed or succeeded. |
+| **correlationId** | **CorrelationId** | The unique correlation ID for the operation that can be shared with the app team to support further investigation. |
+
+For a list of all Azure Monitor log categories and links to associated schemas, see [Common and service-specific schemas for Azure resource logs](../azure-monitor/essentials/resource-logs-schema.md).
+
+## Azure Monitor Logs tables
+
+A dev center uses Kusto tables from Azure Monitor Logs. You can query these tables with Log Analytics. For a list of Kusto tables that a dev center uses, see the [Azure Monitor Logs table reference organized by resource type](/azure/azure-monitor/reference/tables/tables-resourcetype#dev-centers).
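For example, a minimal Log Analytics sketch over the `DevCenterDiagnosticLogs` table might look like the following (the column names come from the resource logs table in this article; the `OperationResult` value is an assumption, so check the values emitted in your workspace):

```kql
DevCenterDiagnosticLogs
// Data plane operations from the last day that didn't succeed (result string assumed)
| where TimeGenerated > ago(1d)
| where OperationResult != "Succeeded"
| project TimeGenerated, OperationName, CallerIdentity, ResponseCode, CorrelationId
| order by TimeGenerated desc
```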
+
+## Related content
+
+- [Monitor Dev Box](monitor-dev-box.md)
+- [Monitor Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md)
dev-box Monitor Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/monitor-dev-box.md
+
+ Title: Monitoring Microsoft Dev Box
+
+description: Start here to learn how to monitor Dev Box. Learn how to use Azure diagnostic logs to see an audit history for your dev center.
++++++ Last updated : 01/30/2023++
+# Monitoring Microsoft Dev Box
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by Microsoft Dev Box. Microsoft Dev Box uses [Azure Monitor](/azure/azure-monitor/overview). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+
+## Monitoring data
+
+Microsoft Dev Box collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+See [Monitoring *Dev Box* data reference](monitor-dev-box-reference.md) for detailed information on the metrics and logs created by Dev Box.
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Dev box* are listed in [Microsoft Dev Box monitoring data reference](monitor-dev-box-reference.md#resource-logs).
+
+### Configure Azure diagnostic logs for a dev center
+
+With Azure diagnostic logs for DevCenter, you can view audit logs for dataplane operations in your dev center. These logs can be routed to any of the following destinations:
+
+* Azure Storage account
+* Log Analytics workspace
+
+This feature is available on all dev centers.
+
+Diagnostic logs allow you to export basic usage information from your dev center to different kinds of sources so that you can consume it in a customized way. The dataplane audit logs expose information about CRUD operations for dev boxes within your dev center, including, for example, start and stop commands executed on dev boxes. Some sample ways you can export this data:
+
+* Export data to blob storage and then export it to CSV.
+* Export data to Azure Monitor logs, and view and query the data in your own Log Analytics workspace.
+
+To learn more about the different types of logs available for dev centers, see [DevCenter Diagnostic Logs Reference](monitor-reference.md).
+
+### Enable logging with the Azure portal
+
+Follow these steps to enable logging for your Azure DevCenter resource:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the Azure portal, navigate to **All resources** -> **your-devcenter**
+
+3. In the **Monitoring** section, select **Diagnostics settings**.
+
+4. On the page that opens, select **Add diagnostic setting**.
++
+#### Enable logging with Azure Storage
+
+To use a storage account to store the logs, follow these steps:
+
+ >[!NOTE]
+ >A storage account in the same region as your dev center is required to complete these steps. For more information, see **[Create an Azure Storage account](../storage/common/storage-account-create.md?tabs=azure-portal&toc=%2fazure%2fstorage%2fblobs%2ftoc.json)**.
+
+1. For **Diagnostic setting name**, enter a name for your diagnostic log settings.
+
+2. Select **Archive to a storage account**, then select **Dataplane audit logs**.
+
+3. For **Retention (days)**, choose the number of retention days. A retention of zero days stores the logs indefinitely.
+
+4. Select the subscription and storage account for the logs.
+
+5. Select **Save**.
+
+#### Send to Log Analytics
+
+To use Log Analytics for the logs, follow these steps:
+
+>[!NOTE]
+>A Log Analytics workspace is required to complete these steps. For more information, see **[Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md)**.
+
+1. For **Diagnostic setting name**, enter a name for your diagnostic log settings.
+
+2. Select **Send to Log Analytics**, then select **Dataplane audit logs**.
+
+3. Select the subscription and Log Analytics workspace for the logs.
+
+4. Select **Save**.
+
+### Enable logging with PowerShell
+
+The following example shows how to enable diagnostic logs via Azure PowerShell cmdlets.
++
+#### Enable diagnostic logs in a storage account
+
+1. Sign in to Azure PowerShell:
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
+
+2. To enable Diagnostic Logs in a storage account, enter these commands. Replace the variables with your values:
+
+ ```azurepowershell-interactive
+    $rg = "<your-resource-group-name>"
+    $devcenterid = "<your-devcenter-ARM-resource-id>"
+    $storageacctid = "<your-storage-account-resource-id>"
+    $diagname = "<your-diagnostic-setting-name>"
+
+ $log = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category DataplaneAuditEvent -RetentionPolicyDay 7 -RetentionPolicyEnabled $true
+
+ New-AzDiagnosticSetting -Name $diagname -ResourceId $devcenterid -StorageAccountId $storageacctid -Log $log
+ ```
+
+#### Enable diagnostic logs for a Log Analytics workspace
+
+1. Sign in to Azure PowerShell:
+
+ ```azurepowershell-interactive
+ Connect-AzAccount
+ ```
+2. To enable Diagnostic Logs for a Log Analytics workspace, enter these commands. Replace the variables with your values:
+
+ ```azurepowershell-interactive
+    $rg = "<your-resource-group-name>"
+    $devcenterid = "<your-devcenter-ARM-resource-id>"
+    $workspaceid = "<your-log-analytics-workspace-resource-id>"
+    $diagname = "<your-diagnostic-setting-name>"
+
+ $log = New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category DataplaneAuditEvent -RetentionPolicyDay 7 -RetentionPolicyEnabled $true
+
+ New-AzDiagnosticSetting -Name $diagname -ResourceId $devcenterid -WorkspaceId $workspaceid -Log $log
+ ```
+
+## Analyzing Logs
+This section describes existing tables for DevCenter diagnostic logs and how to query them.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Common and service-specific schemas for Azure resource logs](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema).
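
If you want to see exactly which common and service-specific columns a table exposes in your workspace, the `getschema` operator lists them. A small sketch, run against the diagnostics table described below:

```kusto
// List the columns (common resource log schema plus DevCenter-specific fields).
DevCenterDiagnosticLogs
| getschema
```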
+
+DevCenter stores data in the following tables.
+
+| Table | Description |
+|:|:|
+| DevCenterDiagnosticLogs | Table used to store dataplane request/response information on dev boxes or environments within the dev center. |
+| DevCenterResourceOperationLogs | Operation logs pertaining to DevCenter resources, including information around resource health status changes. |
+| DevCenterBillingEventLogs | Billing events related to DevCenter resources. This log contains information about the quantity and unit charged per meter. |
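
As a quick sanity check after routing logs to a workspace, you can count recent records per table. This is a minimal sketch; `isfuzzy=true` keeps the query from failing if one of the tables hasn't received any data yet:

```kusto
// Event counts per DevCenter table over the last 7 days.
union isfuzzy=true DevCenterDiagnosticLogs, DevCenterResourceOperationLogs, DevCenterBillingEventLogs
| where TimeGenerated > ago(7d)
| summarize Events = count() by Type
```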
+
+## Sample Kusto Queries
+After enabling diagnostic settings on your dev center, you should be able to view audit logs for the tables within a Log Analytics workspace.
+
+Here are some queries that you can enter into Log search to help you monitor your dev boxes.
+
+To query for all dataplane logs from DevCenter:
+
+```kusto
+DevCenterDiagnosticLogs
+```
+
+To query for a filtered list of dataplane logs, specific to a single dev box:
+
+```kusto
+DevCenterDiagnosticLogs
+| where TargetResourceId contains "<devbox-name>"
+```
+
+To generate a chart for dataplane logs, grouped by operation result status:
+
+```kusto
+DevCenterDiagnosticLogs
+| summarize count() by OperationResult
+| render piechart
+```
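
To see how results trend over time rather than as a single snapshot, you can also bucket the dataplane logs by day. A small sketch along the same lines as the queries above:

```kusto
// Daily count of dataplane operations, split by result status.
DevCenterDiagnosticLogs
| where TimeGenerated > ago(14d)
| summarize count() by OperationResult, bin(TimeGenerated, 1d)
| render timechart
```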
+
+These examples are just a small sample of the rich queries that can be performed in Monitor using the Kusto Query Language. For more information, see [samples for Kusto queries](/azure/data-explorer/kusto/query/samples?pivots=azuremonitor).
+
+## Related content
+
+- [Monitor Dev Box](monitor-dev-box.md)
+- [Azure Diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md)
+- [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md)
+- [Azure Log Analytics REST API](/rest/api/loganalytics)
dev-box Tutorial Configure Multiple Monitors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-configure-multiple-monitors.md
+
+ Title: 'Tutorial: Configure multiple monitors for your dev box'
+
+description: In this tutorial, you configure an RDP client to use multiple monitors when connecting to a dev box.
++++ Last updated : 01/30/2023+++
+# Tutorial: Use multiple monitors on a dev box
+
+In this tutorial, you configure a remote desktop client to use dual or more monitors when you connect to your dev box.
+
+Using multiple monitors gives you more screen real estate to work with. You can spread your work across multiple screens, or use one screen for your development environment and another for documentation, email, or messaging.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure the remote desktop client for multiple monitors.
+
+## Prerequisites
+
+To complete this tutorial, you must [install the Remote desktop app](tutorial-connect-to-dev-box-with-remote-desktop-app.md#download-the-remote-desktop-client-for-windows) on your local machine.
+
+## Configure Remote Desktop to use multiple monitors
+
+When you connect to your cloud-hosted developer machine in Microsoft Dev Box by using a remote desktop app, you can take advantage of a multi-monitor setup. Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both support up to 16 monitors.
+
+Use the following steps to configure Remote Desktop to use multiple monitors.
+
+# [Windows](#tab/windows)
+
+1. Open the Remote Desktop app.
+
+ :::image type="content" source="./media/tutorial-configure-multiple-monitors/remote-desktop-app.png" alt-text="Screenshot of the Windows 11 start menu with Remote desktop showing and open highlighted.":::
+
+1. Right-click the dev box you want to configure, and then select **Settings**.
+
+1. On the settings pane, turn off **Use default settings**.
+
+ :::image type="content" source="media/tutorial-configure-multiple-monitors/turn-off-default-settings.png" alt-text="Screenshot showing the Use default settings slider.":::
+
+1. In **Display Settings**, in the **Display configuration** list, select the displays to use and configure the options:
+
+ | Value | Description | Options |
+ ||||
+ | All displays | Remote desktop uses all available displays. | - Use only a single display when in windowed mode. <br> - Fit the remote session to the window. |
+    | Single display | Remote desktop uses a single display. | - Start the session in full screen mode. <br> - Fit the remote session to the window. <br> - Update the resolution when a window is resized. |
+ | Select displays | Remote Desktop uses only the monitors you select. | - Maximize the session to the current displays. <br> - Use only a single display when in windowed mode. <br> - Fit the remote connection session to the window. |
+
+ :::image type="content" source="media/tutorial-configure-multiple-monitors/remote-desktop-select-display.png" alt-text="Screenshot showing the Remote Desktop display settings, highlighting the option to select the number of displays.":::
+
+1. Close the settings pane, and then select your dev box to begin the Remote Desktop session.
+
+# [Non-Windows](#tab/non-Windows)
+
+1. Open Remote Desktop.
+
+1. Select **PCs**.
+
+1. On the Connections menu, select **Edit PC**.
+
+1. Select **Display**.
+
+1. On the Display tab, select **Use all monitors**, and then select **Save**.
+
+ :::image type="content" source="media/tutorial-configure-multiple-monitors/remote-desktop-for-mac.png" alt-text="Screenshot showing the Edit PC dialog box with the display configuration options.":::
+
+1. Select your dev box to begin the Remote Desktop session.
+
+
+
+## Clean up resources
+
+Dev boxes incur costs whenever they're running. When you finish using your dev box, shut down or stop it to avoid incurring unnecessary costs.
+
+You can stop a dev box from the developer portal:
+
+1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
+
+1. For the dev box that you want to stop, select More options (**...**), and then select **Stop**.
+
+ :::image type="content" source="./media/tutorial-configure-multiple-monitors/stop-dev-box.png" alt-text="Screenshot of the menu command to stop a dev box.":::
+
+The dev box might take a few moments to stop.
+
+## Related content
+
+- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)
+- Learn how to [connect to a dev box through the browser](./quickstart-create-dev-box.md#connect-to-a-dev-box)
dev-box Tutorial Connect To Dev Box With Remote Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md
Title: 'Tutorial: Use a Remote Desktop client to connect to a dev box'
-description: In this tutorial, you download and use a remote desktop client to connect to a dev box in Microsoft Dev Box. Configure the RDP client for a multi-monitor setup.
+description: In this tutorial, you download and use a remote desktop client to connect to a dev box in Microsoft Dev Box.
Previously updated : 12/15/2023 Last updated : 01/30/2024 # Tutorial: Use a remote desktop client to connect to a dev box
-In this tutorial, you download and use a remote desktop client application to connect to a dev box in Microsoft Dev Box. Learn how to configure the application to take advantage of a multi-monitor setup.
+In this tutorial, you download and use a remote desktop client application to connect to a dev box.
Remote Desktop apps let you use and control a dev box from almost any device. For your desktop or laptop, you can choose to download the Remote Desktop client for Windows Desktop or Microsoft Remote Desktop for Mac. You can also download a Remote Desktop app for your mobile device: Microsoft Remote Desktop for iOS or Microsoft Remote Desktop for Android.
-Alternately, you can also connect to your dev box through the browser from the Microsoft Dev Box developer portal.
+Alternatively, you can connect to your dev box through the browser from the Microsoft Dev Box developer portal.
In this tutorial, you learn how to: > [!div class="checklist"] > * Download a remote desktop client.
+> * Connect to a dev box by using a subscription URL.
> * Connect to an existing dev box.
-> * Configure the remote desktop client for multiple monitors.
## Prerequisites
-To complete this tutorial, you must first:
--- [Configure Microsoft Dev Box](./quickstart-configure-dev-box-service.md).-- [Create a dev box](./quickstart-create-dev-box.md#create-a-dev-box) on the [developer portal](https://aka.ms/devbox-portal).
+To complete this tutorial, you must have access to a dev box through the developer portal.
## Download the remote desktop client and connect to your dev box
To download and set up the Remote Desktop client for Windows:
:::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/connect-remote-desktop-client.png" alt-text="Screenshot that shows how to select your platform configuration for the Windows Remote Desktop client.":::
-1. After you select your platform configuration, click the platform configuration to start the download process for the Remote Desktop client.
+1. After you select your platform configuration, select it again to start the download process for the Remote Desktop client.
- :::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/download-windows-desktop.png" alt-text="Screenshot that shows how to click the platform configuration again to download the Windows Remote Desktop client.":::
+ :::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/download-windows-desktop.png" alt-text="Screenshot that shows how to select the platform configuration again to download the Windows Remote Desktop client.":::
1. After the Remote Desktop MSI file downloads to your computer, open the file and follow the prompts to install the Remote Desktop app.
To use a non-Windows Remote Desktop client to connect to your dev box:
1. Your dev box appears in the Remote Desktop client's **Workspaces** area. Double-click the dev box to connect. :::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/non-windows-rdp-connect-dev-box.png" alt-text="Screenshot of a dev box in a non-Windows Remote Desktop client Workspace." lightbox="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/non-windows-rdp-connect-dev-box.png":::-
-## Configure Remote Desktop to use multiple monitors
-
-When you connect to your cloud-hosted developer machine in Microsoft Dev Box, you can take advantage of a multi-monitor setup. Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both support up to 16 monitors.
-
-Use the following steps to configure Remote Desktop to use multiple monitors.
-
-# [Windows](#tab/windows)
-
-1. Open Remote Desktop.
-
-1. Right-click the dev box you want to configure, and then select **Settings**.
-
-1. On the settings pane, turn off **Use default settings**.
-
- :::image type="content" source="media/tutorial-connect-to-dev-box-with-remote-desktop-app/turn-off-default-settings.png" alt-text="Screenshot showing the Use default settings slider.":::
-
-1. In **Display Settings**, in the **Display configuration** list, select the displays to use and configure the options:
-
- | Value | Description | Options |
- ||||
- | All displays | Remote desktop uses all available displays. | - Use only a single display when in windowed mode. <br> - Fit the remote session to the window. |
- | Single display | Remote desktop uses a single display. | - Start the session in full screen mode. <br> - Fit the remote session to the window. <br> - Update the resolution on when a window is resized. |
- | Select displays | Remote Desktop uses only the monitors you select. | - Maximize the session to the current displays. <br> - Use only a single display when in windowed mode. <br> - Fit the remote connection session to the window. |
-
- :::image type="content" source="media/tutorial-connect-to-dev-box-with-remote-desktop-app/remote-desktop-select-display.png" alt-text="Screenshot showing the Remote Desktop display settings, highlighting the option to select the number of displays.":::
-
-1. Close the settings pane, and then select your dev box to begin the Remote Desktop session.
-
-# [Non-Windows](#tab/non-Windows)
-
-1. Open Remote Desktop.
-
-1. Select **PCs**.
-
-1. On the Connections menu, select **Edit PC**.
-
-1. Select **Display**.
-
-1. On the Display tab, select **Use all monitors**, and then select **Save**.
-
- :::image type="content" source="media/tutorial-connect-to-dev-box-with-remote-desktop-app/remote-desktop-for-mac.png" alt-text="Screenshot showing the Edit PC dialog box with the display configuration options.":::
-
-1. Select your dev box to begin the Remote Desktop session.
-
-
- ## Clean up resources Dev boxes incur costs whenever they're running. When you finish using your dev box, shut down or stop it to avoid incurring unnecessary costs.
The dev box might take a few moments to stop.
## Related content -- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)-- Learn how to [connect to a dev box through the browser](./quickstart-create-dev-box.md#connect-to-a-dev-box)
+- Learn how to [configure multiple monitors](./tutorial-configure-multiple-monitors.md) for your Remote Desktop client.
+- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)
education-hub Create Assignment Allocate Credit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/create-assignment-allocate-credit.md
Title: Create an assignment and allocate credit description: Explains how to create an assignment, allocate credit, and invite students to a course in the Azure Education Hub. -+ Last updated 06/30/2020
education-hub Set Up Course Classroom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/set-up-course-classroom.md
Title: Set up a course and create a classroom description: This quickstart explains how to set up a course and classroom in Azure Education Hub. -+ Last updated 06/30/2020
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
In this article, you learn how to manage users and their memberships in OSDU gro
- Generate the service principal access token that's needed to call the Entitlement APIs. See [How to generate auth token](how-to-generate-auth-token.md). - Keep all the parameter values handy. They're needed to run different user management requests via the Entitlements API.
-## Fetch OID
+## Fetch object-id
-The object ID (OID) is the Microsoft Entra user OID.
+The Azure object ID (OID) is the Microsoft Entra user OID.
1. Find the OID of the users first. If you're managing an application's access, you must find and use the application ID (or client ID) instead of the OID.
-1. Input the OID of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy instance.
+1. Input the OID of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy instance. You can't use the user's email ID as the parameter value; you must use the object ID.
:::image type="content" source="media/how-to-manage-users/azure-active-directory-object-id.png" alt-text="Screenshot that shows finding the object ID from Microsoft Entra ID.":::
The object ID (OID) is the Microsoft Entra user OID.
If you try to directly use your own access token for adding entitlements, it results in a 401 error. The `client-id` access token must be used to add the first set of users in the system. Those users (with admin access) can then manage more users with their own access token. 1. Use the `client-id` access token to do the following steps by using the commands outlined in the following sections: 1. Add the user to the `users@<data-partition-id>.<domain>` OSDU group.
- 2. Add the user to the `users.datalake.ops@<data-partition-id>.<domain>` OSDU group.
+   2. Add the user to the `users.datalake.ops@<data-partition-id>.<domain>` OSDU group to grant access to all the service groups.
+   3. Add the user to the `users.data.root@<data-partition-id>.<domain>` OSDU group to grant access to all the data groups.
1. The user becomes the admin of the data partition. The admin can then add or remove more users to the required entitlement groups: 1. Get the admin's auth token by using [Generate user access token](how-to-generate-auth-token.md#generate-the-user-auth-token) and by using the same `client-id` and `client-secret` values. 1. Get the OSDU group, such as `service.legal.editor@<data-partition-id>.<domain>`, to which you want to add more users by using the admin's access token. 1. Add more users to that OSDU group by using the admin's access token.
+
+To learn more about the OSDU bootstrap groups, see [Bootstrap groups structure](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/master/docs/bootstrap/bootstrap-groups-structure.md).
## Get the list of all available groups in a data partition
Run the following curl command in Azure Cloud Shell to get all the groups that a
1. The value to be sent for the parameter `email` is the OID of the user and not the user's email address. ```bash
- curl --location --request POST 'https://<adme-url>/api/entitlements/v2/groups/<group-name>@<data-partition-id>.dataservices.energy/members' \
+ curl --location --request POST 'https://<adme-url>/api/entitlements/v2/groups/<group-name>@<data-partition-id>.<domain>/members' \
--header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer <access_token>' \ --header 'Content-Type: application/json' \
Run the following curl command in Azure Cloud Shell to get all the groups that a
1. Run the following curl command in Azure Cloud Shell to get all the groups associated with the user. ```bash
- curl --location --request GET 'https://<adme-url>/api/entitlements/v2/members/<OBJECT_ID>/groups?type=none' \
+    curl --location --request GET 'https://<adme-url>/api/entitlements/v2/members/<object-id>/groups?type=none' \
--header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer <access_token>' ```
Run the following curl command in Azure Cloud Shell to get all the groups that a
1. *Do not* delete the OWNER of a group unless you have another OWNER who can manage users in that group. ```bash
- curl --location --request DELETE 'https://<adme-url>/api/entitlements/v2/members/<OBJECT_ID>' \
+ curl --location --request DELETE 'https://<adme-url>/api/entitlements/v2/members/<object-id>' \
--header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer <access_token>' ```
event-grid Add Identity Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/add-identity-roles.md
Title: Add managed identity to a role on Azure Event Grid destination
description: This article describes how to add managed identity to Azure roles on destinations such as Azure Service Bus and Azure Event Hubs. Previously updated : 03/25/2021 Last updated : 01/31/2024 # Grant managed identity the access to Event Grid destination
Assign a system-assigned managed identity by using instructions from the followi
- [System topics](enable-identity-system-topics.md) ## Supported destinations and Azure roles
-After you enable identity for your event grid custom topic or domain, Azure automatically creates an identity in Microsoft Entra ID. Add this identity to appropriate Azure roles so that the custom topic or domain can forward events to supported destinations. For example, add the identity to the **Azure Event Hubs Data Sender** role for an Azure Event Hubs namespace so that the event grid custom topic can forward events to event hubs in that namespace.
+After you enable identity for your Event Grid custom topic or domain, Azure automatically creates an identity in Microsoft Entra ID. Add this identity to appropriate Azure roles so that the custom topic or domain can forward events to supported destinations. For example, add the identity to the **Azure Event Hubs Data Sender** role for an Azure Event Hubs namespace so that the Event Grid custom topic can forward events to event hubs in that namespace.
-Currently, Azure event grid supports custom topics or domains configured with a system-assigned managed identity to forward events to the following destinations. This table also gives you the roles that the identity should be in so that the custom topic can forward the events.
+Currently, Azure Event Grid supports custom topics or domains configured with a system-assigned managed identity to forward events to the following destinations. This table also gives you the roles that the identity should be in so that the custom topic can forward the events.
| Destination | Azure role | | -- | |
Currently, Azure event grid supports custom topics or domains configured with a
## Use the Azure portal You can use the Azure portal to assign the custom topic or domain identity to an appropriate role so that the custom topic or domain can forward events to the destination.
-The following example adds a managed identity for an event grid custom topic named **msitesttopic** to the **Azure Service Bus Data Sender** role for a Service Bus namespace that contains a queue or topic resource. When you add to the role at the namespace level, the event grid custom topic can forward events to all entities within the namespace.
+The following example adds a managed identity for an Event Grid custom topic named **msitesttopic** to the **Azure Service Bus Data Sender** role for a Service Bus namespace that contains a queue or topic resource. When you add to the role at the namespace level, the Event Grid custom topic can forward events to all entities within the namespace.
1. Go to your **Service Bus namespace** in the [Azure portal](https://portal.azure.com). 1. Select **Access Control** in the left pane.
The following example adds a managed identity for an event grid custom topic nam
The steps are similar for adding an identity to other roles mentioned in the table. ## Use the Azure CLI
-The example in this section shows you how to use the Azure CLI to add an identity to an Azure role. The sample commands are for event grid custom topics. The commands for event grid domains are similar.
+The example in this section shows you how to use the Azure CLI to add an identity to an Azure role. The sample commands are for Event Grid custom topics. The commands for Event Grid domains are similar.
### Get the principal ID for the custom topic's system identity First, get the principal ID of the custom topic's system-managed identity and assign the identity to appropriate roles.
az role assignment create --role "$role" --assignee "$topic_pid" --scope "$event
``` ### Create a role assignment for a Service Bus topic at various scopes
-The following CLI example shows how to add an event grid custom topic's identity to the **Azure Service Bus Data Sender** role at the namespace level or at the Service Bus topic level. If you create the role assignment at the namespace level, the event grid topic can forward events to all entities (Service Bus queues or topics) within that namespace. If you create a role assignment at the Service Bus queue or topic level, the event grid custom topic can forward events only to that specific Service Bus queue or topic.
+The following CLI example shows how to add an Event Grid custom topic's identity to the **Azure Service Bus Data Sender** role at the namespace level or at the Service Bus topic level. If you create the role assignment at the namespace level, the Event Grid topic can forward events to all entities (Service Bus queues or topics) within that namespace. If you create a role assignment at the Service Bus queue or topic level, the Event Grid custom topic can forward events only to that specific Service Bus queue or topic.
```azurecli-interactive role="Azure Service Bus Data Sender"
event-grid Create Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-custom-topic.md
Title: Create an Azure Event Grid topic or a domain description: This article shows how to create an Event Grid topic or domain. Previously updated : 07/21/2022 Last updated : 01/31/2024
If you're new to Azure Event Grid, read through [Event Grid overview](overview.m
An Event Grid topic provides a user-defined endpoint that you post your events to. 1. Sign in to [Azure portal](https://portal.azure.com/).
-2. In the search bar at the top, type **Event Grid Topics**, and then select **Event Grid Topics** from the drop-down list. If you are create a domain, search for **Event Grid Domains**.
+2. In the search bar at the top, type **Event Grid Topics**, and then select **Event Grid Topics** from the drop-down list. To create a domain, search for **Event Grid Domains**.
- :::image type="content" source="./media/custom-event-quickstart-portal/select-topics.png" alt-text="Screenshot showing the Azure port search bar to search for Event Grid topics.":::
+    :::image type="content" source="./media/custom-event-quickstart-portal/select-topics.png" lightbox="./media/custom-event-quickstart-portal/select-topics.png" alt-text="Screenshot showing the Azure portal search bar to search for Event Grid topics.":::
3. On the **Event Grid Topics** or **Event Grid Domains** page, select **+ Create** on the toolbar. :::image type="content" source="./media/custom-event-quickstart-portal/create-topic-button.png" alt-text="Screenshot showing the Create Topic button on Event Grid topics page.":::
On the **Basics** page of **Create Topic** or **Create Event Grid Domain** wizar
1. Select your Azure **subscription**. 2. Select an existing resource group or select **Create new**, and enter a **name** for the **resource group**.
-3. Provide a unique **name** for the custom topic or domain. The name must be unique because it's represented by a DNS entry. Don't use the name shown in the image. Instead, create your own name - it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-".
+3. Provide a unique **name** for the custom topic or domain. The name must be unique because it's represented by a Domain Name System (DNS) entry. Don't use the name shown in the image. Instead, create your own name - it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-".
4. Select a **location** for the Event Grid topic or domain. 1. Select **Next: Networking** at the bottom of the page to switch to the **Networking** page.
On the **Basics** page of **Create Topic** or **Create Event Grid Domain** wizar
## Networking page On the **Networking** page of the **Create Topic** or **Create Event Grid Domain** wizard, follow these steps:
-1. If you want to allow clients to connect to the topic or domain endpoint via a public IP address, keep the **Public access** option selected.
+1. If you want to allow clients to connect to the topic or domain endpoint via a public IP address, keep the **Public access** option selected. You can restrict access to specific IP addresses or an IP address range.
:::image type="content" source="./media/configure-firewall/networking-page-public-access.png" alt-text="Screenshot showing the selection of Public access option on the Networking page of the Create topic wizard."::: 1. To allow access to the topic or domain via a private endpoint, select the **Private access** option.
On the **Security** page of the **Create Topic** or **Create Event Grid Domain*
1. To disable local authentication, select **Disabled**. When you do it, the topic or domain can't be accessed using accesskey and SAS authentication, but only via Microsoft Entra authentication. :::image type="content" source="./media/authenticate-with-microsoft-entra-id/create-topic-disable-local-auth.png" alt-text="Screenshot showing the Advanced tab of Create Topic page when you can disable local authentication.":::
+1. Configure the minimum required Transport Layer Security (TLS) version. For more information, see [Configure minimum TLS version](transport-layer-security-configure-minimum-version.md).
+
+ :::image type="content" source="./media/create-custom-topic/configure-transport-layer-security-version.png" alt-text="Screenshot showing the Advanced tab of Create Topic page when you can select the minimum TLS version.":::
1. Select **Advanced** at the bottom of the page to switch to the **Advanced** page. ## Advanced page
On the **Security** page of the **Create Topic** or **Create Event Grid Domain*
:::image type="content" source="./media/create-custom-topic/data-residency.png" alt-text="Screenshot showing the Data residency section of the Advanced page in the Create Topic wizard.":::
- The **Cross-Geo** option allows Microsoft-initiated failover to the paired region in case of a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require an intervention from user. Microsoft reserves right to make a determination of when this path will be taken. The mechanism doesn't involve a user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](./faq.yml).
+    The **Cross-Geo** option allows Microsoft-initiated failover to the paired region when there's a region failure. For more information, see [Server-side geo disaster recovery in Azure Event Grid](geo-disaster-recovery.md). Microsoft-initiated failover is exercised by Microsoft in rare situations to fail over Event Grid resources from an affected region to the corresponding geo-paired region. This process doesn't require intervention from the user. Microsoft reserves the right to determine when this path is taken. The mechanism doesn't require user consent before the user's topic or domain is failed over. For more information, see [How do I recover from a failover?](./faq.yml).
- If you select the **Regional** option, you may define your own disaster recovery plan.
+ If you select the **Regional** option, you can define your own disaster recovery plan.
3. Select **Next: Tags** to move to the **Tags** page. ## Tags page
-The **Tags** page has no fields that are specific to Event Grid. You can assign a tag (name-value pair) as you do for any other Azure resource. Select **Next: Review + create** to switch to the **Review + create** page.
+The **Tags** page has no fields that are specific to Event Grid. You can assign a tag (name-value pair) as you do for any other Azure resource. Select **Next: Review + create** to switch to the **Review + create** page.
## Review + create page On the **Review + create** page, review all your settings, confirm the validation succeeded, and then select **Create** to create the topic or the domain.
event-grid Custom Event To Eventhub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-eventhub.md
Title: 'Quickstart: Send custom events to Event Hubs - Event Grid, Azure CLI' description: 'Quickstart: Use Azure Event Grid and Azure CLI to publish a topic, and subscribe to that event. An event hub is used for the endpoint.' Previously updated : 11/18/2022 Last updated : 01/31/2024 # Quickstart: Route custom events to Azure Event Hubs with Azure CLI and Event Grid
-[Azure Event Grid](overview.md) is a highly scalable and serverless event broker that you can use to integrate applications using events. Events are delivered by Event Grid to [supported event handlers](event-handlers.md) and Azure Event Hubs is one of them. In this article, you use Azure CLI for the following steps:
+[Azure Event Grid](overview.md) is a highly scalable and serverless event broker that you can use to integrate applications using events. Event Grid delivers events to [supported event handlers](event-handlers.md) and Azure Event Hubs is one of them. In this article, you use Azure CLI for the following steps:
1. Create an Event Grid custom topic. 1. Create an Azure Event Hubs subscription for the custom topic.
az group create --name gridResourceGroup --location westus2
## Create a custom topic
-An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<topic_name>` with a unique name for your custom topic. The Event Grid topic name must be unique because it's represented by a DNS entry.
+An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<topic_name>` with a unique name for your custom topic. The Event Grid topic name must be unique because it's represented by a Domain Name System (DNS) entry.
1. Specify a name for the topic.
endpoint=$(az eventgrid topic show --name $topicname -g gridResourceGroup --quer
key=$(az eventgrid topic key list --name $topicname -g gridResourceGroup --query "key1" --output tsv) ```
-To simplify this article, you use sample event data to send to the custom topic. Typically, an application or Azure service would send the event data. CURL is a utility that sends HTTP requests. In this article, use CURL to send the event to the custom topic. The following example sends three events to the Event Grid topic:
+To simplify this article, you use sample event data to send to the custom topic. Typically, an application or Azure service would send the event data. CURL is a utility that sends HTTP requests. In this article, use CURL to send the event to the custom topic. The following example sends three events to the Event Grid topic:
```azurecli-interactive for i in 1 2 3
do
done ```
-On the **Overview** page for your Event Hubs namespace in the Azure portal, notice that Event Grid sent those three events to the event hub. You'll see the same chart on the **Overview** page for the `demohub` Event Hubs instance page.
+On the **Overview** page for your Event Hubs namespace in the Azure portal, notice that Event Grid sent those three events to the event hub. You see the same chart on the **Overview** page for the `demohub` Event Hubs instance page.
:::image type="content" source="./media/custom-event-to-eventhub/show-result.png" lightbox="./media/custom-event-to-eventhub/show-result.png" alt-text="Image showing the portal page with incoming message count as 3.":::
event-grid Custom Event To Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-queue-storage.md
Title: 'Quickstart: Send custom events to storage queue - Event Grid, Azure CLI' description: 'Quickstart: Use Azure Event Grid and Azure CLI to publish a topic, and subscribe to that event. A storage queue is used for the endpoint.' Previously updated : 12/20/2022 Last updated : 01/31/2024 # Quickstart: Route custom events to Azure Queue storage via Event Grid using Azure CLI
-[Azure Event Grid](overview.md) is a highly scalable and serverless event broker that you can use to integrate applications using events. Events are delivered by Event Grid to [supported event handlers](event-handlers.md) and Azure Queue storage is one of them. In this article, you use Azure CLI for the following steps:
+[Azure Event Grid](overview.md) is a highly scalable and serverless event broker that you can use to integrate applications using events. Event Grid delivers events to [supported event handlers](event-handlers.md) and Azure Queue storage is one of them. In this article, you use Azure CLI for the following steps:
1. Create an Event Grid custom topic. 1. Create an Azure Queue subscription for the custom topic.
az group create --name gridResourceGroup --location westus2
## Create a custom topic
-An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<topic_name>` with a unique name for your custom topic. The Event Grid topic name must be unique because it's represented by a DNS entry.
+An Event Grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<topic_name>` with a unique name for your custom topic. The Event Grid topic name must be unique because it's represented by a Domain Name System (DNS) entry.
1. Specify a name for the topic.
endpoint=$(az eventgrid topic show --name $topicname -g gridResourceGroup --quer
key=$(az eventgrid topic key list --name $topicname -g gridResourceGroup --query "key1" --output tsv) ```
-To simplify this article, you use sample event data to send to the custom topic. Typically, an application or Azure service would send the event data. CURL is a utility that sends HTTP requests. In this article, you use CURL to send the event to the custom topic. The following example sends three events to the Event Grid topic:
+To simplify this article, you use sample event data to send to the custom topic. Typically, an application or Azure service would send the event data. CURL is a utility that sends HTTP requests. In this article, you use CURL to send the event to the custom topic. The following example sends three events to the Event Grid topic:
```azurecli-interactive for i in 1 2 3
event-grid Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/network-security.md
Title: Network security for Azure Event Grid resources
description: This article describes how to use service tags for egress, IP firewall rules for ingress, and private endpoints for ingress with Azure Event Grid. Previously updated : 11/17/2022 Last updated : 01/31/2024
You can use service tags to define network access controls on [network security
| | -- |::|::|::| | AzureEventGrid | Azure Event Grid. | Both | No | No | - ## IP firewall Azure Event Grid supports IP-based access controls for publishing to topics and domains. With IP-based controls, you can limit the publishers to a topic or domain to only a set of approved set of machines and cloud services. This feature complements the [authentication mechanisms](security-authentication.md) supported by Event Grid.
-By default, topic and domain are accessible from the internet as long as the request comes with valid authentication and authorization. With IP firewall, you can restrict it further to only a set of IP addresses or IP address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation. Publishers originating from any other IP address will be rejected and will receive a 403 (Forbidden) response.
+By default, topic and domain are accessible from the internet as long as the request comes with valid authentication and authorization. With IP firewall, you can restrict it further to only a set of IP addresses or IP address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation. Publishers originating from any other IP address are rejected and receive a 403 (Forbidden) response.
For step-by-step instructions to configure IP firewall for topics and domains, see [Configure IP firewall](configure-firewall.md). -- ## Private endpoints
-You can use [private endpoints](../private-link/private-endpoint-overview.md) to allow ingress of events directly from your virtual network to your topics and domains securely over a [private link](../private-link/private-link-overview.md) without going through the public internet. A private endpoint is a special network interface for an Azure service in your VNet. When you create a private endpoint for your topic or domain, it provides secure connectivity between clients on your VNet and your Event Grid resource. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the Event Grid service uses a secure private link.
+You can use [private endpoints](../private-link/private-endpoint-overview.md) to allow ingress of events directly from your virtual network to your topics and domains securely over a [private link](../private-link/private-link-overview.md) without going through the public internet. A private endpoint is a special network interface for an Azure service in your virtual network. When you create a private endpoint for your topic or domain, it provides secure connectivity between clients on your virtual network and your Event Grid resource. The private endpoint is assigned an IP address from the IP address range of your virtual network. The connection between the private endpoint and the Event Grid service uses a secure private link.
-![Architecture diagram](./media/network-security/architecture-diagram.png)
Using private endpoints for your Event Grid resource enables you to: -- Secure access to your topic or domain from a VNet over the Microsoft backbone network as opposed to the public internet.-- Securely connect from on-premises networks that connect to the VNet using VPN or Express Routes with private-peering.
+- Secure access to your topic or domain from a virtual network over the Microsoft backbone network as opposed to the public internet.
+- Securely connect from on-premises networks that connect to the virtual network using VPN or Express Routes with private-peering.
-When you create a private endpoint for a topic or domain in your VNet, a consent request is sent for approval to the resource owner. If the user requesting the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved. Otherwise, the connection is in **pending** state until approved. Applications in the VNet can connect to the Event Grid service over the private endpoint seamlessly, using the same connection strings and authorization mechanisms that they would use otherwise. Resource owners can manage consent requests and the private endpoints, through the **Private endpoints** tab for the resource in the Azure portal.
+When you create a private endpoint for a topic or domain in your virtual network, a consent request is sent for approval to the resource owner. If the user requesting the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved. Otherwise, the connection is in **pending** state until approved. Applications in the virtual network can connect to the Event Grid service over the private endpoint seamlessly, using the same connection strings and authorization mechanisms that they would use otherwise. Resource owners can manage consent requests and the private endpoints, through the **Private endpoints** tab for the resource in the Azure portal.
### Connect to private endpoints
-Publishers on a VNet using the private endpoint should use the same connection string for the topic or domain as clients connecting to the public endpoint. DNS resolution automatically routes connections from the VNet to the topic or domain over a private link. Event Grid creates a [private DNS zone](../dns/private-dns-overview.md) attached to the VNet with the necessary update for the private endpoints, by default. However, if you're using your own DNS server, you may need to make additional changes to your DNS configuration.
+Publishers on a virtual network using the private endpoint should use the same connection string for the topic or domain as clients connecting to the public endpoint. Domain Name System (DNS) resolution automatically routes connections from the virtual network to the topic or domain over a private link. Event Grid creates a [private DNS zone](../dns/private-dns-overview.md) attached to the virtual network with the necessary update for the private endpoints, by default. However, if you're using your own DNS server, you might need to make more changes to your DNS configuration.
### DNS changes for private endpoints When you create a private endpoint, the DNS CNAME record for the resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, a private DNS zone is created that corresponds to the private link's subdomain.
-When you resolve the topic or domain endpoint URL from outside the VNet with the private endpoint, it resolves to the public endpoint of the service. The DNS resource records for 'topicA', when resolved from **outside the VNet** hosting the private endpoint, will be:
+When you resolve the topic or domain endpoint URL from outside the virtual network with the private endpoint, it resolves to the public endpoint of the service. The DNS resource records for 'topicA', when resolved from **outside the VNet** hosting the private endpoint, are:
| Name | Type | Value | | | -| | | `topicA.westus.eventgrid.azure.net` | CNAME | `topicA.westus.privatelink.eventgrid.azure.net` | | `topicA.westus.privatelink.eventgrid.azure.net` | CNAME | \<Azure traffic manager profile\>
-You can deny or control access for a client outside the VNet through the public endpoint using the [IP firewall](#ip-firewall).
+You can deny or control access for a client outside the virtual network through the public endpoint using the [IP firewall](#ip-firewall).
-When resolved from the VNet hosting the private endpoint, the topic or domain endpoint URL resolves to the private endpoint's IP address. The DNS resource records for the topic 'topicA', when resolved from **inside the VNet** hosting the private endpoint, will be:
+When resolved from the virtual network hosting the private endpoint, the topic or domain endpoint URL resolves to the private endpoint's IP address. The DNS resource records for the topic 'topicA', when resolved from **inside the VNet** hosting the private endpoint, are:
| Name | Type | Value | | | -| | | `topicA.westus.eventgrid.azure.net` | CNAME | `topicA.westus.privatelink.eventgrid.azure.net` | | `topicA.westus.privatelink.eventgrid.azure.net` | A | 10.0.0.5
-This approach enables access to the topic or domain using the same connection string for clients on the VNet hosting the private endpoints, and clients outside the VNet.
+This approach enables access to the topic or domain using the same connection string for clients on the virtual network hosting the private endpoints, and clients outside the virtual network.
-If you're using a custom DNS server on your network, clients can resolve the FQDN for the topic or domain endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet, or configure the A records for `topicOrDomainName.regionName.privatelink.eventgrid.azure.net` with the private endpoint IP address.
+If you're using a custom DNS server on your network, clients can resolve the fully qualified domain name (FQDN) for the topic or domain endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the virtual network, or configure the A records for `topicOrDomainName.regionName.privatelink.eventgrid.azure.net` with the private endpoint IP address.
The recommended DNS zone name is `privatelink.eventgrid.azure.net`.
You can configure IP firewall for your Event Grid resource to restrict access ov
You can configure private endpoints to restrict access from only from selected virtual networks. For step-by-step instructions, see [Configure private endpoints](configure-private-endpoints.md).
-To troubleshoot network connectivity issues, see [Troubleshoot network connectivity issues](troubleshoot-network-connectivity.md)
+To troubleshoot network connectivity issues, see [Troubleshoot network connectivity issues](troubleshoot-network-connectivity.md).
event-grid Partner Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview.md
Title: Partner Events overview for customers description: Send or receive from a SaaS or ERP system directly to/from Azure services with Azure Event Grid. Previously updated : 12/14/2022 Last updated : 01/31/2024 # Partner Events overview for customers - Azure Event Grid
-Azure Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems integrate with Event Grid are known as "partners".
+Azure Event Grid's **Partner Events** allows customers to **subscribe to events** that originate in a registered system using the same mechanism they would use for any other event source on Azure, such as an Azure service. Those registered systems that integrate with Event Grid are known as partners.
This feature also enables customers to **send events** to partner systems that support receiving and routing events to customer's solutions/endpoints in their platform. Typically, partners are software-as-a-service (SaaS) or [ERP](https://en.wikipedia.org/wiki/Enterprise_resource_planning) providers, but they might be corporate platforms wishing to make their events available to internal teams.
You receive events from a partner in a [partner topic](concepts.md#partner-topic
1. [Authorize partner to create a partner topic](subscribe-to-partner-events.md#authorize-partner-to-create-a-partner-topic) in a resource group you designate. Authorizations are stored in partner configurations (Azure resources). 2. [Request partner to forward your events](subscribe-to-partner-events.md#request-partner-to-enable-events-flow-to-a-partner-topic) from its service to your partner topic. **Partner provisions a partner topic** in the specified resource group of your Azure subscription. 3. After the partner creates a partner topic in your Azure subscription and resource group, [activate](subscribe-to-partner-events.md#activate-a-partner-topic) your partner topic.
-4. [Subscribe to events](subscribe-to-partner-events.md#subscribe-to-events) by creating one or more event subscriptions on the partner topic.
+4. [Subscribe to events](subscribe-to-partner-events.md#subscribe-to-events) by creating one or more event subscriptions for the partner topic.
:::image type="content" source="./media/partner-events-overview/receive-events-from-partner.png" alt-text="Diagram showing the steps to receive events from a partner.":::
You receive events from a partner in a [partner topic](concepts.md#partner-topic
## Why should I use Partner Events?
-You may want to use the Partner Events feature if you've one or more of the following requirements.
+Use the Partner Events feature if you have one or more of the following requirements.
- You want to subscribe to events that originate in a [partner](#available-partners) system and route them to event handlers on Azure or to any application or service with a public endpoint. - You want to take advantage of the rich set Event Grid's [destinations/event handlers](overview.md#event-handlers) that react to events from partners.-- You want to forward events raised by your custom application on Azure, an Azure service, or a Microsoft service to your application or service hosted by the [partner](#available-partners) system. For example, you may want to send Microsoft Entra ID, Teams, SharePoint, or Azure Storage events to a partner system on which you're a tenant for processing.
+- You want to forward events raised by your custom application on Azure, an Azure service, or a Microsoft service to your application or service hosted by the [partner](#available-partners) system. For example, you want to send Microsoft Entra ID, Teams, SharePoint, or Azure Storage events to a partner system on which you're a tenant for processing.
- You need a resilient push delivery mechanism with send-retry support and at-least-once semantics. - You want to use [Cloud Events 1.0](https://cloudevents.io/) schema for your events.
You manage the following types of resources.
## Grant authorization to create partner topics and destinations
-You must authorize partners to create partner topics before they attempt to create those resources. If you don't grant your authorization, the partners' attempt to create the partner resource will fail.
+You must authorize partners to create partner topics before they attempt to create those resources. If you don't grant your authorization, the partners' attempt to create the partner resource fails.
You give the partner consent to create partner topics by creating a **partner configuration** resource. You add a partner authorization to a partner configuration that identifies the partner and provides an authorization expiration time by which a partner topic/destination must be created. The only type of resource that partners can create with your permission is a partner topic.
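A partner configuration can also be created from the Azure CLI. The following is a sketch only; the `--authorized-partner` syntax and the expiration parameter are assumptions, so check `az eventgrid partner configuration create --help` for the exact parameter names in your CLI version.

```azurecli
# Sketch: authorize a partner (identified by name) to create partner topics in this resource group.
# Parameter names are assumed; verify with `az eventgrid partner configuration create --help`.
az eventgrid partner configuration create \
  --resource-group <resource-group> \
  --authorized-partner partner-name=<partner-name> \
  --default-maximum-expiration-time-in-days 7
```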
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
While FastPath supports most configurations, it doesn't support the following fe
* Basic Load Balancer: If you deploy a Basic internal load balancer in your virtual network or the Azure PaaS service you deploy in your virtual network uses a Basic internal load balancer, the network traffic from your on-premises network to the virtual IPs hosted on the Basic load balancer is sent to the virtual network gateway. The solution is to upgrade the Basic load balancer to a [Standard load balancer](../load-balancer/load-balancer-overview.md).
-* Private Link: FastPath Connectivity to a private endpoint or Private Link service over an ExpressRoute Direct circuit is supported for limited scenarios. For more information, see [enable FastPath and Private Link for 100 Gbps ExpressRoute Direct](expressroute-howto-linkvnet-arm.md#fastpath-and-private-link-for-100-gbps-expressroute-direct). FastPath connectivity to a Private endpoint/Private Link service is not supported for ExpressRoute partner circuits.
+* Private Link: FastPath Connectivity to a private endpoint or Private Link service over an ExpressRoute Direct circuit is supported for limited scenarios. For more information, see [enable FastPath and Private Link for 100 Gbps ExpressRoute Direct](expressroute-howto-linkvnet-arm.md#fastpath-virtual-network-peering-user-defined-routes-udrs-and-private-link-support-for-expressroute-direct-connections). FastPath connectivity to a Private endpoint/Private Link service is not supported for ExpressRoute partner circuits.
* DNS Private Resolver: Azure ExpressRoute FastPath does not support connectivity to [DNS Private Resolver](../dns/dns-private-resolver-overview.md).
While FastPath supports most configurations, it doesn't support the following fe
> * ExpressRoute Direct has a cumulative limit at the port level. > * Traffic flows through the ExpressRoute gateway when these limits are reached.
-## Public preview
-
-The following FastPath features are in Public preview:
-
-### Virtual network (VNet) Peering
-
-FastPath sends traffic directly to any VM deployed in a virtual network peered to the one connected to ExpressRoute, bypassing the ExpressRoute virtual network gateway. This feature is available for both IPv4 and IPv6 connectivity.
-
-**FastPath support for VNet peering is only available for ExpressRoute Direct connections.**
-
-> [!NOTE]
-> * FastPath VNet peering connectivity is not supported for Azure Dedicated Host workloads.
-
-### User Defined Routes (UDRs)
-
-FastPath honors UDRs configured on the GatewaySubnet and send traffic directly to an Azure Firewall or third party NVA.
-
-**FastPath support for UDRs is only available for ExpressRoute Direct connections**
-
-> [!NOTE]
-> * FastPath UDR connectivity is not supported for Azure Dedicated Host workloads.
-> * FastPath UDR connectivity is not supported for IPv6 workloads.
-
-To enroll in the Public preview, send an email to **exrpm@microsoft.com** with the following information:
-- Azure subscription ID-- Virtual Network(s) Azure Resource ID(s)-- ExpressRoute Circuit(s) Azure Resource ID(s)-- ExpressRoute Connection(s) Azure Resource ID(s)-- Number of Virtual Network peering connections-- Number of UDRs configured in the hub Virtual Network-- ## Limited General Availability (GA)
-FastPath Private Endpoint/Private Link support for 100Gbps and 10Gbps ExpressRoute Direct connections is available for limited scenarios in the following Azure regions:
+FastPath support for Virtual Network Peering, User Defined Routes (UDRs), and Private Endpoint/Private Link connectivity is available in limited scenarios for 100-Gbps and 10-Gbps ExpressRoute Direct connections in the following Azure regions:
- Australia East - East Asia - East US
FastPath Private endpoint/Private Link connectivity is supported for the followi
> * Private Link pricing will not apply to traffic sent over ExpressRoute FastPath. For more information about pricing, check out the [Private Link pricing page](https://azure.microsoft.com/pricing/details/private-link/). > * FastPath supports a max of 100Gbps connectivity to a single Availability Zone (Az).
-For more information about supported scenarios and to enroll in the limited GA offering, send an email to **exrpm@microsoft.com** with the following information:
-- Azure subscription ID-- Virtual Network(s) Azure Resource ID(s)-- ExpressRoute Circuit(s) Azure Resource ID(s)-- ExpressRoute Virtual Network Gateway Connection(s) Azure Resource ID(s)-- Number of Private Endpoints/Private Link services deployed to the Virtual Network-
+For more information about supported scenarios and to enroll in the limited GA offering, complete this [Microsoft Form](https://aka.ms/FastPathLimitedGA).
## Next steps - To enable FastPath, see [Configure ExpressRoute FastPath](expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath).
expressroute Expressroute Howto Circuit Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-circuit-portal-resource-manager.md
zone_pivot_groups: expressroute-experience
This quickstart shows you how to create an ExpressRoute circuit using the Azure portal and the Azure Resource Manager deployment model. You can also check the status, update, delete, or deprovision a circuit.
+There are currently two create experiences for ExpressRoute circuits in the portal. The new preview create experience is available through this [Preview link](https://aka.ms/expressrouteguidedportal). The current create experience is available through the [Azure portal](https://portal.azure.com). For guidance on how to create an ExpressRoute circuit with the preview create experience, select the **Preview** tab at the top of the page.
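If you'd rather script the circuit than use either portal experience, the Azure CLI equivalent looks roughly like the following sketch; the provider, peering location, bandwidth, and SKU values are placeholders you'd replace with your own.

```azurecli
# Sketch: create an ExpressRoute circuit with placeholder provider and peering values.
az network express-route create \
  --name MyCircuit \
  --resource-group MyResourceGroup \
  --location westus2 \
  --bandwidth 200 \
  --provider "Equinix" \
  --peering-location "Silicon Valley" \
  --sku-tier Standard \
  --sku-family MeteredData
```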
+ :::image type="content" source="media/expressroute-howto-circuit-portal-resource-manager/environment-diagram.png" alt-text="Diagram of ExpressRoute circuit deployment environment using Azure portal."::: ## Prerequisites
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
$connection = Get-AzVirtualNetworkGatewayConnection -Name "MyConnection" -Resour
$connection.ExpressRouteGatewayBypass = $True Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection ```
-### FastPath and Private Link for 100-Gbps ExpressRoute Direct
+### FastPath Virtual Network Peering, User-Defined-Routes (UDRs) and Private Link support for ExpressRoute Direct Connections
-With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This is Generally Available for connections associated to 100-Gb ExpressRoute Direct circuits. To enable, follow the below guidance:
-1. Send an email to **ExRPM@microsoft.com**, providing the following information:
-* Azure Subscription ID
-* Virtual Network (virtual network) Resource ID
-* Azure Region where the Private Endpoint/Private Link service is deployed
-* Virtual Network Connection Resource ID
-* Number of Private Endpoints/Private Link services deployed to the virtual network
-* Target bandwidth to the Private Endpoints/Private Link services
+With Virtual Network Peering and UDR support, FastPath sends traffic directly to VMs deployed in "spoke" Virtual Networks (connected via Virtual Network Peering) and honors any UDRs configured on the GatewaySubnet. With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. With both of these features enabled, FastPath sends traffic directly to a Private Endpoint deployed in a "spoke" Virtual Network.
+These features are generally available in limited scenarios for connections associated with 100-Gbps ExpressRoute Direct circuits. To enable them, follow this guidance:
+1. Complete this [Microsoft Form](https://aka.ms/fastpathlimitedga) to request to enroll your subscription.
2. Once you receive a confirmation from Step 1, run the following Azure PowerShell command in the target Azure subscription. ```azurepowershell-interactive $connection = Get-AzVirtualNetworkGatewayConnection -ResourceGroupName <resource-group> -ResourceName <connection-name>
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connecti
> [!NOTE] > You can use [Connection Monitor](how-to-configure-connection-monitor.md) to verify that your traffic is reaching the destination using FastPath.
->
> [!NOTE] > Enabling FastPath Private Link support for limited GA scenarios may take upwards of 2 weeks to complete. Please plan your deployment(s) in advance. >
-## Enroll in ExpressRoute FastPath features (preview)
-
-### FastPath virtual network peering and user defined routes (UDRs).
-
-With FastPath and virtual network peering, you can enable ExpressRoute connectivity directly to VMs in a local or peered virtual network, bypassing the ExpressRoute virtual network gateway in the data path.
-
-With FastPath and UDR, you can configure a UDR on the GatewaySubnet to direct ExpressRoute traffic to an Azure Firewall or third party NVA. FastPath honors the UDR and send traffic directly to the target Azure Firewall or NVA, bypassing the ExpressRoute virtual network gateway in the data path.
-
-To enroll in the preview, send an email to **exrpm@microsoft.com**, providing the following information:
-* Azure Subscription ID
-* Virtual Network (virtual network) Resource ID
-* ExpressRoute Circuit Resource ID
-* ExpressRoute Connection(s) Resource ID(s)
-* Number of Private Endpoints deployed to the local/Hub virtual network.
-* Resource ID of any User-Defined-Routes (UDRs) configured in the local/Hub virtual network.
- **FastPath support for virtual network peering and UDRs is only available for ExpressRoute Direct connections**.
-### FastPath and Private Link for 10-Gbps ExpressRoute Direct
-
-With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This preview supports connections associated to 10-Gbps ExpressRoute Direct circuits. This preview doesn't support ExpressRoute circuits managed by an ExpressRoute partner.
-
-To enroll in this preview, run the following Azure PowerShell command in the target Azure subscription:
-
-```azurepowershell-interactive
-Register-AzProviderFeature -FeatureName ExpressRoutePrivateEndpointGatewayBypass -ProviderNamespace Microsoft.Network
-```
- > [!NOTE] > Any connections configured for FastPath in the target subscription will be enrolled in the selected preview. We do not advise enabling these previews in production subscriptions. > If you already have FastPath configured and want to enroll in the preview feature, you need to do the following:
expressroute Expressroute Howto Linkvnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-cli.md
az network vpn-connection update --name ERConnection --resource-group ExpressRou
## Enroll in ExpressRoute FastPath features (preview)
-FastPath support for virtual network peering is now in Public preview. Enrollment is only available through Azure PowerShell. See [FastPath preview features](expressroute-howto-linkvnet-arm.md#enroll-in-expressroute-fastpath-features-preview), for instructions on how to enroll.
+FastPath support for virtual network peering is now in Public preview. Enrollment is only available through Azure PowerShell. For instructions on how to enroll, see [FastPath preview features](expressroute-howto-linkvnet-arm.md#fastpath-virtual-network-peering-user-defined-routes-udrs-and-private-link-support-for-expressroute-direct-connections).
> [!NOTE] > Any connections configured for FastPath in the target subscription will be enrolled in this preview. We do not advise enabling this preview in production subscriptions.
expressroute Expressroute Howto Linkvnet Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md
When adding a new connection for your ExpressRoute gateway, select the checkbox
## Enroll in ExpressRoute FastPath features (preview)
-FastPath support for virtual network peering is now in Public preview. Enrollment is only available through Azure PowerShell. See [FastPath preview features](expressroute-howto-linkvnet-arm.md#enroll-in-expressroute-fastpath-features-preview), for instructions on how to enroll.
+FastPath support for virtual network peering is now in Public preview. Enrollment is only available through Azure PowerShell. For instructions on how to enroll, see [FastPath preview features](expressroute-howto-linkvnet-arm.md#fastpath-virtual-network-peering-user-defined-routes-udrs-and-private-link-support-for-expressroute-direct-connections).
> [!NOTE] > Any connections configured for FastPath in the target subscription will be enrolled in this preview. We do not advise enabling this preview in production subscriptions.
firewall Firewall Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-performance.md
Title: Azure Firewall performance
-description: Compare Azure Firewall performance for Azure Firewall Standard and Premium
+description: Compare Azure Firewall performance for Azure Firewall Basic, Standard, and Premium.
Previously updated : 11/29/2022 Last updated : 01/30/2024 # Azure Firewall performance
-Reliable firewall performance is essential to operate and protect your virtual networks in Azure. More advanced features (like those found in Azure Firewall Premium) require more processing complexity. This will affect firewall performance and impact the overall network performance.
+Reliable firewall performance is essential to operate and protect your virtual networks in Azure. More advanced features (like those found in Azure Firewall Premium) require more processing complexity, and affect firewall performance and overall network performance.
-Azure Firewall has two versions: Standard and Premium.
+Azure Firewall has three versions: Basic, Standard, and Premium.
+
+- Azure Firewall Basic
+
+ Azure Firewall Basic is intended for small and medium-sized business (SMB) customers to secure their Azure cloud environments. It provides the essential protection SMB customers need at an affordable price point.
- Azure Firewall Standard
- Azure Firewall Standard has been generally available since September 2018. It's cloud native, highly available, with built-in auto scaling firewall-as-a-service. You can centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level-filtering rules, and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains.
+ Azure Firewall Standard became generally available in September 2018. It's cloud native, highly available, with built-in auto scaling firewall-as-a-service. You can centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level-filtering rules, and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains.
- Azure Firewall Premium Azure Firewall Premium is a next generation firewall. It has capabilities that are required for highly sensitive and regulated environments. The features that might affect the performance of the Firewall are TLS (Transport Layer Security) inspection and IDPS (Intrusion Detection and Prevention).
For more information about Azure Firewall, see [What is Azure Firewall?](overvie
## Performance testing
-Before you deploy Azure Firewall, the performance needs to be tested and evaluated to ensure it meets your expectations. Not only should Azure Firewall handle the current traffic on a network, but it should also be ready for potential traffic growth. It's recommended to evaluate on a test network and not in a production environment. The testing should attempt to replicate the production environment as close as possible. This includes the network topology, and emulating the actual characteristics of the expected traffic through the firewall.
+Before you deploy Azure Firewall, test and evaluate its performance to ensure it meets your expectations. Not only should Azure Firewall handle the current traffic on a network, but it should also be ready for potential traffic growth. You should evaluate on a test network and not in a production environment. The testing should attempt to replicate the production environment as closely as possible. You should account for the network topology and emulate the actual characteristics of the expected traffic through the firewall.
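One way to observe the firewall during a test run is to pull its platform metrics with the Azure CLI. This is a sketch; the resource ID is a placeholder and the metric name `Throughput` is assumed, so confirm the available metric names in the portal's Metrics blade for your firewall.

```azurecli
# Sketch: average firewall throughput in 5-minute buckets during a performance test.
# The resource ID is a placeholder; the metric name "Throughput" is assumed.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/azureFirewalls/<firewall-name>" \
  --metric Throughput \
  --interval PT5M \
  --aggregation Average \
  --output table
```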
## Performance data
The following set of performance results demonstrates the maximal Azure Firewall
|Firewall type and use case |TCP/UDP bandwidth (Gbps) |HTTP/S bandwidth (Gbps) | ||||
+|Basic|0.25|0.25|
|Standard |30|30| |Premium (no TLS/IDPS) |100|100| |Premium with TLS (no IDS/IPS) |-|100|
The following set of performance results demonstrates the maximal Azure Firewall
|Firewall use case |Throughput (Gbps)| |||
+|Basic|up to 250 Mbps|
|Standard<br>Max bandwidth for single TCP connection |up to 1.5| |Premium<br>Max bandwidth for single TCP connection |up to 9| |Premium single TCP connection with IDPS on *Alert and Deny* mode|up to 300 Mbps| ### Total throughput for initial firewall deployment
-The following throughput numbers are for an Azure Firewall deployment before auto-scale (out of the box deployment). Azure Firewall gradually scales out when the average throughput and CPU consumption is at 60% or if the number of connections usage is at 80%. Scale out takes five to seven minutes. Azure Firewall gradually scales in when the average throughput, CPU consumption, or number of connections is below 20%.
+The following throughput numbers are for Azure Firewall Standard and Premium deployments before autoscale (out of the box deployment). Azure Firewall gradually scales out when the average throughput and CPU consumption is at 60% or if connection usage is at 80%. Scale out takes five to seven minutes. Azure Firewall gradually scales in when the average throughput, CPU consumption, or number of connections is below 20%.
-When performance testing, ensure you test for at least 10 to 15 minutes, and start new connections to take advantage of newly created firewall nodes.
+When performance testing, make sure you test for at least 10 to 15 minutes, and start new connections to take advantage of newly created firewall nodes.
|Firewall use case |Throughput (Gbps)|
When performance testing, ensure you test for at least 10 to 15 minutes, and sta
|Standard<br>Max bandwidth |up to 3 | |Premium<br>Max bandwidth |up to 18| -
-Actual performance may vary depending on your rule complexity and network configuration. These metrics are updated periodically as performance continuously evolves with each release.
+> [!NOTE]
+> Azure Firewall Basic doesn't autoscale.
## Next steps
firewall Snat Private Range https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/snat-private-range.md
You can use the Azure portal to specify private IP address ranges for the firewa
You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. These learned address ranges are considered to be internal to the network, so traffic to destinations in the learned ranges isn't SNATed. Auto-learn SNAT ranges requires Azure Route Server to be deployed in the same VNet as the Azure Firewall. The firewall must be associated with the Azure Route Server and configured to auto-learn SNAT ranges in the Azure Firewall Policy. You can currently use an ARM template, Azure PowerShell, or the Azure portal to configure auto-learn SNAT routes.
+> [!NOTE]
+> Auto-learn SNAT routes is available only on VNet deployments (hub virtual network). It isn't available on VWAN deployments (secured virtual hub). For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md)
+ ### Configure using an ARM template You can use the following JSON to configure auto-learn. Azure Firewall must be associated with an Azure Route Server.
governance Assign Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-bicep.md
Title: Create a policy assignment with Bicep file
description: In this quickstart, you use a Bicep file to create an Azure policy assignment that identifies non-compliant resources. Last updated 01/08/2024 -+ # Quickstart: Create a policy assignment to identify non-compliant resources by using a Bicep file
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 01/22/2024 Last updated : 01/30/2024
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 01/22/2024 Last updated : 01/30/2024
The name of each built-in links to the policy definition in the Azure portal. Us
[!INCLUDE [azure-policy-reference-policies-security-center](../../../../includes/policy/reference/bycat/policies-security-center.md)]
+## Security Center - Granular Pricing
++ ## Service Bus [!INCLUDE [azure-policy-reference-policies-service-bus](../../../../includes/policy/reference/bycat/policies-service-bus.md)]
guides Azure Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/developer/azure-developer-guide.md
Azure Spring Apps is a serverless app platform that enables you to build, deploy
> **When to use:** As a fully managed service Azure Spring Apps is a good choice when you're minimizing operational cost running Spring Boot and Spring Cloud apps on Azure. >
-> **Get started:** [Deploy your first Spring Boot app in Azure Spring Apps](../../spring-apps/quickstart.md).
+> **Get started:** [Deploy your first Spring Boot app in Azure Spring Apps](../../spring-apps/enterprise/quickstart.md).
### Enhance your applications with Azure services
hdinsight-aks Secure Traffic By Nsg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-nsg.md
When clusters are created, then certain ingress public IPs also get created. To
The following Azure CLI command can help you get the ingress public IP: ```
-aksManagedResourceGroup=`az rest --uri https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HDInsight/clusterpools/{clusterPoolName}\?api-version\=2023-06-01-preview --query properties.managedResourceGroupName -o tsv --query properties.aksManagedResourceGroupName -o tsv`
+aksManagedResourceGroup=$(az rest --uri https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HDInsight/clusterpools/{clusterPoolName}\?api-version\=2023-06-01-preview --query properties.aksManagedResourceGroupName -o tsv)
az network public-ip list --resource-group $aksManagedResourceGroup --query "[?starts_with(name, 'kubernetes')].{Name:name, IngressPublicIP:ipAddress}" --output table ```
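Once you have the ingress public IP, a typical follow-up is to reference it in an NSG rule. The following is a sketch with placeholder names and an assumed priority and port; adjust it to the traffic you actually need to allow.

```azurecli
# Sketch: allow HTTPS to the cluster's ingress public IP from your network (values are placeholders).
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name AllowClusterIngressHttps \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes <your-address-range> \
  --destination-address-prefixes <ingress-public-ip> \
  --destination-port-ranges 443
```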
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
The Azure API for FHIR supports the following query parameters. All of these par
| \_typefilter | Yes | To request finer-grained filtering, you can use \_typefilter along with the \_type parameter. The value of the _typeFilter parameter is a comma-separated list of FHIR queries that further restrict the results | | \_container | No | Specifies the container within the configured storage account where the data should be exported. If a container is specified, the data will be exported into a folder into that container. If the container isnΓÇÖt specified, the data will be exported to a new container. | | \_till | No | Allows you to only export resources that have been modified till the time provided. This parameter is applicable to only System-Level export. In this case, if historical versions have not been disabled or purged, export guarantees true snapshot view, or, in other words, enables time travel. |
-|\includeAssociatedData | No | Allows you to export history and soft deleted resources. This filter doesn't work with '_typeFilter' query parameter. Include value as '_history' to export history/ non latest versioned resources. Include value as '_deleted' to export soft deleted resources. |
+|includeAssociatedData | No | Allows you to export history and soft deleted resources. This filter doesn't work with the '_typeFilter' query parameter. Include the value '_history' to export history (non-latest versioned) resources. Include the value '_deleted' to export soft deleted resources. |
> [!NOTE] > Only storage accounts in the same subscription as that for Azure API for FHIR are allowed to be registered as the destination for $export operations.
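For illustration, a system-level `$export` call that includes history and soft deleted resources can be issued with `az rest`. This is a sketch only; the service URL and container name are placeholders, and the header set reflects the usual asynchronous export pattern.

```azurecli
# Sketch: start a system-level $export that includes history and soft deleted resources.
# The service URL and container name are placeholders.
az rest --method get \
  --url "https://<your-fhir-service>.azurehealthcareapis.com/\$export?_container=exports&includeAssociatedData=_history,_deleted" \
  --resource "https://<your-fhir-service>.azurehealthcareapis.com" \
  --headers "Accept=application/fhir+json" "Prefer=respond-async"
```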
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
healthcare-apis Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md
Title: What are events? - Azure Health Data Services
-description: Learn about events, its features, integrations, and next steps.
+ Title: What are events in Azure Health Data Services?
+description: Learn how to use events in Azure Health Data Services to subscribe to and receive notifications of changes to health data in the FHIR and DICOM services, and trigger other actions or services based on health data changes.
-+ Previously updated : 09/01/2023 Last updated : 01/29/2024 # What are events?
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
+**Applies to:** [!INCLUDE [Yes icon](../includes/applies-to.md)][!INCLUDE [FHIR service](../includes/fhir-service.md)], [!INCLUDE [DICOM service](../includes/DICOM-service.md)]
-Events are a subscription and notification feature in the Azure Health Data Services. Events enable customers to utilize and enhance the analysis and workflows of structured and unstructured data like vitals and clinical or progress notes, operations data, health data, and medical imaging data.
+Events in Azure Health Data Services allow you to subscribe to and receive notifications of changes to health data in the FHIR&reg; service or the DICOM&reg; service. Events also enable you to trigger other actions or services based on changes to health data, such as starting workflows or sending email, text messages, or alerts.
-When FHIR resource changes or Digital Imaging and Communications in Medicine (DICOM) image changes are successfully written to the Azure Health Data Services, the events feature sends notification messages to events subscribers. These event notification occurrences can be sent to multiple endpoints to trigger automation ranging from starting workflows to sending email and text messages to support the changes occurring from the health data it originated from. The events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services workspace.
+Events are:
-> [!IMPORTANT]
-> FHIR resource and DICOM image change data is only written and event messages are sent when the events feature is turned on. The event feature doesn't send messages on past resource changes or when the feature is turned off.
+- **Scalable**. Events support growth and change in an organization's healthcare data needs by using the [Azure Event Grid service](../../event-grid/overview.md) and creating a [system topic](../../event-grid/system-topics.md) for Azure Health Data Services. For more information, see [Azure Event Grid event schema](../../event-grid/event-schema.md) and [Azure Health Data Services as an Event Grid source](../../event-grid/event-schema-azure-health-data-services.md).
-> [!TIP]
-> For more information about the features, configurations, and to learn about the use cases of the Azure Event Grid service, see [Azure Event Grid](../../event-grid/overview.md)
+- **Configurable**. Choose which FHIR and DICOM event types trigger event notifications. Use advanced features built into the Azure Event Grid service, such as filters, dead-lettering, and retry policies to tune message delivery options for events.
-
-> [!IMPORTANT]
-> Events currently supports the following operations:
->
-> * **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully.
->
-> * **FhirResourceUpdated** - The event emitted after a FHIR resource gets updated successfully.
->
-> * **FhirResourceDeleted** - The event emitted after a FHIR resource gets soft deleted successfully.
->
-> * **DicomImageCreated** - The event emitted after a DICOM image gets created successfully.
->
-> * **DicomImageDeleted** - The event emitted after a DICOM image gets deleted successfully.
->
-> - **DicomImageUpdated** - The event emitted after a DICOM image gets updated successfully.
->
-> For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md).
-
-## Scalable
-
-Events are designed to support growth and changes in healthcare technology needs by using the [Azure Event Grid service](../../event-grid/overview.md) and creating a system topic for the Azure Health Data Services Workspace.
-
-## Configurable
+- **Extensible**. Use events to send FHIR resource and DICOM image change messages to [Azure Event Hubs](../../event-hubs/event-hubs-about.md) or [Azure Functions](../../azure-functions/functions-overview.md) to trigger downstream automated workflows that enhance operational data, data analysis, and visibility into incoming data in near real time.
+
+- **Secure**. Events are built on a platform that supports protected health information (PHI) compliance with privacy, safety, and security standards. Use [Azure managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from the Event Grid system topic to the events message-receiving endpoints of your choice.
-Choose the FHIR and DICOM event types that you want to receive messages about. Use the advanced features like filters, dead-lettering, and retry policies to tune events message delivery options.
-> [!NOTE]
-> The advanced features come as part of the Event Grid service.
+## Supported operations
-## Extensible
+Events support these operations:
-Use events to send FHIR resource and DICOM image change messages to services like [Azure Event Hubs](../../event-hubs/event-hubs-about.md) or [Azure Functions](../../azure-functions/functions-overview.md) to trigger downstream automated workflows to enhance items such as operational data, data analysis, and visibility to the incoming data capturing near real time.
-
-## Secure
+| Operation | Trigger condition |
+||-|
+| FhirResourceCreated | A FHIR resource was created. |
+| FhirResourceUpdated | A FHIR resource was updated. |
+| FhirResourceDeleted | A FHIR resource was soft deleted. |
+| DicomImageCreated | A DICOM image was created. |
+| DicomImageDeleted | A DICOM image was deleted. |
+| DicomImageUpdated | A DICOM image was updated. |
-Events are built on a platform that supports protected health information compliance with privacy, safety, and security in mind.
+For more information about delete types in the FHIR service, see [FHIR REST API capabilities in Azure Health Data Services](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md).
-Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the events message receiving endpoints of your choice.
+> [!IMPORTANT]
+> Event notifications are sent only when the capability is turned on. The events capability doesn't send messages for past changes or when the capability is turned off.
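As a concrete example of wiring this up, an event subscription can be created on the workspace's Event Grid system topic with the Azure CLI. This is a sketch; the system topic name and endpoint are placeholders, and the event type names follow the Azure Health Data Services event schema.

```azurecli
# Sketch: subscribe to FHIR created/updated events on the workspace's system topic (names are placeholders).
az eventgrid system-topic event-subscription create \
  --name fhir-events-subscription \
  --resource-group <resource-group> \
  --system-topic-name <workspace-system-topic> \
  --endpoint https://<your-webhook-or-function-endpoint> \
  --included-event-types Microsoft.HealthcareApis.FhirResourceCreated Microsoft.HealthcareApis.FhirResourceUpdated
```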
## Next steps
-To learn about deploying events using the Azure portal, see
-
-> [!div class="nextstepaction"]
-> [Deploy events using the Azure portal](events-deploy-portal.md)
+[Deploy events using the Azure portal](events-deploy-portal.md)
-To learn about troubleshooting events, see
+[Troubleshoot events](events-troubleshooting-guide.md)
-> [!div class="nextstepaction"]
-> [Troubleshoot events](events-troubleshooting-guide.md)
-
-To learn about the frequently asks questions (FAQs) about events, see
-
-> [!div class="nextstepaction"]
-> [Frequently asked questions about Events](events-faqs.md)
+[Events FAQ](events-faqs.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Configure Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-identity-providers.md
+ Last updated 01/15/2024
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
iot-operations Howto Deploy Iot Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-deploy-iot-operations.md
# Previously updated : 12/06/2023 Last updated : 01/31/2024 #CustomerIntent: As an OT professional, I want to deploy Azure IoT Operations to a Kubernetes cluster.
Deploy Azure IoT Operations preview - enabled by Azure Arc to a Kubernetes clust
* An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* Azure CLI installed on your development machine. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli). This scenario requires Azure CLI version 2.46.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary.
+
+* The Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version:
+
+ ```bash
+ az extension add --upgrade --name azure-iot-ops
+ ```
+ * An Azure Arc-enabled Kubernetes cluster. If you don't have one, follow the steps in [Prepare your Azure Arc-enabled Kubernetes cluster](./howto-prepare-cluster.md?tabs=wsl-ubuntu). Using Ubuntu in Windows Subsystem for Linux (WSL) is the simplest way to get a Kubernetes cluster for testing. Azure IoT Operations should work on any CNCF-conformant kubernetes cluster. Currently, Microsoft only supports K3s on Ubuntu Linux and WSL, or AKS Edge Essentials on Windows.
-* Azure CLI installed on your development machine. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli). This scenario requires Azure CLI version 2.42.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary.
-
-* The Azure IoT Operations extension for Azure CLI.
+ Use the [verify-host](/cli/azure/iot/ops#az-iot-ops-verify-host) command from the Azure IoT Operations extension for Azure CLI on the cluster host to verify that the host is configured correctly for deployment:
- ```bash
- az extension add --name azure-iot-ops
+ ```azurecli
+ az iot ops verify-host
``` * An [Azure Key Vault](../../key-vault/general/overview.md) that has the **Permission model** set to **Vault access policy**. You can check this setting in the **Access configuration** section of an existing key vault.
Use the Azure portal to deploy Azure IoT Operations components to your Arc-enabl
1. In the Azure portal search bar, search for and select **Azure Arc**.
-1. Select **Azure IoT Operations (preview)** from the **Application services** section of the Azure Arc menu.
+1. Select **Azure IoT Operations (preview)** from the **Application Services** section of the Azure Arc menu.
1. Select **Create**.
-1. On the **Basics** tab of the **Install Azure IoT Operations Arc Extension** page, provide the following information:
+1. On the **Basic** tab of the **Install Azure IoT Operations Arc Extension** page, provide the following information:
| Field | Value | | -- | -- |
Use the Azure portal to deploy Azure IoT Operations components to your Arc-enabl
| **Subscription** | Select the subscription that contains your Arc-enabled Kubernetes cluster. | | **Azure Key vault** | Choose an existing key vault from the drop-down list or create a new one by selecting **Create new**. |
-1. On the **Automation** tab, the automation commands are populated based on your chosen cluster and key vault. Select an automation option:
-
- * **Azure CLI enablement + UI deployment -- Visually guided configuration**: Generates an Azure CLI command that configures your cluster. If you choose this option, you'll return to the Azure portal to complete the Azure IoT Operations deployment.
- * **Azure CLI deployment -- Efficiency unleashed**: Generates an Azure CLI command that configures your cluster and also deploys Azure IoT Operations.
-
-1. After choosing your automation option, copy the generated CLI command.
+1. Once you select a key vault, the **Automation** tab uses all the information you've selected so far to populate an Azure CLI command that configures your cluster and deploys Azure IoT Operations. Copy the CLI command.
- <!-- :::image type="content" source="../get-started/media/quickstart-deploy/install-extension-automation.png" alt-text="Screenshot of copying the CLI command from the automation tab for installing the Azure IoT Operations Arc extension in the Azure portal."::: -->
+ :::image type="content" source="../get-started/media/quickstart-deploy/install-extension-automation.png" alt-text="Screenshot of copying the CLI command from the automation tab for installing the Azure IoT Operations Arc extension in the Azure portal.":::
1. Sign in to Azure CLI on your development machine. To prevent potential permission issues later, sign in interactively with a browser here even if you've already logged in before.
Use the Azure portal to deploy Azure IoT Operations components to your Arc-enabl
Wait for the command to complete.
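The generated command is specific to your cluster and key vault, but its shape is typically similar to the following sketch; all names here are placeholders.

```azurecli
# Sketch: the portal-generated command usually resembles this (all names are placeholders).
az iot ops init \
  --cluster <cluster-name> \
  --resource-group <resource-group> \
  --kv-id $(az keyvault show --name <key-vault-name> --resource-group <resource-group> --query id --output tsv)
```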
- If you copied the **Azure CLI deployment** CLI command, then you're done with the cluster configuration and deployment.
-
-1. If you copied the **Azure CLI enablement + UI deployment** CLI command, return to the Azure portal and select **Review + Create**.
-
-1. Wait for the validation to pass and then select **Create**.
- #### [Azure CLI](#tab/cli) Use the Azure CLI to deploy Azure IoT Operations components to your Arc-enabled Kubernetes cluster.
iot-operations Howto Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-manage-secrets.md
# Last updated 12/19/2023-
- - ignite-2023
+ #CustomerIntent: As an IT professional, I want prepare an Azure-Arc enabled Kubernetes cluster with Key Vault secrets so that I can deploy Azure IoT Operations to it.
iot-operations Howto Prepare Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-prepare-cluster.md
An Azure Arc-enabled Kubernetes cluster is a prerequisite for deploying Azure Io
To prepare your Azure Arc-enabled Kubernetes cluster, you need: - An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- [Azure CLI version 2.42.0 or newer installed](/cli/azure/install-azure-cli) on your development machine.
+- [Azure CLI version 2.46.0 or newer installed](/cli/azure/install-azure-cli) on your development machine.
- Hardware that meets the [system requirements](/azure/azure-arc/kubernetes/system-requirements). ### Create a cluster
pod/resource-sync-agent-769bb66b79-z9n46 2/2 Running 0
pod/metrics-agent-6588f97dc-455j8 2/2 Running 0 10m ```
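The Arc agent pods shown above only appear after the cluster is connected to Azure Arc. If you haven't connected the cluster yet, the connection step is typically a single command from the `connectedk8s` Azure CLI extension; the cluster and resource group names below are placeholders.

```azurecli
# Sketch: connect an existing Kubernetes cluster to Azure Arc (names are placeholders).
az connectedk8s connect \
  --name <cluster-name> \
  --resource-group <resource-group>
```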
+To verify that your cluster is ready for Azure IoT Operations deployment, you can use the [verify-host](/cli/azure/iot/ops#az-iot-ops-verify-host) helper command in the Azure IoT Operations extension for Azure CLI. When run on the cluster host, this helper command checks connectivity to Azure Resource Manager and Microsoft Container Registry endpoints. If the cluster has an Ubuntu OS, it checks whether `nfs-common` is installed and, if it isn't, gives you the option to install it on your behalf.
+
+```azurecli
+az iot ops verify-host
+```
+ ## Next steps Now that you have an Azure Arc-enabled Kubernetes cluster, you can [deploy Azure IoT Operations](../deploy-iot-ops/howto-deploy-iot-operations.md).
iot-operations Quickstart Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-deploy.md
# Previously updated : 12/06/2023 Last updated : 01/31/2024 #CustomerIntent: As a < type of user >, I want < what? > so that < why? >.
For this quickstart, we recommend GitHub Codespaces as a quick way to get starte
* Azure CLI installed on your development machine. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
- This quickstart requires Azure CLI version 2.42.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary.
+ This quickstart requires Azure CLI version 2.46.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary.
-* The Azure IoT Operations extension for Azure CLI.
+* The Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version:
- ```powershell
- az extension add --name azure-iot-ops
+ ```bash
+ az extension add --upgrade --name azure-iot-ops
``` # [Linux](#tab/linux)
For this quickstart, we recommend GitHub Codespaces as a quick way to get starte
* Azure CLI installed on your development machine. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
- This quickstart requires Azure CLI version 2.42.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary.
+ This quickstart requires Azure CLI version 2.46.0 or higher. Use `az --version` to check your version and `az upgrade` to update if necessary.
-* The Azure IoT Operations extension for Azure CLI.
+* The Azure IoT Operations extension for Azure CLI. Use the following command to add the extension or update it to the latest version:
```bash
- az extension add --name azure-iot-ops
+ az extension add --upgrade --name azure-iot-ops
```
On Ubuntu Linux, use K3s to create a Kubernetes cluster.
+## Verify cluster
+
+Use the Azure IoT Operations extension for Azure CLI to verify that your cluster host is configured correctly for deployment by using the [verify-host](/cli/azure/iot/ops#az-iot-ops-verify-host) command on the cluster host:
+
+```azurecli
+az iot ops verify-host
+```
+
+This helper command checks connectivity to Azure Resource Manager and Microsoft Container Registry endpoints. If the cluster has an Ubuntu OS, it checks whether `nfs-common` is installed and, if it isn't, gives you the option to install it on your behalf.
+ ## Configure cluster and deploy Azure IoT Operations Part of the deployment process is to configure your cluster so that it can communicate securely with your Azure IoT Operations components and key vault. The Azure CLI command `az iot ops init` does this for you. Once your cluster is configured, then you can deploy Azure IoT Operations.
az keyvault create --enable-rbac-authorization false --name "<your unique key va
1. In the Azure portal search bar, search for and select **Azure Arc**.
-1. Select **Azure IoT Operations (preview)** from the **Application services** section of the Azure Arc menu.
+1. Select **Azure IoT Operations (preview)** from the **Application Services** section of the Azure Arc menu.
:::image type="content" source="./media/quickstart-deploy/arc-iot-operations.png" alt-text="Screenshot of selecting Azure IoT Operations from Azure Arc."::: 1. Select **Create**.
-1. On the **Basics** tab of the **Install Azure IoT Operations Arc Extension** page, provide the following information:
+1. On the **Basic** tab of the **Install Azure IoT Operations Arc Extension** page, provide the following information:
| Field | Value | | -- | -- |
az keyvault create --enable-rbac-authorization false --name "<your unique key va
| **Subscription** | Select the subscription that contains your Arc-enabled Kubernetes cluster. | | **Azure Key Vault** | Use the **Select a key vault** drop-down menu to choose the key vault that you set up in the previous section. |
-1. Once you select a key vault, the **Automation** tab populates an Azure CLI command that configures your cluster with your deployment information. Copy the CLI command.
-
- >[!TIP]
- >Select the **Azure CLI deployment -- Efficiency unleashed** automation option to generate a CLI command that performs the configuration tasks on your cluster and then also deploys Azure IoT Operations.
+1. Once you select a key vault, the **Automation** tab uses all the information you've selected so far to populate an Azure CLI command that configures your cluster and deploys Azure IoT Operations. Copy the CLI command.
- <!-- :::image type="content" source="./media/quickstart-deploy/install-extension-automation.png" alt-text="Screenshot of copying the CLI command from the automation tab for installing the Azure IoT Operations Arc extension in the Azure portal."::: -->
+ :::image type="content" source="./media/quickstart-deploy/install-extension-automation.png" alt-text="Screenshot of copying the CLI command from the automation tab for installing the Azure IoT Operations Arc extension in the Azure portal.":::
1. Sign in to Azure CLI on your development machine or in your codespace terminal. To prevent potential permission issues later, sign in interactively with a browser here even if you've already logged in before.
az keyvault create --enable-rbac-authorization false --name "<your unique key va
``` > [!NOTE]
- > When using a Github codespace in a browser, `az login` returns a localhost error in the browser window after logging in. To fix, either:
+ > When using a GitHub codespace in a browser, `az login` returns a localhost error in the browser window after logging in. To fix, either:
> > * Open the codespace in VS Code desktop, and then run `az login` again in the browser terminal. > * After you get the localhost error on the browser, copy the URL from the browser and run `curl "<URL>"` in a new terminal tab. You should see a JSON response with the message "You have logged into Microsoft Azure!." 1. Run the copied `az iot ops init` command on your development machine or in your codespace terminal.
- Wait for the command to complete before continuing to the next step.
- >[!TIP] >If you get an error that says *Your device is required to be managed to access your resource*, go back to the previous step and make sure that you signed in interactively.
-1. Return to the Azure portal and select **Review + Create**.
-
-1. Wait for the validation to pass and then select **Create**.
- ## View resources in your cluster While the deployment is in progress, you can watch the resources being applied to your cluster. You can use kubectl commands to observe changes on the cluster or, since the cluster is Arc-enabled, you can use the Azure portal.
iot-operations Howto Configure Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-reference.md
The two keys:
Each dataset can only have one primary key.
-All incoming data within the pipeline is stored in the `equipment` dataset in the reference data store. The stored data includes the `installationDate` timestamp and keys such as `equipment` and `location`.
+All incoming data within the pipeline is stored in the `equipment` dataset in the reference data store. The stored data includes the `installationDate` timestamp and keys such as `equipment name` and `location`.
These properties are available in the enrichment stages of other pipelines where you can use them to provide context and add additional information to the messages being processed. For example, you can use this data to supplement sensor readings from a specific piece of equipment with its installation date and location. To learn more, see the [Enrich](howto-configure-enrich-stage.md) stage.
-Within the `equipment` dataset, the `asset` key serves as the primary key. When th pipeline ingests new data, Data Processor checks this property to determine how to handle the incoming data:
+Within the `equipment` dataset, the `equipment name` key serves as the primary key. When the pipeline ingests new data, Data Processor checks this property to determine how to handle the incoming data:
-- If a message arrives with an `asset` key that doesn't yet exist in the dataset (such as `Pump`), Data Processor adds a new entry to the dataset. This entry includes the new `asset` type and its associated data such as `location`, `installationDate`, and `isSpare`.-- If a message arrives with an `asset` key that matches an existing entry in the dataset (such as `Slicer`), Data Processor updates that entry. The associated data for that equipment such as `location`, `installationDate`, and `isSpare` updates with the values from the incoming message.
+- If a message arrives with an `equipment name` key that doesn't yet exist in the dataset (such as `Pump`), Data Processor adds a new entry to the dataset. This entry includes the new `equipment name` type and its associated data such as `location`, `installationDate`, and `isSpare`.
+- If a message arrives with an `equipment name` key that matches an existing entry in the dataset (such as `Slicer`), Data Processor updates that entry. The associated data for that equipment such as `location`, `installationDate`, and `isSpare` updates with the values from the incoming message.
The `equipment` dataset in the reference data store is an up-to-date source of information that can enhance and contextualize the data flowing through other pipelines in Data Processor using the `Enrich` stage.
iot-operations Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/troubleshoot/troubleshoot.md
This article contains troubleshooting tips for Azure IoT Operations Preview.
For general deployment and configuration troubleshooting, you can use the Azure CLI IoT Operations *check* and *support* commands.
-[Azure CLI version 2.42.0 or higher](/cli/azure/install-azure-cli) is required and the [Azure IoT Operations extension](/cli/azure/iot/ops) installed.
+[Azure CLI version 2.46.0 or higher](/cli/azure/install-azure-cli) is required and the [Azure IoT Operations extension](/cli/azure/iot/ops) installed.
- Use [az iot ops check](/cli/azure/iot/ops#az-iot-ops-check) to evaluate IoT Operations service deployment for health, configuration, and usability. The *check* command can help you find problems in your deployment and configuration.
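A minimal troubleshooting pass usually starts with the commands below; the `support create-bundle` command name is taken from the extension's support group, so verify it with `az iot ops support --help` if your extension version differs.

```azurecli
# Evaluate deployment health, configuration, and usability.
az iot ops check

# Collect logs and resource state into a local support bundle for deeper analysis
# (command name assumed from the extension's support group; verify with --help).
az iot ops support create-bundle
```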
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
load-balancer Configure Vm Scale Set Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-portal.md
Last updated 01/11/2024-+ # Configure a Virtual Machine Scale Set with an existing Azure Standard Load Balancer
load-balancer Quickstart Load Balancer Standard Public Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-terraform.md
+
+ Title: "Quickstart: Create a public load balancer - Terraform"
+
+description: This quickstart shows how to create a load balancer by using Terraform.
++++++ Last updated : 01/02/2024++
+#Customer intent: I want to create a load balancer by using Terraform so that I can load balance internet traffic to VMs.
++
+# Quickstart: Create a public load balancer to load balance VMs using Terraform
+
+This quickstart shows you how to deploy a standard load balancer to load balance virtual machines using Terraform.
++
+> [!div class="checklist"]
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+> * Create an Azure Virtual Network using [azurerm_virtual_network](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_network)
+> * Create an Azure subnet using [azurerm_subnet](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subnet)
+> * Create an Azure public IP using [azurerm_public_ip](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/public_ip)
+> * Create an Azure Load Balancer using [azurerm_lb](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/lb)
+> * Create an Azure network interface using [azurerm_network_interface](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface)
+> * Create an Azure network interface load balancer backend address pool association using [azurerm_network_interface_backend_address_pool_association](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/network_interface_backend_address_pool_association)
+> * Create an Azure Linux Virtual Machine using [azurerm_linux_virtual_machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/linux_virtual_machine)
+> * Create an Azure Virtual Machine Extension using [azurerm_virtual_machine_extension](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_extension)
+
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ ```terraform
+ terraform {
+   required_version = ">=0.12"
+
+   required_providers {
+     azapi = {
+       source  = "azure/azapi"
+       version = "~>1.5"
+     }
+     azurerm = {
+       source  = "hashicorp/azurerm"
+       version = "~>2.0"
+     }
+     random = {
+       source  = "hashicorp/random"
+       version = "~>3.0"
+     }
+   }
+ }
+
+ provider "azurerm" {
+   features {}
+ }
+ ```
+
+1. Create a file named `main.tf` and insert the following code:
+
+ ```terraform
+ resource "random_string" "my_resource_group" {
+ length = 8
+ upper = false
+ special = false
+ }
+
+ # Create Resource Group
+ resource "azurerm_resource_group" "my_resource_group" {
+ name = "test-group-${random_string.my_resource_group.result}"
+ location = var.resource_group_location
+ }
+
+ # Create Virtual Network
+ resource "azurerm_virtual_network" "my_virtual_network" {
+   name                = var.virtual_network_name
+   address_space       = ["10.0.0.0/16"]
+   location            = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+ }
+
+ # Create a subnet in the Virtual Network
+ resource "azurerm_subnet" "my_subnet" {
+   name                 = var.subnet_name
+   resource_group_name  = azurerm_resource_group.my_resource_group.name
+   virtual_network_name = azurerm_virtual_network.my_virtual_network.name
+   address_prefixes     = ["10.0.1.0/24"]
+ }
+
+ # Create Network Security Group and rules
+ resource "azurerm_network_security_group" "my_nsg" {
+   name                = var.network_security_group_name
+   location            = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+
+   security_rule {
+     name                       = "web"
+     priority                   = 1008
+     direction                  = "Inbound"
+     access                     = "Allow"
+     protocol                   = "Tcp"
+     source_port_range          = "*"
+     destination_port_range     = "80"
+     source_address_prefix      = "*"
+     destination_address_prefix = "10.0.1.0/24"
+   }
+ }
+
+ # Associate the Network Security Group to the subnet
+ resource "azurerm_subnet_network_security_group_association" "my_nsg_association" {
+   subnet_id                 = azurerm_subnet.my_subnet.id
+   network_security_group_id = azurerm_network_security_group.my_nsg.id
+ }
+
+ # Create Public IP
+ resource "azurerm_public_ip" "my_public_ip" {
+   name                = var.public_ip_name
+   location            = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   allocation_method   = "Static"
+   sku                 = "Standard"
+ }
+
+ # Create Network Interface
+ resource "azurerm_network_interface" "my_nic" {
+   count               = 2
+   name                = "${var.network_interface_name}${count.index}"
+   location            = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+
+   ip_configuration {
+     name                          = "ipconfig${count.index}"
+     subnet_id                     = azurerm_subnet.my_subnet.id
+     private_ip_address_allocation = "Dynamic"
+     primary = true
+   }
+ }
+
+ # Associate Network Interface to the Backend Pool of the Load Balancer
+ resource "azurerm_network_interface_backend_address_pool_association" "my_nic_lb_pool" {
+   count                   = 2
+   network_interface_id    = azurerm_network_interface.my_nic[count.index].id
+   ip_configuration_name   = "ipconfig${count.index}"
+   backend_address_pool_id = azurerm_lb_backend_address_pool.my_lb_pool.id
+ }
+
+ # Create Virtual Machine
+ resource "azurerm_linux_virtual_machine" "my_vm" {
+   count                 = 2
+   name                  = "${var.virtual_machine_name}${count.index}"
+   location              = azurerm_resource_group.my_resource_group.location
+   resource_group_name   = azurerm_resource_group.my_resource_group.name
+   network_interface_ids = [azurerm_network_interface.my_nic[count.index].id]
+   size                  = var.virtual_machine_size
+
+   os_disk {
+     name                 = "${var.disk_name}${count.index}"
+     caching              = "ReadWrite"
+     storage_account_type = var.redundancy_type
+   }
+
+   source_image_reference {
+     publisher = "Canonical"
+     offer     = "0001-com-ubuntu-server-jammy"
+     sku       = "22_04-lts-gen2"
+     version   = "latest"
+   }
+
+   admin_username                  = var.username
+   admin_password                  = var.password
+   disable_password_authentication = false
+
+ }
+
+ # Enable virtual machine extension and install Nginx
+ resource "azurerm_virtual_machine_extension" "my_vm_extension" {
+   count                = 2
+   name                 = "Nginx"
+   virtual_machine_id   = azurerm_linux_virtual_machine.my_vm[count.index].id
+   publisher            = "Microsoft.Azure.Extensions"
+   type                 = "CustomScript"
+   type_handler_version = "2.0"
+
+   settings = <<SETTINGS
+  {
+   "commandToExecute": "sudo apt-get update && sudo apt-get install nginx -y && echo \"Hello World from $(hostname)\" > /var/www/html/https://docsupdatetracker.net/index.html && sudo systemctl restart nginx"
+  }
+ SETTINGS
+
+ }
+
+ # Create Public Load Balancer
+ resource "azurerm_lb" "my_lb" {
+   name                = var.load_balancer_name
+   location            = azurerm_resource_group.my_resource_group.location
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   sku                 = "Standard"
+
+   frontend_ip_configuration {
+     name                 = var.public_ip_name
+     public_ip_address_id = azurerm_public_ip.my_public_ip.id
+   }
+ }
+
+ resource "azurerm_lb_backend_address_pool" "my_lb_pool" {
+   loadbalancer_id      = azurerm_lb.my_lb.id
+   name                 = "test-pool"
+ }
+
+ resource "azurerm_lb_probe" "my_lb_probe" {
+   resource_group_name = azurerm_resource_group.my_resource_group.name
+   loadbalancer_id     = azurerm_lb.my_lb.id
+   name                = "test-probe"
+   port                = 80
+ }
+
+ resource "azurerm_lb_rule" "my_lb_rule" {
+   resource_group_name            = azurerm_resource_group.my_resource_group.name
+   loadbalancer_id                = azurerm_lb.my_lb.id
+   name                           = "test-rule"
+   protocol                       = "Tcp"
+   frontend_port                  = 80
+   backend_port                   = 80
+   disable_outbound_snat          = true
+   frontend_ip_configuration_name = var.public_ip_name
+   probe_id                       = azurerm_lb_probe.my_lb_probe.id
+   backend_address_pool_ids       = [azurerm_lb_backend_address_pool.my_lb_pool.id]
+ }
+
+ resource "azurerm_lb_outbound_rule" "my_lboutbound_rule" {
+   resource_group_name     = azurerm_resource_group.my_resource_group.name
+   name                    = "test-outbound"
+   loadbalancer_id         = azurerm_lb.my_lb.id
+   protocol                = "Tcp"
+   backend_address_pool_id = azurerm_lb_backend_address_pool.my_lb_pool.id
+
+   frontend_ip_configuration {
+     name = var.public_ip_name
+   }
+ }
+ ```
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ ```terraform
+ variable "resource_group_location" {
+   type        = string
+   default     = "eastus"
+   description = "Location of the resource group."
+ }
+
+ variable "username" {
+   type        = string
+   default     = "microsoft"
+   description = "The username for the local account that will be created on the new VM."
+ }
+
+ variable "password" {
+   type        = string
+   default     = "Microsoft@123"
+   description = "The passoword for the local account that will be created on the new VM."
+ }
+
+ variable "virtual_network_name" {
+   type        = string
+   default     = "test-vnet"
+   description = "Name of the Virtual Network."
+ }
+
+ variable "subnet_name" {
+   type        = string
+   default     = "test-subnet"
+   description = "Name of the subnet."
+ }
+
+ variable "public_ip_name" {
+   type        = string
+   default     = "test-public-ip"
+   description = "Name of the Public IP."
+ }
+
+ variable "network_security_group_name" {
+   type        = string
+   default     = "test-nsg"
+   description = "Name of the Network Security Group."
+ }
+
+ variable "network_interface_name" {
+   type        = string
+   default     = "test-nic"
+   description = "Name of the Network Interface."  
+ }
+
+ variable "virtual_machine_name" {
+   type        = string
+   default     = "test-vm"
+   description = "Name of the Virtual Machine."
+ }
+
+ variable "virtual_machine_size" {
+   type        = string
+   default     = "Standard_B2s"
+   description = "Size or SKU of the Virtual Machine."
+ }
+
+ variable "disk_name" {
+   type        = string
+   default     = "test-disk"
+   description = "Name of the OS disk of the Virtual Machine."
+ }
+
+ variable "redundancy_type" {
+   type        = string
+   default     = "Standard_LRS"
+   description = "Storage redundancy type of the OS disk."
+ }
+
+ variable "load_balancer_name" {
+   type        = string
+   default     = "test-lb"
+   description = "Name of the Load Balancer."
+ }
+ ```
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ ```terraform
+ output "public_ip_address" {
+ value = "http://${azurerm_public_ip.my_public_ip.ip_address}"
+ }
+ ```
+
+## Initialize Terraform
++
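+
+A minimal sketch of this step, assuming the standard Terraform CLI workflow and that you're in the directory that contains the configuration files, is to run `terraform init`. The `-upgrade` flag allows provider plugins to update to the newest versions that satisfy the constraints in `providers.tf`.
+
+```console
+terraform init -upgrade
+```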
+## Create a Terraform execution plan
++
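+
+Continuing the same sketch, create an execution plan and save it to a file so that the apply step runs exactly the changes you reviewed. The plan file name `main.tfplan` is only an example.
+
+```console
+terraform plan -out main.tfplan
+```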
+## Apply a Terraform execution plan
++
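+
+Applying the saved plan from the previous step creates the resources defined in the configuration:
+
+```console
+terraform apply main.tfplan
+```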
+## Verify the results
+
+1. When you apply the execution plan, Terraform displays the frontend public IP address. If you've cleared the screen, you can retrieve that value with the following Terraform command:
+
+ ```console
+ echo $(terraform output -raw public_ip_address)
+ ```
+
+1. Paste the public IP address into the address bar of your web browser. The custom VM page of the Nginx web server is displayed in the browser.
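+
+ As an alternative check from the command line (assuming `curl` is available), you can request the page directly. The `public_ip_address` output value already includes the `http://` prefix:
+
+ ```console
+ curl "$(terraform output -raw public_ip_address)"
+ ```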
+
+## Clean up resources
++
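+
+When you no longer need the resources, the cleanup for this sketch plans and applies a destroy operation:
+
+```console
+terraform plan -destroy -out main.destroy.tfplan
+terraform apply main.destroy.tfplan
+```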
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+In this quickstart, you:
+
+* Created an Azure Load Balancer
+* Attached two VMs to the load balancer
+* Tested the load balancer
+
+To learn more about Azure Load Balancer, continue to:
+> [!div class="nextstepaction"]
+> [What is Azure Load Balancer?](load-balancer-overview.md)
load-balancer Upgrade Basic Standard With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-with-powershell.md
This article introduces a PowerShell module that creates a Standard Load Balancer with the same configuration as the Basic Load Balancer, then associates the Virtual Machine Scale Set or Virtual Machine backend pool members with the new Load Balancer.
+For an in-depth walk-through of the upgrade module and process, see the following video:
+> [!VIDEO https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6]
+[01:38 - Quick Demo](https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h0m35s) | [03:06 - Step-by-step](https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h3m06s) | [32:54 - Recovery](https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h32m45s) | [40:55 - Advanced Scenarios](https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h40m55s) | [57:54 - Resources](https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h57m54s)
+ ## Upgrade Overview The PowerShell module performs the following functions:
At the end of its execution, the upgrade module performs the following validatio
### What happens if my upgrade fails mid-migration?
-The module is designed to accommodate failures, either due to unhandled errors or unexpected script termination. The failure design is a 'fail forward' approach, where instead of attempting to move back to the Basic Load Balancer, you should correct the issue causing the failure (see the error output or log file), and retry the migration again, specifying the `-FailedMigrationRetryFilePathLB <BasicLoadBalancerbackupFilePath> -FailedMigrationRetryFilePathVMSS <VMSSBackupFile>` parameters. For public load balancers, because the Public IP Address SKU has been updated to Standard, moving the same IP back to a Basic Load Balancer won't be possible.
+The module is designed to accommodate failures, either due to unhandled errors or unexpected script termination. The failure design is a 'fail forward' approach, where instead of attempting to move back to the Basic Load Balancer, you should correct the issue causing the failure (see the error output or log file), and retry the migration again, specifying the `-FailedMigrationRetryFilePathLB <BasicLoadBalancerBackupFilePath> -FailedMigrationRetryFilePathVMSS <VMSSBackupFile>` parameters. For public load balancers, because the Public IP Address SKU has been updated to Standard, moving the same IP back to a Basic Load Balancer won't be possible.
+
+[**Watch a video of the recovery process**](https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=8e203b99-41ff-4454-9cbd-58856708f1c6#time=0h32m45s)
If your failed migration was targeting multiple load balancers at the same time, using the `-MultiLBConfig` parameter, recover each Load Balancer individually using the same process as below.
logic-apps Logic Apps Enterprise Integration Rosettanet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-rosettanet.md
Previously updated : 01/04/2024 Last updated : 01/31/2024 #Customer intent: As a logic apps developer, I want to send and receive RosettaNet messages using workflows in Azure Logic Apps so that I can use a standardized process to share business information with partners.
The RosettaNet connector is available only for Consumption logic app workflows.
| Logic app | Environment | Connector version | |--|-|-|
-| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. The **RosettaNet** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [RosettaNet connector operations](#rosettanet-operations) <br>- [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
+| **Consumption** | Multitenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. The **RosettaNet** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [RosettaNet connector operations](#rosettanet-operations) <br>- [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
| **Consumption** | Integration service environment (ISE) | Built-in connector, which appears in the designer with the **CORE** label. The **RosettaNet** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [RosettaNet connector operations](#rosettanet-operations) <br>- [ISE message limits](logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) | <a name="rosettanet-operations"></a>
The **RosettaNet** connector has no triggers. The following table describes the
> To work together, both your integration account and logic app resource must exist in the same Azure subscription and Azure region. > To use integration account artifacts in your workflow, make sure to [link your logic app resource to your integration account](logic-apps-enterprise-integration-create-integration-account.md?tabs=consumption#link-account).
-* At least two [partners](../logic-apps/logic-apps-enterprise-integration-partners.md) that are defined in your integration account and configured with the **DUNS** qualifier under **Business Identities** in the Azure portal.
+* At least two [partners](../logic-apps/logic-apps-enterprise-integration-partners.md) that are defined in your integration account and that use the **DUNS** qualifier under **Business Identities** in the Azure portal.
+
+ > [!NOTE]
+ >
+ > Make sure that you select **DUNS** as the qualifier, which you can find near the
+ > bottom of the **Qualifier** list, and not **1 - D-U-N-S (Dun & Bradstreet)**.
* Optional [certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md) for encrypting, decrypting, or signing the messages that you upload to the integration account. Certificates are required only if you use signing or encryption.
To send or receive RosettaNet messages, your integration account requires a PIP
| **PIP Code** | Yes | The three-digit PIP code. For more information, see [RosettaNet PIPs](/biztalk/adapters-and-accelerators/accelerator-rosettanet/rosettanet-pips). | | **PIP Version** | Yes | The PIP version number, which depends on your selected PIP code. |
- For more information about these PIP properties, visit the [RosettaNet website](https://resources.gs1us.org/RosettaNet-Standards/Standards-Library/PIP-Directory#1043208-pipsreg).
+ For more information about these PIP properties, visit the [RosettaNet website](https://www.gs1us.org/resources/rosettanet/standards-library/pip-directory).
1. When you're done, select **OK** to create the PIP configuration.
To send or receive RosettaNet messages, your integration account requires a PIP
1. On the integration account navigation menu, under **Settings**, select **Agreements**.
- :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/select-agreements.png" alt-text="Screenshot of the Azure portal with the integration account page open. On the navigation menu, Agreements is selected.":::
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/select-agreements.png" alt-text="Screenshot shows Azure portal with the integration account page open. On the navigation menu, the Agreements option is selected.":::
1. On the **Agreements** page, select **Add**. Under **Add**, enter your agreement details.
- :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-agreement-details.png" alt-text="Screenshot of the Agreements page, with Add selected. On the Add pane, boxes appear for the agreement name and type and for partner information.":::
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-agreement-details.png" alt-text="Screenshot shows Agreements page with Add option selected. On the pane named Add, boxes appear for the agreement name and type and for partner information.":::
| Property | Required | Description | |-|-|-|
To send or receive RosettaNet messages, your integration account requires a PIP
| **Action URL** | Yes | The URL to use for sending action messages. The URL is a required field for both synchronous and asynchronous messages. | | **Acknowledgment URL** | Yes | The URL to use for sending acknowledgment messages. The URL is a required field for asynchronous messages. |
- :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-agreement-send-details.png" alt-text="Screenshot of the Send Settings page, with options for signing and encrypting messages and for entering algorithms, certificates, and endpoints.":::
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-agreement-send-details.png" alt-text="Screenshot shows the Send Settings page, with options for signing and encrypting messages and for entering algorithms, certificates, and endpoints.":::
1. To set up your agreement with the RosettaNet PIP references for partners, select **RosettaNet PIP references**. Under **PIP Name**, select the name of the PIP that you created earlier.
To send or receive RosettaNet messages, your integration account requires a PIP
Your selection populates the remaining properties, which are based on the PIP that you set up in your integration account. If necessary, you can change the **PIP Role**.
- :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-agreement-selected-pip.png" alt-text="Screenshot that shows a table of PIP information. A row for the PIP called MyPIPConfig contains accurate information.":::
+ :::image type="content" source="media/logic-apps-enterprise-integration-rosettanet/add-agreement-selected-pip.png" alt-text="Screenshot shows a table with PIP information. The row for the PIP named MyPIPConfig shows accurate information.":::
After you complete these steps, you're ready to send or receive RosettaNet messages.
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
ms.suite: integration Previously updated : 01/29/2024 Last updated : 01/30/2024
For more information about security in Azure, review these topics:
## Access to logic app operations
-For Consumption logic apps only, before you can create or manage logic apps and their connections, you need specific permissions, which are provided through roles using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can also set up permissions so that only specific users or groups can run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, you can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has the following specific roles:
+For Consumption logic apps only, before you can create or manage logic apps and their connections, you need specific permissions, which are provided through roles using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). You can also set up permissions so that only specific users or groups can run specific tasks, such as managing, editing, and viewing logic apps. To control their permissions, you can assign built-in or customized roles to members who have access to your Azure subscription. Azure Logic Apps has the following specific roles, based on whether you have a Consumption or Standard logic app workflow:
-* [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor): Lets you manage logic apps, but you can't change access to them.
+##### Consumption workflows
+
+| Role | Description |
+||-|
+| [**Logic App Contributor**](../role-based-access-control/built-in-roles.md#logic-app-contributor) | You can manage logic app workflows, but you can't change access to them. |
+| [**Logic App Operator**](../role-based-access-control/built-in-roles.md#logic-app-operator) | You can read, enable, and disable logic app workflows, but you can't edit or update them. |
+| [**Contributor**](../role-based-access-control/built-in-roles.md#contributor) | You have full access to manage all resources, but you can't assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries. |
-* [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator): Lets you read, enable, and disable logic apps, but you can't edit or update them.
+For example, suppose you have to work with a logic app workflow that you didn't create and have to authenticate connections used by that workflow. Your Azure subscription requires **Contributor** permissions for the resource group that contains that logic app resource. If you create a logic app resource, you automatically have Contributor access.
-* [Contributor](../role-based-access-control/built-in-roles.md#contributor): Grants full access to manage all resources, but doesn't allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.
+To prevent others from changing or deleting your logic app workflow, you can use [Azure Resource Lock](../azure-resource-manager/management/lock-resources.md). This capability prevents others from changing or deleting production resources. For more information about connection security, review [Connection configuration in Azure Logic Apps](../connectors/introduction.md#connection-configuration) and [Connection security and encryption](../connectors/introduction.md#connection-security-encryption).
- For example, suppose you have to work with a logic app that you didn't create and authenticate connections used by that logic app's workflow. Your Azure subscription requires Contributor permissions for the resource group that contains that logic app resource. If you create a logic app resource, you automatically have Contributor access.
+##### Standard workflows
-To prevent others from changing or deleting your logic app, you can use [Azure Resource Lock](../azure-resource-manager/management/lock-resources.md). This capability prevents others from changing or deleting production resources. For more information about connection security, review [Connection configuration in Azure Logic Apps](../connectors/introduction.md#connection-configuration) and [Connection security and encryption](../connectors/introduction.md#connection-security-encryption).
+> [!NOTE]
+>
+> This capability is in preview and is subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+| Role | Description |
+||-|
+| [**Logic Apps Standard Reader** (Preview)](../role-based-access-control/built-in-roles.md#logic-apps-standard-reader) | You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history. |
+| [**Logic Apps Standard Operator** (Preview)](../role-based-access-control/built-in-roles.md#logic-apps-standard-operator) | You have access to enable, resubmit, and disable workflows and to create connections to services, systems, and networks for a Standard logic app. The Operator role can perform administration and support tasks on the Azure Logic Apps platform, but doesn't have permissions to edit workflows or settings. |
+| [**Logic Apps Standard Developer** (Preview)](../role-based-access-control/built-in-roles.md#logic-apps-standard-developer) | You have access to create and edit workflows, connections, and settings for a Standard logic app. The Developer role doesn't have permissions to make changes outside the scope of workflows, for example, application-wide changes such as configuring virtual network integration. App Service plans aren't supported. |
+| [**Logic Apps Standard Contributor** (Preview)](../role-based-access-control/built-in-roles.md#logic-apps-standard-contributor) | You have access to manage all aspects of a Standard logic app, but you can't change access or ownership. |
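+
+You can assign any of these roles in the Azure portal or through the Azure CLI. As a sketch (the object ID and the resource ID are placeholders that you replace with your own values), the following command assigns the **Logic App Operator** role, scoped to a single Consumption logic app resource:
+
+```console
+az role assignment create \
+  --assignee-object-id <user-or-group-object-id> \
+  --assignee-principal-type User \
+  --role "Logic App Operator" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Logic/workflows/<logic-app-name>"
+```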
<a name="secure-run-history"></a>
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024 ms.suite: integration
machine-learning Apache Spark Azure Ml Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-azure-ml-concepts.md
A Conda dependency YAML file can define many session-level Conda packages in a s
- [Azure Synapse Runtime for Apache Spark 3.3](../synapse-analytics/spark/apache-spark-33-runtime.md#python-libraries-normal-vms) - [Azure Synapse Runtime for Apache Spark 3.2](../synapse-analytics/spark/apache-spark-32-runtime.md#python-libraries-normal-vms)
+> [!IMPORTANT]
+> Azure Synapse Runtime for Apache Spark: Announcements
+> * Azure Synapse Runtime for Apache Spark 3.2:
+> * EOLA Announcement Date: July 8, 2023
+> * End of Support Date: July 8, 2024. After this date, the runtime will be disabled.
+> * For continued support and optimal performance, we advise that you migrate to Azure Synapse Runtime for Apache Spark 3.3.
+ > [!NOTE] > For a session-level Conda package: > - the *Cold start* will need about ten to fifteen minutes.
machine-learning Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-customer-managed-keys.md
monikerRange: 'azureml-api-2 || azureml-api-1'
# Customer-managed keys for Azure Machine Learning
-Azure Machine Learning is built on top of multiple Azure services. While the data is stored securely using encryption keys that Microsoft provides, you can enhance security by also providing your own (customer-managed) keys. The keys you provide are stored securely using Azure Key Vault. Your data is stored on a set of additional resources managed in your Azure subscription.
+Azure Machine Learning is built on top of multiple Azure services. Although the stored data is encrypted through encryption keys that Microsoft provides, you can enhance security by also providing your own (customer-managed) keys. The keys that you provide are stored in Azure Key Vault. Your data is stored on a set of additional resources that you manage in your Azure subscription.
-In addition to customer-managed keys, Azure Machine Learning also provides a [hbi_workspace flag](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace). Enabling this flag reduces the amount of data Microsoft collects for diagnostic purposes and enables [extra encryption in Microsoft-managed environments](../security/fundamentals/encryption-atrest.md). This flag also enables the following behaviors:
+In addition to customer-managed keys, Azure Machine Learning provides an [hbi_workspace flag](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace). Enabling this flag reduces the amount of data that Microsoft collects for diagnostic purposes and enables [extra encryption in Microsoft-managed environments](../security/fundamentals/encryption-atrest.md). This flag also enables the following behaviors:
-* Starts encrypting the local scratch disk in your Azure Machine Learning compute cluster, provided you haven't created any previous clusters in that subscription. Else, you need to raise a support ticket to enable encryption of the scratch disk of your compute clusters.
+* Starts encrypting the local scratch disk in your Azure Machine Learning compute cluster, if you didn't create any previous clusters in that subscription. Otherwise, you need to raise a support ticket to enable encryption of the scratch disk for your compute clusters.
* Cleans up your local scratch disk between jobs.
-* Securely passes credentials for your storage account, container registry, and SSH account from the execution layer to your compute clusters using your key vault.
+* Securely passes credentials for your storage account, container registry, and Secure Shell (SSH) account from the execution layer to your compute clusters by using your key vault.
-> [!TIP]
-> The `hbi_workspace` flag does not impact encryption in transit, only encryption at rest.
+The `hbi_workspace` flag doesn't affect encryption in transit. It affects only encryption at rest.
## Prerequisites * An Azure subscription.
-* An Azure Key Vault instance. The key vault contains the key(s) used to encrypt your services.
+* An Azure Key Vault instance. The key vault contains the keys for encrypting your services.
- * The key vault instance must enable soft delete and purge protection.
- * The managed identity for the services secured by a customer-managed key must have the following permissions in key vault:
+The key vault must enable soft delete and purge protection. The managed identity for the services that you help secure by using a customer-managed key must have the following permissions to the key vault:
- * wrap key
- * unwrap key
- * get
+* Wrap Key
+* Unwrap Key
+* Get
- For example, the managed identity for Azure Cosmos DB would need to have those permissions to the key vault.
+For example, the managed identity for Azure Cosmos DB would need to have those permissions to the key vault.
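+
+As a sketch of granting these permissions through the Azure CLI, assuming the key vault uses access policies rather than Azure RBAC for authorization (the vault name and the managed identity's object ID are placeholders):
+
+```console
+az keyvault set-policy \
+  --name <key-vault-name> \
+  --object-id <managed-identity-object-id> \
+  --key-permissions get wrapKey unwrapKey
+```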
## Limitations
-* After workspace creation, the customer-managed encryption key for resources the workspace depends on can only be updated to another key in the original Azure Key Vault resource.
-* Encrypted data is stored on resources that live in a Microsoft-managed resource group in your subscription. You cannot create these resources upfront or transfer ownership of these to you. Data lifecycle is managed indirectly via the Azure ML APIs as you create objects in Azure Machine Learning service.
-* You can't delete Microsoft-managed resources used for customer-managed keys without also deleting your workspace.
-* The compute cluster OS disk cannot be encrypted using your customer-managed keys, but only Microsoft-managed keys.
+* After workspace creation, the customer-managed encryption key for resources that the workspace depends on can only be updated to another key in the original Azure Key Vault resource.
+* Encrypted data is stored on resources in a Microsoft-managed resource group in your subscription. You can't create these resources up front or transfer ownership of them to you. The data lifecycle is managed indirectly via the Azure Machine Learning APIs as you create objects in the Azure Machine Learning service.
+* You can't delete Microsoft-managed resources that you use for customer-managed keys without also deleting your workspace.
+* You can't encrypt the compute cluster's OS disk by using your customer-managed keys. You must use Microsoft-managed keys.
+
+> [!WARNING]
+> Don't delete the resource group that contains the Azure Cosmos DB instance, or any of the resources that are automatically created in this group. If you need to delete the resource group or Microsoft-managed services in it, you must delete the Azure Machine Learning workspace that uses it. The resource group's resources are deleted when you delete the associated workspace.
-## How and what workspace metadata is stored
+## Storage of workspace metadata
-When you bring your own encryption key, service metadata is stored on dedicated resources in your Azure subscription. Microsoft creates a separate resource group in your subscription for this named *"azureml-rg-workspacename_GUID"*. Resource in this managed resource group can only be modified by Microsoft.
+When you bring your own encryption key, service metadata is stored on dedicated resources in your Azure subscription. Microsoft creates a separate resource group in your subscription for this purpose: *azureml-rg-workspacename_GUID*. Only Microsoft can modify the resources in this managed resource group.
-The following resources are created and store metadata for your workspace:
+Microsoft creates the following resources to store metadata for your workspace:
| Service | Usage | Example data |
-| -- | -- | -- |
-| Azure Cosmos DB | Stores job history data, compute metadata, asset metadata | Job name, status, sequence number and status; Compute cluster name, number of cores, number of nodes; Datastore names and tags, descriptions on assets like models; data label names |
-| Azure AI Search | Stores indices that are used to help query your machine learning content. | These indices are built on top of the data stored in CosmosDB. |
-| Azure Storage Account | Stores metadata related to Azure Machine Learning pipelines data. | Designer pipeline names, pipeline layout, execution properties. |
+| -- | -- | -- |
+| Azure Cosmos DB | Stores job history data, compute metadata, and asset metadata. | Data can include job name, status, sequence number, and status; compute cluster name, number of cores, and number of nodes; datastore names and tags, and descriptions on assets like models; and data label names. |
+| Azure AI Search | Stores indexes that help with querying your machine learning content. | These indexes are built on top of the data stored in Azure Cosmos DB. |
+| Azure Storage | Stores metadata related to Azure Machine Learning pipeline data. | Data can include designer pipeline names, pipeline layout, and execution properties. |
-From a data lifecycle management point of view, data in the above resources are created and deleted as you create and delete their corresponding objects in Azure Machine Learning.
+From the perspective of data lifecycle management, data in the preceding resources is created and deleted as you create and delete corresponding objects in Azure Machine Learning.
-Your Azure Machine Learning workspace reads and writes data using its managed identity. This identity is granted access to the resources using a role assignment (Azure role-based access control) on the data resources. The encryption key you provide is used to encrypt data that is stored on Microsoft-managed resources. It's also used to create indices for Azure AI Search, which are created at runtime.
+Your Azure Machine Learning workspace reads and writes data by using its managed identity. This identity is granted access to the resources through a role assignment (Azure role-based access control) on the data resources. The encryption key that you provide is used to encrypt data that's stored on Microsoft-managed resources. It's also used to create indexes for Azure AI Search at runtime.
-Extra networking controls are configured when you create a private link endpoint on your workspace to allow for inbound connectivity. In this configuration, a private link endpoint connection will be created to the CosmosDB instance and network access will be restricted to only trusted Microsoft services.
+Extra networking controls are configured when you create a private link endpoint on your workspace to allow for inbound connectivity. This configuration includes the creation of a private link endpoint connection to the Azure Cosmos DB instance. Network access is restricted to only trusted Microsoft services.
## Customer-managed keys
-When you __don't use a customer-managed key__, Microsoft creates and manages these resources in a Microsoft owned Azure subscription and uses a Microsoft-managed key to encrypt the data.
+When you *don't* use a customer-managed key, Microsoft creates and manages resources in a Microsoft-owned Azure subscription and uses a Microsoft-managed key to encrypt the data.
-When you __use a customer-managed key__, these resources are _in your Azure subscription_ and encrypted with your key. While they exist in your subscription, these resources are __managed by Microsoft__. They're automatically created and configured when you create your Azure Machine Learning workspace.
-
-> [!IMPORTANT]
-> When using a customer-managed key, the costs for your subscription will be higher because these resources are in your subscription. To estimate the cost, use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+When you use a customer-managed key, the resources are in your Azure subscription and encrypted with your key. While these resources exist in your subscription, Microsoft manages them. They're automatically created and configured when you create your Azure Machine Learning workspace.
-These Microsoft-managed resources are located in a new Azure resource group is created in your subscription. This group is in addition to the resource group for your workspace. This resource group contains the Microsoft-managed resources that your key is used with. The resource group will be named using the formula of `<Azure Machine Learning workspace resource group name><GUID>`.
+These Microsoft-managed resources are located in a new Azure resource group that's created in your subscription. This resource group is separate from the resource group for your workspace. It contains the Microsoft-managed resources that your key is used with. The formula for naming the resource group is: `<Azure Machine Learning workspace resource group name><GUID>`.
> [!TIP]
-> * The [__Request Units__](../cosmos-db/request-units.md) for the Azure Cosmos DB automatically scale as needed.
-> * If your Azure Machine Learning workspace uses a private endpoint, this resource group will also contain a Microsoft-managed Azure Virtual Network. This VNet is used to secure communications between the managed services and the workspace. You __cannot provide your own VNet for use with the Microsoft-managed resources__. You also __cannot modify the virtual network__. For example, you cannot change the IP address range that it uses.
+> The [Request Units](../cosmos-db/request-units.md) for Azure Cosmos DB automatically scale as needed.
-> [!IMPORTANT]
-> If your subscription does not have enough quota for these services, a failure will occur.
+If your Azure Machine Learning workspace uses a private endpoint, this resource group also contains a Microsoft-managed Azure virtual network. This virtual network helps secure communication between the managed services and the workspace. You *can't provide your own virtual network* for use with the Microsoft-managed resources. You also *can't modify the virtual network*. For example, you can't change the IP address range that it uses.
-> [!WARNING]
-> __Don't delete the resource group__ that contains this Azure Cosmos DB instance, or any of the resources automatically created in this group. If you need to delete the resource group or Microsoft-managed services in it, you must delete the Azure Machine Learning workspace that uses it. The resource group resources are deleted when the associated workspace is deleted.
+> [!IMPORTANT]
+> If your subscription doesn't have enough quota for these services, a failure will occur.
+>
+> When you use a customer-managed key, the costs for your subscription are higher because these resources are in your subscription. To estimate the cost, use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
-## How compute data is stored
+## Storage of compute data
-Azure Machine Learning uses compute resources to train and deploy machine learning models. The following table describes the compute options and how data is encrypted by each one:
+Azure Machine Learning uses compute resources to train and deploy machine learning models. The following table describes the compute options and how each one encrypts data:
:::moniker range="azureml-api-1" | Compute | Encryption | | -- | -- |
-| Azure Container Instance | Data is encrypted by a Microsoft-managed key or a customer-managed key.</br>For more information, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md). |
-| Azure Kubernetes Service | Data is encrypted by a Microsoft-managed key or a customer-managed key.</br>For more information, see [Bring your own keys with Azure disks in Azure Kubernetes Services](../aks/azure-disk-customer-managed-keys.md). |
-| Azure Machine Learning compute instance | Local scratch disk is encrypted if the `hbi_workspace` flag is enabled for the workspace. |
-| Azure Machine Learning compute cluster | OS disk encrypted in Azure Storage with Microsoft-managed keys. Temporary disk is encrypted if the `hbi_workspace` flag is enabled for the workspace. |
+| Azure Container Instances | Data is encrypted with a Microsoft-managed key or a customer-managed key.</br>For more information, see [Encrypt deployment data](../container-instances/container-instances-encrypt-data.md). |
+| Azure Kubernetes Service | Data is encrypted with a Microsoft-managed key or a customer-managed key.</br>For more information, see [Bring your own keys with Azure disks in Azure Kubernetes Service](../aks/azure-disk-customer-managed-keys.md). |
+| Azure Machine Learning compute instance | The local scratch disk is encrypted if you enable the `hbi_workspace` flag for the workspace. |
+| Azure Machine Learning compute cluster | The OS disk is encrypted in Azure Storage with Microsoft-managed keys. The temporary disk is encrypted if you enable the `hbi_workspace` flag for the workspace. |
:::moniker-end :::moniker range="azureml-api-2" | Compute | Encryption | | -- | -- |
-| Azure Kubernetes Service | Data is encrypted by a Microsoft-managed key or a customer-managed key.</br>For more information, see [Bring your own keys with Azure disks in Azure Kubernetes Services](../aks/azure-disk-customer-managed-keys.md). |
-| Azure Machine Learning compute instance | Local scratch disk is encrypted if the `hbi_workspace` flag is enabled for the workspace. |
-| Azure Machine Learning compute cluster | OS disk encrypted in Azure Storage with Microsoft-managed keys. Temporary disk is encrypted if the `hbi_workspace` flag is enabled for the workspace. |
+| Azure Kubernetes Service | Data is encrypted with a Microsoft-managed key or a customer-managed key.</br>For more information, see [Bring your own keys with Azure disks in Azure Kubernetes Service](../aks/azure-disk-customer-managed-keys.md). |
+| Azure Machine Learning compute instance | The local scratch disk is encrypted if you enable the `hbi_workspace` flag for the workspace. |
+| Azure Machine Learning compute cluster | The OS disk is encrypted in Azure Storage with Microsoft-managed keys. The temporary disk is encrypted if you enable the `hbi_workspace` flag for the workspace. |
:::moniker-end
-**Compute cluster**
+### Compute cluster
+
+Compute clusters have local OS disk storage and can mount data from storage accounts in your subscription during a job. When you're mounting data from your own storage account in a job, you can enable customer-managed keys on those storage accounts for encryption.
-Compute clusters have local OS disk storage and can mount data from storage accounts in your subscription during the job.
+The OS disk for each compute node that's stored in Azure Storage is always encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts, and not with customer-managed keys. This compute target is ephemeral, so data that's stored on the OS disk is deleted after the cluster scales down. Clusters typically scale down when no jobs are queued, autoscaling is on, and the minimum node count is set to zero. The underlying virtual machine is deprovisioned, and the OS disk is deleted.
-When mounting data from your own storage account in a job, you can enable customer-managed keys on those storage accounts for encryption.
+Azure Disk Encryption isn't supported for the OS disk. Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If you create the workspace with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short lived (only during your job), and encryption support is limited to system-managed keys only.
-The OS disk for each compute node stored in Azure Storage is always encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts, and not using customer-managed keys. This compute target is ephemeral, and hence data that is stored on the OS disk is deleted once the cluster scales down. Clusters are typically scaled down when no jobs are queued, autoscaling is on and the minimum node count is set to zero. The underlying virtual machine is deprovisioned, and the OS disk is deleted.
+### Compute instance
-Azure Disk Encryption isn't supported for the OS disk. Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short-lived (only during your job) and encryption support is limited to system-managed keys only.
+The OS disk for a compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. If you create the workspace with the `hbi_workspace` parameter set to `TRUE`, the local temporary disk on the compute instance is encrypted with Microsoft-managed keys. Customer-managed key encryption is not supported for OS and temporary disks.
-**Compute instance**
-The OS disk for compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the local temporary disk on compute instance is encrypted with Microsoft managed keys. Customer managed key encryption isn't supported for OS and temp disk.
+### hbi_workspace flag
-### HBI_workspace flag
+You can set the `hbi_workspace` flag only when you create a workspace. You can't change it for an existing workspace.
-* The `hbi_workspace` flag can only be set when a workspace is created. It can't be changed for an existing workspace.
-* When this flag is set to True, it may increase the difficulty of troubleshooting issues because less telemetry data is sent to Microsoft. There's less visibility into success rates or problem types. Microsoft may not be able to react as proactively when this flag is True.
+When you set this flag to `TRUE`, it might increase the difficulty of troubleshooting problems because less telemetry data is sent to Microsoft. There's less visibility into success rates or problem types. Microsoft might not be able to react as proactively when this flag is `TRUE`.
-To enable the `hbi_workspace` flag when creating an Azure Machine Learning workspace, follow the steps in one of the following articles:
+To enable the `hbi_workspace` flag when you're creating an Azure Machine Learning workspace, follow the steps in one of the following articles:
-* [How to create and manage a workspace](how-to-manage-workspace.md).
-* [How to create and manage a workspace using the Azure CLI](how-to-manage-workspace-cli.md).
-* [How to create a workspace using Hashicorp Terraform](how-to-manage-workspace-terraform.md).
-* [How to create a workspace using Azure Resource Manager templates](how-to-create-workspace-template.md).
+* [Create and manage a workspace by using the Azure portal or the Python SDK](how-to-manage-workspace.md)
+* [Create and manage a workspace by using the Azure CLI](how-to-manage-workspace-cli.md)
+* [Create a workspace by using HashiCorp Terraform](how-to-manage-workspace-terraform.md)
+* [Create a workspace by using Azure Resource Manager templates](how-to-create-workspace-template.md)
-## Next Steps
+## Next steps
-* [How to configure customer-managed keys with Azure Machine Learning](how-to-setup-customer-managed-keys.md).
+* [Configure customer-managed keys with Azure Machine Learning](how-to-setup-customer-managed-keys.md)
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
Title: Data encryption with Azure Machine Learning
-description: 'Learn how Azure Machine Learning computes and data stores provides data encryption at rest and in transit.'
+description: 'Learn how Azure Machine Learning computes and datastores provide data encryption at rest and in transit.'
monikerRange: 'azureml-api-2 || azureml-api-1'
# Data encryption with Azure Machine Learning
-Azure Machine Learning relies on a various of Azure data storage services and compute resources when training models and performing inferences. In this article, learn about the data encryption for each service both at rest and in transit.
+Azure Machine Learning relies on various Azure data storage services and compute resources when you're training models and performing inferences. In this article, learn about the data encryption for each service both at rest and in transit.
-> [!IMPORTANT]
-> For production grade encryption during __training__, Microsoft recommends using Azure Machine Learning compute cluster. For production grade encryption during __inference__, Microsoft recommends using Azure Kubernetes Service.
->
-> Azure Machine Learning compute instance is a dev/test environment. When using it, we recommend that you store your files, such as notebooks and scripts, in a file share. Your data should be stored in a datastore.
+For production-grade encryption during training, we recommend that you use an Azure Machine Learning compute cluster. For production-grade encryption during inference, we recommend that you use Azure Kubernetes Service (AKS).
+
+An Azure Machine Learning compute instance is a dev/test environment. When you use it, we recommend that you store your files, such as notebooks and scripts, in a file share. Store your data in a datastore.
## Encryption at rest
-Azure Machine Learning end to end projects integrates with services like Azure Blob Storage, Azure Cosmos DB, Azure SQL Database etc. The article describes encryption method of such services.
+Azure Machine Learning end-to-end projects integrate with services like Azure Blob Storage, Azure Cosmos DB, and Azure SQL Database. This article describes encryption methods for such services.
-### Azure Blob storage
+### Azure Blob Storage
-Azure Machine Learning stores snapshots, output, and logs in the Azure Blob storage account (default storage account) that's tied to the Azure Machine Learning workspace and your subscription. All the data stored in Azure Blob storage is encrypted at rest with Microsoft-managed keys.
+Azure Machine Learning stores snapshots, output, and logs in the Azure Blob Storage account (default storage account) that's tied to the Azure Machine Learning workspace and your subscription. All the data stored in Azure Blob Storage is encrypted at rest with Microsoft-managed keys.
-For information on how to use your own keys for data stored in Azure Blob storage, see [Azure Storage encryption with customer-managed keys in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md).
+For information on how to use your own keys for data stored in Azure Blob Storage, see [Azure Storage encryption with customer-managed keys in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md).
-Training data is typically also stored in Azure Blob storage so that it's accessible to training compute targets. This storage isn't managed by Azure Machine Learning but mounted to compute targets as a remote file system.
+Training data is typically also stored in Azure Blob Storage so that training compute targets can access it. Azure Machine Learning doesn't manage this storage. This storage is mounted to compute targets as a remote file system.
-If you need to __rotate or revoke__ your key, you can do so at any time. When rotating a key, the storage account will start using the new key (latest version) to encrypt data at rest. When revoking (disabling) a key, the storage account takes care of failing requests. It usually takes an hour for the rotation or revocation to be effective.
+If you need to _rotate or revoke_ your key, you can do so at any time. When you rotate a key, the storage account starts using the new key (latest version) to encrypt data at rest. When you revoke (disable) a key, the storage account takes care of failing requests. It usually takes an hour for the rotation or revocation to be effective.
-For information on regenerating the access keys, see [Regenerate storage access keys](how-to-change-storage-access-key.md).
+For information on regenerating the access keys, see [Regenerate storage account access keys](how-to-change-storage-access-key.md).
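If the regenerated account is one that your workspace depends on, you can then refresh the credentials that Azure Machine Learning caches. The following is a minimal sketch, assuming the Python SDK v1 and a local workspace configuration file:

```python
from azureml.core import Workspace

# Minimal sketch (SDK v1): after regenerating the storage account keys,
# refresh the copies of those keys that the workspace caches for its
# dependent resources. Assumes a local config.json for an existing workspace.
ws = Workspace.from_config()
ws.sync_keys()
```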
### Azure Data Lake Storage

[!INCLUDE [Note](../../includes/data-lake-storage-gen1-rename-note.md)]
-**ADLS Gen2**
-Azure Data Lake Storage Gen 2 is built on top of Azure Blob Storage and is designed for enterprise big data analytics. ADLS Gen2 is used as a datastore for Azure Machine Learning. Same as Azure Blob Storage the data at rest is encrypted with Microsoft-managed keys.
+Azure Data Lake Storage Gen2 is built on top of Azure Blob Storage and is designed for big data analytics in enterprises. Data Lake Storage Gen2 is used as a datastore for Azure Machine Learning. Like Azure Blob Storage, the data at rest is encrypted with Microsoft-managed keys.
For information on how to use your own keys for data stored in Azure Data Lake Storage, see [Azure Storage encryption with customer-managed keys in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md).
-### Azure Relational Databases
+### Azure relational databases
-Azure Machine Learning services support data from different data sources such as Azure SQL Database, Azure PostgreSQL and Azure MYSQL.
+The Azure Machine Learning service supports data from the following data sources.
-**Azure SQL Database**
-Transparent Data Encryption protects Azure SQL Database against threat of malicious offline activity by encrypting data at rest. By default, TDE is enabled for all newly deployed SQL Databases with Microsoft managed keys.
+#### Azure SQL Database
-For information on how to use customer managed keys for transparent data encryption, see [Azure SQL Database Transparent Data Encryption](/azure/azure-sql/database/transparent-data-encryption-tde-overview) .
+Transparent data encryption helps protect Azure SQL Database against the threat of malicious offline activity by encrypting data at rest. By default, transparent data encryption is enabled for all newly deployed SQL databases that use Microsoft-managed keys.
-**Azure Database for PostgreSQL**
-Azure PostgreSQL uses Azure Storage encryption to encrypt data at rest by default using Microsoft managed keys. It is similar to Transparent Data Encryption (TDE) in other databases such as SQL Server.
+For information on how to use customer-managed keys for transparent data encryption, see [Azure SQL Database transparent data encryption](/azure/azure-sql/database/transparent-data-encryption-tde-overview).
-For information on how to use customer managed keys for transparent data encryption, see [Azure Database for PostgreSQL Single server data encryption with a customer-managed key](../postgresql/single-server/concepts-data-encryption-postgresql.md).
+#### Azure Database for PostgreSQL
-**Azure Database for MySQL**
-Azure Database for MySQL is a relational database service in the Microsoft cloud based on the MySQL Community Edition database engine. The Azure Database for MySQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest.
+By default, Azure Database for PostgreSQL uses Azure Storage encryption to encrypt data at rest by using Microsoft-managed keys. It's similar to transparent data encryption in other databases, such as SQL Server.
-To encrypt data using customer managed keys, see [Azure Database for MySQL data encryption with a customer-managed key](../mysql/single-server/concepts-data-encryption-mysql.md) .
+For information on how to use customer-managed keys for transparent data encryption, see [Azure Database for PostgreSQL Single Server data encryption with a customer-managed key](../postgresql/single-server/concepts-data-encryption-postgresql.md).
+#### Azure Database for MySQL
+
+Azure Database for MySQL is a relational database service in the Microsoft Cloud. It's based on the MySQL Community Edition database engine. The Azure Database for MySQL service uses the FIPS 140-2 validated cryptographic module for Azure Storage encryption of data at rest.
+
+To encrypt data by using customer-managed keys, see [Azure Database for MySQL data encryption with a customer-managed key](../mysql/single-server/concepts-data-encryption-mysql.md).
### Azure Cosmos DB
-Azure Machine Learning stores metadata in an Azure Cosmos DB instance. This instance is associated with a Microsoft subscription managed by Azure Machine Learning. All the data stored in Azure Cosmos DB is encrypted at rest with Microsoft-managed keys.
+Azure Machine Learning stores metadata in an Azure Cosmos DB instance. This instance is associated with a Microsoft subscription that Azure Machine Learning manages. All the data stored in Azure Cosmos DB is encrypted at rest with Microsoft-managed keys.
-When using your own (customer-managed) keys to encrypt the Azure Cosmos DB instance, a Microsoft managed Azure Cosmos DB instance is created in your subscription. This instance is created in a Microsoft-managed resource group, which is different than the resource group for your workspace. For more information, see [Customer-managed keys](concept-customer-managed-keys.md).
+When you're using your own (customer-managed) keys to encrypt the Azure Cosmos DB instance, a Microsoft-managed Azure Cosmos DB instance is created in your subscription. This instance is created in a Microsoft-managed resource group, which is different from the resource group for your workspace. For more information, see [Customer-managed keys for Azure Machine Learning](concept-customer-managed-keys.md).
### Azure Container Registry
-All container images in your registry (Azure Container Registry) are encrypted at rest. Azure automatically encrypts an image before storing it and decrypts it when Azure Machine Learning pulls the image.
+All container images in your container registry (an instance of Azure Container Registry) are encrypted at rest. Azure automatically encrypts an image before storing it and decrypts it when Azure Machine Learning pulls the image.
-To use customer-managed keys to encrypt your Azure Container Registry, you need to create your own ACR and attach it while provisioning the workspace. You can encrypt the default instance that gets created at the time of workspace provisioning.
+To use customer-managed keys to encrypt your container registry, you need to create and attach the container registry while you're provisioning the workspace. You can encrypt the default instance that's created at the time of workspace provisioning.
> [!IMPORTANT]
-> Azure Machine Learning requires the admin account be enabled on your Azure Container Registry. By default, this setting is disabled when you create a container registry. For information on enabling the admin account, see [Admin account](../container-registry/container-registry-authentication.md#admin-account).
+> Azure Machine Learning requires you to enable the admin account on your container registry. By default, this setting is disabled when you create a container registry. For information on enabling the admin account, see [Admin account](../container-registry/container-registry-authentication.md#admin-account) in the Azure Container Registry authentication article.
>
-> Once an Azure Container Registry has been created for a workspace, do not delete it. Doing so will break your Azure Machine Learning workspace.
+> After you create a container registry for a workspace, don't delete it. Doing so will break your Azure Machine Learning workspace.
-For an example of creating a workspace using an existing Azure Container Registry, see the following articles:
+For examples of creating a workspace by using an existing container registry, see the following articles:
-* [Create a workspace for Azure Machine Learning with Azure CLI](how-to-manage-workspace-cli.md).
-* [Create a workspace with Python SDK](how-to-manage-workspace.md?tabs=python#create-a-workspace).
+* [Create a workspace for Azure Machine Learning by using the Azure CLI](how-to-manage-workspace-cli.md)
+* [Create a workspace with the Python SDK](how-to-manage-workspace.md?tabs=python#create-a-workspace)
* [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](how-to-create-workspace-template.md)

:::moniker range="azureml-api-1"
-### Azure Container Instance
+
+### Azure Container Instances
> [!IMPORTANT]
-> Deployments to ACI rely on the Azure Machine Learning Python SDK and CLI v1.
+> Deployments to Azure Container Instances rely on the Azure Machine Learning Python SDK and CLI v1.
-You may encrypt a deployed Azure Container Instance (ACI) resource using customer-managed keys. The customer-managed key used for ACI can be stored in the Azure Key Vault for your workspace. For information on generating a key, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md#generate-a-new-key).
+You can encrypt a deployed Azure Container Instances resource by using customer-managed keys. The customer-managed keys that you use for Container Instances can be stored in the key vault for your workspace.
[!INCLUDE [sdk v1](includes/machine-learning-sdk-v1.md)]
-To use the key when deploying a model to Azure Container Instance, create a new deployment configuration using `AciWebservice.deploy_configuration()`. Provide the key information using the following parameters:
+To use the key when you're deploying a model to Container Instances, create a new deployment configuration by using `AciWebservice.deploy_configuration()`. Provide the key information by using the following parameters:
* `cmk_vault_base_url`: The URL of the key vault that contains the key.
* `cmk_key_name`: The name of the key.
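For illustration, a minimal sketch of such a deployment configuration follows. The key vault URL, key name, and key version are placeholders; `cmk_key_version` is the companion parameter listed in the `AciWebservice` reference:

```python
from azureml.core.webservice import AciWebservice

# Minimal sketch (SDK v1): an ACI deployment configuration that encrypts the
# deployment with a customer-managed key stored in your key vault.
# The vault URL, key name, and key version values are placeholders.
aci_config = AciWebservice.deploy_configuration(
    cpu_cores=1,
    memory_gb=1,
    cmk_vault_base_url="https://<your-key-vault>.vault.azure.net/",
    cmk_key_name="<your-key-name>",
    cmk_key_version="<your-key-version>",
)
```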
To use the key when deploying a model to Azure Container Instance, create a new
For more information on creating and using a deployment configuration, see the following articles:
-* [AciWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none-) reference
+* [AciWebservice class reference](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none-)
+* [Deploy machine learning models to Azure](./v1/how-to-deploy-and-where.md)
-* [Where and how to deploy](./v1/how-to-deploy-and-where.md)
-
-For more information on using a customer-managed key with ACI, see [Encrypt deployment data](../container-instances/container-instances-encrypt-data.md).
+For more information on using a customer-managed key with Container Instances, see [Encrypt deployment data](../container-instances/container-instances-encrypt-data.md).
:::moniker-end

### Azure Kubernetes Service
-You may encrypt a deployed Azure Kubernetes Service resource using customer-managed keys at any time. For more information, see [Bring your own keys with Azure Kubernetes Service](../aks/azure-disk-customer-managed-keys.md).
+You can encrypt a deployed Azure Kubernetes Service resource by using customer-managed keys at any time. For more information, see [Bring your own keys with Azure Kubernetes Service](../aks/azure-disk-customer-managed-keys.md).
-This process allows you to encrypt both the Data and the OS Disk of the deployed virtual machines in the Kubernetes cluster.
+This process allows you to encrypt both the data and the OS disk of the deployed virtual machines in the Kubernetes cluster.
> [!IMPORTANT]
-> This process only works with AKS K8s version 1.17 or higher. Azure Machine Learning added support for AKS 1.17 on Jan 13, 2020.
+> This process works with only AKS version 1.17 or later. Azure Machine Learning added support for AKS 1.17 on Jan 13, 2020.
-### Machine Learning Compute
+### Machine Learning compute
-**Compute cluster**
-The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. This compute target is ephemeral, and clusters are typically scaled down when no jobs are queued. The underlying virtual machine is de-provisioned, and the OS disk is deleted. Azure Disk Encryption is not enabled for workspaces by default. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, then the OS disk is encrypted.
+#### Compute cluster
-Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short-lived (only during your job,) and encryption support is limited to system-managed keys only.
+The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. This compute target is ephemeral, and clusters are typically scaled down when no jobs are queued. The underlying virtual machine is deprovisioned, and the OS disk is deleted.
-Managed online endpoint and batch endpoint use machine learning compute in the backend, and follows the same encryption mechanism.
+Azure Disk Encryption is not enabled for workspaces by default. If you create the workspace with the `hbi_workspace` parameter set to `TRUE`, the OS disk is encrypted.
-**Compute instance**
-The OS disk for compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the local OS and temporary disks on compute instance are encrypted with Microsoft managed keys. Customer managed key encryption is not supported for OS and temporary disks.
+Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If you create the workspace with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short lived (only during your job), and encryption support is limited to system-managed keys only.
-For more information, see [Customer-managed keys](concept-customer-managed-keys.md).
+Managed online endpoints and batch endpoints use Azure Machine Learning compute in the back end, and they follow the same encryption mechanism.
-### Azure Data Factory
+#### Compute instance
+
+The OS disk for a compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. If you create the workspace with the `hbi_workspace` parameter set to `TRUE`, the local OS and temporary disks on a compute instance are encrypted with Microsoft-managed keys. Customer-managed key encryption is not supported for OS and temporary disks.
-The Azure Data Factory pipeline is used to ingest data for use with Azure Machine Learning. Azure Data Factory encrypts data at rest, including entity definitions and any data cached while runs are in progress. By default, data is encrypted with a randomly generated Microsoft-managed key that is uniquely assigned to your data factory.
+For more information, see [Customer-managed keys for Azure Machine Learning](concept-customer-managed-keys.md).
-For information on how to use customer managed keys for encryption use [Encrypt Azure Data Factory with customer managed keys](../data-factory/enable-customer-managed-key.md) .
+### Azure Data Factory
+
+The Azure Data Factory pipeline ingests data for use with Azure Machine Learning. Azure Data Factory encrypts data at rest, including entity definitions and any data that's cached while runs are in progress. By default, data is encrypted with a randomly generated Microsoft-managed key that's uniquely assigned to your data factory.
+For information on how to use customer-managed keys for encryption, see [Encrypt Azure Data Factory with customer-managed keys](../data-factory/enable-customer-managed-key.md).
### Azure Databricks
-Azure Databricks can be used in Azure Machine Learning pipelines. By default, the Databricks File System (DBFS) used by Azure Databricks is encrypted using a Microsoft-managed key. To configure Azure Databricks to use customer-managed keys, see [Configure customer-managed keys on default (root) DBFS](/azure/databricks/security/customer-managed-keys-dbfs).
+You can use Azure Databricks in Azure Machine Learning pipelines. By default, the Databricks File System (DBFS) that Azure Databricks uses is encrypted through a Microsoft-managed key. To configure Azure Databricks to use customer-managed keys, see [Configure customer-managed keys on default (root) DBFS](/azure/databricks/security/customer-managed-keys-dbfs).
### Microsoft-generated data
-When using services such as Automated Machine Learning, Microsoft may generate a transient, pre-processed data for training multiple models. This data is stored in a datastore in your workspace, which allows you to enforce access controls and encryption appropriately.
+When you use services like automated machine learning, Microsoft might generate transient, pre-processed data for training multiple models. This data is stored in a datastore in your workspace, so you can enforce access controls and encryption appropriately.
-You may also want to encrypt [diagnostic information logged from your deployed endpoint](how-to-enable-app-insights.md) into your Azure Application Insights instance.
+You might also want to encrypt [diagnostic information that's logged from your deployed endpoint](how-to-enable-app-insights.md) into Application Insights.
## Encryption in transit
-Azure Machine Learning uses TLS to secure internal communication between various Azure Machine Learning microservices. All Azure Storage access also occurs over a secure channel.
+Azure Machine Learning uses Transport Layer Security (TLS) to help secure internal communication between various Azure Machine Learning microservices. All Azure Storage access also occurs over a secure channel.
:::moniker range="azureml-api-1"
-To secure external calls made to the scoring endpoint, Azure Machine Learning uses TLS. For more information, see [Use TLS to secure a web service through Azure Machine Learning](./v1/how-to-secure-web-service.md).
+To help secure external calls made to the scoring endpoint, Azure Machine Learning uses TLS. For more information, see [Use TLS to secure a web service through Azure Machine Learning](./v1/how-to-secure-web-service.md).
:::moniker-end

## Data collection and handling
-### Microsoft collected data
-
-Microsoft may collect non-user identifying information like resource names (for example the dataset name, or the machine learning experiment name), or job environment variables for diagnostic purposes. All such data is stored using Microsoft-managed keys in storage hosted in Microsoft owned subscriptions and follows [Microsoft's standard Privacy policy and data handling standards](https://privacy.microsoft.com/privacystatement). This data is kept within the same region as your workspace.
+For diagnostic purposes, Microsoft might collect information that doesn't identify users. For example, Microsoft might collect resource names (for example, the dataset name or the machine learning experiment name) or job environment variables. All such data is stored through Microsoft-managed keys in storage hosted in Microsoft-owned subscriptions. The storage follows [Microsoft's standard privacy policy and data-handling standards](https://privacy.microsoft.com/privacystatement). This data stays within the same region as your workspace.
-Microsoft also recommends not storing sensitive information (such as account key secrets) in environment variables. Environment variables are logged, encrypted, and stored by us. Similarly when naming your jobs, avoid including sensitive information such as user names or secret project names. This information may appear in telemetry logs accessible to Microsoft Support engineers.
+We recommend not storing sensitive information (such as account key secrets) in environment variables. Microsoft logs, encrypts, and stores environment variables. Similarly, when you name your jobs, avoid including sensitive information such as user names or secret project names. This information might appear in telemetry logs that Microsoft support engineers can access.
-You may opt out from diagnostic data being collected by setting the `hbi_workspace` parameter to `TRUE` while provisioning the workspace. This functionality is supported when using the Azure Machine Learning Python SDK, the Azure CLI, REST APIs, or Azure Resource Manager templates.
+You can opt out from the collection of diagnostic data by setting the `hbi_workspace` parameter to `TRUE` while provisioning the workspace. This functionality is supported when you use the Azure Machine Learning Python SDK, the Azure CLI, REST APIs, or Azure Resource Manager templates.
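For example, the following is a minimal sketch of provisioning such a workspace with the Python SDK v1; the names and IDs are placeholders:

```python
from azureml.core import Workspace

# Minimal sketch (SDK v1): create a workspace with hbi_workspace=True to opt
# out of diagnostic data collection and enable the additional disk encryption
# described earlier. The names and IDs below are placeholders.
ws = Workspace.create(
    name="my-workspace",
    subscription_id="<subscription-id>",
    resource_group="my-resource-group",
    location="eastus",
    hbi_workspace=True,
)
```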
-## Using Azure Key Vault
+## Credential storage in Azure Key Vault
-Azure Machine Learning uses the Azure Key Vault instance associated with the workspace to store credentials of various kinds:
+Azure Machine Learning uses the Azure Key Vault instance that's associated with the workspace to store credentials of various kinds:
-* The associated storage account connection string
-* Passwords to Azure Container Repository instances
-* Connection strings to data stores
+* The associated connection string for the storage account
+* Passwords to Azure Container Registry instances
+* Connection strings to datastores
-SSH passwords and keys to compute targets like Azure HDInsight and VMs are stored in a separate key vault that's associated with the Microsoft subscription. Azure Machine Learning doesn't store any passwords or keys provided by users. Instead, it generates, authorizes, and stores its own SSH keys to connect to VMs and HDInsight to run the experiments.
+Secure Shell (SSH) passwords and keys to compute targets like Azure HDInsight and virtual machines are stored in a separate key vault that's associated with the Microsoft subscription. Azure Machine Learning doesn't store any passwords or keys that users provide. Instead, it generates, authorizes, and stores its own SSH keys to connect to virtual machines and HDInsight to run the experiments.
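You can also use the workspace's default key vault to keep your own secrets out of code, environment variables, and job names. The following is a minimal sketch, assuming the Python SDK v1; the secret name and value are placeholders:

```python
from azureml.core import Workspace

# Minimal sketch (SDK v1): store and retrieve your own secrets in the
# workspace's default key vault instead of embedding them in code or
# environment variables. The secret name and value are placeholders.
ws = Workspace.from_config()
keyvault = ws.get_default_keyvault()
keyvault.set_secret(name="storage-account-key", value="<secret-value>")
retrieved_value = keyvault.get_secret(name="storage-account-key")
```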
Each workspace has an associated system-assigned managed identity that has the same name as the workspace. This managed identity has access to all keys, secrets, and certificates in the key vault.

## Next steps

:::moniker range="azureml-api-2"
-* [Use datastores](how-to-datastore.md)
+* [Create datastores](how-to-datastore.md)
* [Create data assets](how-to-create-data-assets.md)
* [Access data in a training job](how-to-read-write-data-v2.md)
+* [Use customer-managed keys](concept-customer-managed-keys.md)
:::moniker-end

:::moniker range="azureml-api-1"
-* [Connect to Azure storage](./v1/how-to-access-data.md)
+* [Connect to Azure storage services](./v1/how-to-access-data.md)
* [Get data from a datastore](./v1/how-to-create-register-datasets.md)
* [Connect to data](v1/how-to-connect-data-ui.md)
* [Train with datasets](v1/how-to-train-with-datasets.md)
+* [Use customer-managed keys](concept-customer-managed-keys.md)
:::moniker-end
-* [Customer-managed keys](concept-customer-managed-keys.md)
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
Last updated 09/13/2023
# Enterprise security and governance for Azure Machine Learning
-In this article, you learn about security and governance features available for Azure Machine Learning. These features are useful for administrators, DevOps, and MLOps who want to create a secure configuration that is compliant with your companies policies. With Azure Machine Learning and the Azure platform, you can:
+In this article, you learn about security and governance features that are available for Azure Machine Learning. These features are useful for administrators, DevOps engineers, and MLOps engineers who want to create a secure configuration that complies with an organization's policies.
-* Restrict access to resources and operations by user account or groups
-* Restrict incoming and outgoing network communications
-* Encrypt data in transit and at rest
-* Scan for vulnerabilities
-* Apply and audit configuration policies
+With Azure Machine Learning and the Azure platform, you can:
-> [!IMPORTANT]
-> Items marked (preview) in this article are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+* Restrict access to resources and operations by user account or groups.
+* Restrict incoming and outgoing network communications.
+* Encrypt data in transit and at rest.
+* Scan for vulnerabilities.
+* Apply and audit configuration policies.
## Restrict access to resources and operations
-[Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) is the identity service provider for Azure Machine Learning. It allows you to create and manage the security objects (user, group, service principal, and managed identity) that are used to _authenticate_ to Azure resources. Multi-factor authentication is supported if Microsoft Entra ID is configured to use it.
+[Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) is the identity service provider for Azure Machine Learning. You can use it to create and manage the security objects (user, group, service principal, and managed identity) that are used to authenticate to Azure resources. Multifactor authentication (MFA) is supported if Microsoft Entra ID is configured to use it.
-Here's the authentication process for Azure Machine Learning using multi-factor authentication in Microsoft Entra ID:
+Here's the authentication process for Azure Machine Learning through MFA in Microsoft Entra ID:
1. The client signs in to Microsoft Entra ID and gets an Azure Resource Manager token.
-1. The client presents the token to Azure Resource Manager and to all Azure Machine Learning.
-1. Azure Machine Learning provides a Machine Learning service token to the user compute target (for example, Azure Machine Learning compute cluster or [serverless compute](./how-to-use-serverless-compute.md)). This token is used by the user compute target to call back into the Machine Learning service after the job is complete. The scope is limited to the workspace.
+1. The client presents the token to Azure Resource Manager and to Azure Machine Learning.
+1. Azure Machine Learning provides a Machine Learning service token to the user compute target (for example, Machine Learning compute cluster or [serverless compute](./how-to-use-serverless-compute.md)). The user compute target uses this token to call back into the Machine Learning service after the job is complete. The scope is limited to the workspace.
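As an illustration of the first step, a client can obtain an Azure Resource Manager token through the `azure-identity` library. This is a minimal sketch and assumes an existing Microsoft Entra sign-in (for example, through the Azure CLI):

```python
from azure.identity import DefaultAzureCredential

# Minimal sketch: obtain an Azure Resource Manager token through Microsoft
# Entra ID. DefaultAzureCredential reuses an existing sign-in (environment
# variables, managed identity, the Azure CLI, and so on).
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")
print(token.expires_on)
```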
-[![Authentication in Azure Machine Learning](media/concept-enterprise-security/authentication.png)](media/concept-enterprise-security/authentication.png#lightbox)
+[![Diagram that illustrates authentication in Azure Machine Learning.](media/concept-enterprise-security/authentication.png)](media/concept-enterprise-security/authentication.png#lightbox)
-Each workspace has an associated system-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) that has the same name as the workspace. This managed identity is used to securely access resources used by the workspace. It has the following Azure RBAC permissions on associated resources:
+Each workspace has an associated system-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) that has the same name as the workspace. This managed identity is used to securely access resources that the workspace uses. It has the following Azure role-based access control (RBAC) permissions on associated resources:
| Resource | Permissions |
| -- | -- |
| Workspace | Contributor |
| Storage account | Storage Blob Data Contributor |
| Key vault | Access to all keys, secrets, certificates |
-| Azure Container Registry | Contributor |
+| Container registry | Contributor |
| Resource group that contains the workspace | Contributor |
-The system-assigned managed identity is used for internal service-to-service authentication between Azure Machine Learning and other Azure resources. The identity token isn't accessible to users and they can't use it to gain access to these resources. Users can only access the resources through [Azure Machine Learning control and data plane APIs](how-to-assign-roles.md), if they have sufficient RBAC permissions.
+The system-assigned managed identity is used for internal service-to-service authentication between Azure Machine Learning and other Azure resources. Users can't access the identity token, and they can't use it to gain access to these resources. Users can access the resources only through [Azure Machine Learning control and data plane APIs](how-to-assign-roles.md), if they have sufficient RBAC permissions.
We don't recommend that admins revoke the access of the managed identity to the resources mentioned in the preceding table. You can restore access by using the [resync keys operation](how-to-change-storage-access-key.md).

> [!NOTE]
-> If your Azure Machine Learning workspaces has compute targets (compute cluster, compute instance, Azure Kubernetes Service, etc.) that were created __before May 14th, 2021__, you may also have an additional Microsoft Entra account. The account name starts with `Microsoft-AzureML-Support-App-` and has contributor-level access to your subscription for every workspace region.
->
-> If your workspace does not have an Azure Kubernetes Service (AKS) attached, you can safely delete this Microsoft Entra account.
->
-> If your workspace has attached AKS clusters, _and they were created before May 14th, 2021_, __do not delete this Microsoft Entra account__. In this scenario, you must first delete and recreate the AKS cluster before you can delete the Microsoft Entra account.
+> If your Azure Machine Learning workspace has compute targets (for example, compute cluster, compute instance, or Azure Kubernetes Service [AKS] instance) that were created _before May 14, 2021_, you might have an additional Microsoft Entra account. The account name starts with `Microsoft-AzureML-Support-App-` and has contributor-level access to your subscription for every workspace region.
+>
+> If your workspace doesn't have an AKS instance attached, you can safely delete this Microsoft Entra account.
+>
+> If your workspace has an attached AKS cluster, and it was created before May 14, 2021, _do not delete this Microsoft Entra account_. In this scenario, you must delete and re-create the AKS cluster before you can delete the Microsoft Entra account.
-You can provision the workspace to use user-assigned managed identity, and grant the managed identity additional roles, for example to access your own Azure Container Registry for base Docker images. You can also configure managed identities for use with Azure Machine Learning compute cluster. This managed identity is independent of workspace managed identity. With a compute cluster, the managed identity is used to access resources such as secured datastores that the user running the training job may not have access to. For more information, see [Use managed identities for access control](how-to-identity-based-service-authentication.md).
+You can provision the workspace to use a user-assigned managed identity, and then grant the managed identity additional roles. For example, you might grant a role to access your own Azure Container Registry instance for base Docker images.
+
+You can also configure managed identities for use with an Azure Machine Learning compute cluster. This managed identity is independent of the workspace managed identity. With a compute cluster, the managed identity is used to access resources such as secured datastores that the user running the training job might not have access to. For more information, see [Use managed identities for access control](how-to-identity-based-service-authentication.md).
> [!TIP]
-> There are some exceptions to the use of Microsoft Entra ID and Azure RBAC within Azure Machine Learning:
-> * You can optionally enable __SSH__ access to compute resources such as Azure Machine Learning compute instance and compute cluster. SSH access is based on public/private key pairs, not Microsoft Entra ID. SSH access is not governed by Azure RBAC.
-> * You can authenticate to models deployed as online endpoints using __key__ or __token__-based authentication. Keys are static strings, while tokens are retrieved using a Microsoft Entra security object. For more information, see [How to authenticate online endpoints](how-to-authenticate-online-endpoint.md).
+> There are exceptions to the use of Microsoft Entra ID and Azure RBAC in Azure Machine Learning:
+> * You can optionally enable Secure Shell (SSH) access to compute resources such as an Azure Machine Learning compute instance and a compute cluster. SSH access is based on public/private key pairs, not Microsoft Entra ID. Azure RBAC doesn't govern SSH access.
+> * You can authenticate to models deployed as online endpoints by using key-based or token-based authentication. Keys are static strings, whereas tokens are retrieved through a Microsoft Entra security object. For more information, see [Authenticate clients for online endpoints](how-to-authenticate-online-endpoint.md).
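For illustration, a key-based call to a managed online endpoint might look like the following sketch. The scoring URI, key, and request payload are placeholders; you can copy the real values from the endpoint's **Consume** tab in the studio:

```python
import requests

# Minimal sketch: call a managed online endpoint with key-based authentication.
# The scoring URI, key, and payload below are placeholders.
scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
key = "<endpoint-key>"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {key}",
}
response = requests.post(scoring_uri, headers=headers, json={"data": [[1, 2, 3, 4]]})
print(response.status_code, response.json())
```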
For more information, see the following articles:
-* [Authentication for Azure Machine Learning workspace](how-to-setup-authentication.md)
-* [Manage access to Azure Machine Learning](how-to-assign-roles.md)
-* [Connect to storage services](how-to-access-data.md)
-* [Use Azure Key Vault for secrets when training](how-to-use-secrets-in-runs.md)
-* [Use Microsoft Entra managed identity with Azure Machine Learning](how-to-identity-based-service-authentication.md)
-## Network security and isolation
+* [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md)
+* [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md)
+* [Use datastores](how-to-access-data.md)
+* [Use authentication credential secrets in Azure Machine Learning jobs](how-to-use-secrets-in-runs.md)
+* [Set up authentication between Azure Machine Learning and other services](how-to-identity-based-service-authentication.md)
+
+## Provide network security and isolation
-To restrict network access to Azure Machine Learning resources, you can use an [Azure Machine Learning managed virtual network](how-to-managed-network.md) or [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md). Using a virtual network reduces the attack surface for your solution, and the chances of data exfiltration.
+To restrict network access to Azure Machine Learning resources, you can use an [Azure Machine Learning managed virtual network](how-to-managed-network.md) or an [Azure Virtual Network instance](../virtual-network/virtual-networks-overview.md). Using a virtual network reduces the attack surface for your solution and the chances of data exfiltration.
-You don't have to pick one or the other. For example, you can use a managed virtual network to secure managed compute resources and an Azure Virtual Network for your unmanaged resources or to secure client access to the workspace.
+You don't have to choose one or the other. For example, you can use an Azure Machine Learning managed virtual network to help secure managed compute resources and an Azure Virtual Network instance for your unmanaged resources or to help secure client access to the workspace.
-* __Azure Machine Learning managed virtual network__ provides a fully managed solution that enables network isolation for your workspace and managed compute resources. You can use private endpoints to secure communication with other Azure services, and can restrict outbound communications. The following managed compute resources are secured with a managed network:
+* __Azure Machine Learning managed virtual network__: Provides a fully managed solution that enables network isolation for your workspace and managed compute resources. You can use private endpoints to help secure communication with other Azure services, and you can restrict outbound communication. Use a managed virtual network to help secure the following managed compute resources:
- * Serverless compute (including Spark serverless)
- * Compute cluster
- * Compute instance
- * Managed online endpoints
- * Batch online endpoints
+ * Serverless compute (including Spark serverless)
+ * Compute cluster
+ * Compute instance
+ * Managed online endpoint
+ * Batch online endpoint
- For more information, see [Azure Machine Learning managed virtual network](how-to-managed-network.md).
+ For more information, see [Workspace managed virtual network isolation](how-to-managed-network.md).
-* __Azure Virtual Networks__ provides a more customizable virtual network offering. However, you're responsible for configuration and management. You may need to use network security groups, user-defined routing, or a firewall to restrict outbound communication.
+* __Azure Virtual Network instance__: Provides a more customizable virtual network offering. However, you're responsible for configuration and management. You might need to use network security groups, user-defined routes, or a firewall to restrict outbound communication.
- For more information, see the following documents:
+ For more information, see the following articles:
- * [Virtual network isolation and privacy overview](how-to-network-security-overview.md)
- * [Secure workspace resources](how-to-secure-workspace-vnet.md)
- * [Secure training environment](how-to-secure-training-vnet.md)
- * [Secure inference environment](./how-to-secure-inferencing-vnet.md)
- * [Use studio in a secured virtual network](how-to-enable-studio-virtual-network.md)
- * [Use custom DNS](how-to-custom-dns.md)
- * [Configure firewall](how-to-access-azureml-behind-firewall.md)
+ * [Secure Azure Machine Learning workspace resources using virtual networks](how-to-network-security-overview.md)
+ * [Secure an Azure Machine Learning workspace with virtual networks](how-to-secure-workspace-vnet.md)
+ * [Secure an Azure Machine Learning training environment with virtual networks](how-to-secure-training-vnet.md)
+ * [Secure an Azure Machine Learning inferencing environment with virtual networks](./how-to-secure-inferencing-vnet.md)
+ * [Use Azure Machine Learning studio in an Azure virtual network](how-to-enable-studio-virtual-network.md)
+ * [Use your workspace with a custom DNS server](how-to-custom-dns.md)
+ * [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md)
<a id="encryption-at-rest"></a><a id="azure-blob-storage"></a>
-## Data encryption
+## Encrypt data
-Azure Machine Learning uses various compute resources and data stores on the Azure platform. To learn more about how each of these resources supports data encryption at rest and in transit, see [Data encryption with Azure Machine Learning](concept-data-encryption.md).
+Azure Machine Learning uses various compute resources and datastores on the Azure platform. To learn more about how each of these resources supports data encryption at rest and in transit, see [Data encryption with Azure Machine Learning](concept-data-encryption.md).
-## Data exfiltration prevention
+## Prevent data exfiltration
-Azure Machine Learning has several inbound and outbound network dependencies. Some of these dependencies can expose a data exfiltration risk by malicious agents within your organization. These risks are associated with the outbound requirements to Azure Storage, Azure Front Door, and Azure Monitor. For recommendations on mitigating this risk, see the [Azure Machine Learning data exfiltration prevention](how-to-prevent-data-loss-exfiltration.md) article.
+Azure Machine Learning has several inbound and outbound network dependencies. Some of these dependencies can expose a data exfiltration risk by malicious agents within your organization. These risks are associated with the outbound requirements to Azure Storage, Azure Front Door, and Azure Monitor. For recommendations on mitigating this risk, see [Azure Machine Learning data exfiltration prevention](how-to-prevent-data-loss-exfiltration.md).
-## Vulnerability scanning
+## Scan for vulnerabilities
-[Microsoft Defender for Cloud](../security-center/security-center-introduction.md) provides unified security management and advanced threat protection across hybrid cloud workloads. For Azure Machine Learning, you should enable scanning of your [Azure Container Registry](../container-registry/container-registry-intro.md) resource and Azure Kubernetes Service resources. For more information, see [Azure Container Registry image scanning by Defender for Cloud](../security-center/defender-for-container-registries-introduction.md) and [Azure Kubernetes Services integration with Defender for Cloud](../security-center/defender-for-kubernetes-introduction.md).
+[Microsoft Defender for Cloud](../security-center/security-center-introduction.md) provides unified security management and advanced threat protection across hybrid cloud workloads. For Azure Machine Learning, you should enable scanning of your [Azure Container Registry](../container-registry/container-registry-intro.md) resource and AKS resources. For more information, see [Introduction to Microsoft Defender for container registries](../security-center/defender-for-container-registries-introduction.md) and [Introduction to Microsoft Defender for Kubernetes](../security-center/defender-for-kubernetes-introduction.md).
## Audit and manage compliance
-[Azure Policy](../governance/policy/index.yml) is a governance tool that allows you to ensure that Azure resources are compliant with your policies. You can set policies to allow or enforce specific configurations, such as whether your Azure Machine Learning workspace uses a private endpoint. For more information on Azure Policy, see the [Azure Policy documentation](../governance/policy/overview.md). For more information on the policies specific to Azure Machine Learning, see [Audit and manage compliance with Azure Policy](how-to-integrate-azure-policy.md).
+[Azure Policy](../governance/policy/index.yml) is a governance tool that helps you ensure that Azure resources comply with your policies. You can set policies to allow or enforce specific configurations, such as whether your Azure Machine Learning workspace uses a private endpoint.
+
+For more information on Azure Policy, see the [Azure Policy documentation](../governance/policy/overview.md). For more information on the policies that are specific to Azure Machine Learning, see [Audit and manage Azure Machine Learning](how-to-integrate-azure-policy.md).
## Next steps

* [Azure Machine Learning best practices for enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
* [Use Azure Machine Learning with Azure Firewall](how-to-access-azureml-behind-firewall.md)
* [Use Azure Machine Learning with Azure Virtual Network](how-to-network-security-overview.md)
-* [Data encryption at rest and in transit](concept-data-encryption.md)
+* [Encrypt data at rest and in transit](concept-data-encryption.md)
* [Build a real-time recommendation API on Azure](/azure/architecture/reference-architectures/ai/real-time-recommendation)
machine-learning Concept Prebuilt Docker Images Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-prebuilt-docker-images-inference.md
Prebuilt Docker container images for inference are used when deploying a model w
Framework version | CPU/GPU | Pre-installed packages | MCR Path
--- | --- | --- | ---
-NA | CPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu18.04-py37-cpu-inference:latest`
-NA | GPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu18.04-py37-cuda11.0.3-gpu-inference:latest`
NA | CPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu20.04-py38-cpu-inference:latest`
NA | GPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu20.04-py38-cuda11.6.2-gpu-inference:latest`
+NA | CPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu22.04-py39-cpu-inference:latest`
+NA | GPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu22.04-py39-cuda11.8-gpu-inference:latest`
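For illustration, one of these images can be referenced from an Azure Machine Learning environment. The following is a minimal sketch that uses the Python SDK v2; the environment name and conda file path are placeholders:

```python
from azure.ai.ml.entities import Environment

# Minimal sketch (SDK v2): reference a prebuilt inference image and layer your
# scoring dependencies on top with a conda file. The name and conda file path
# are placeholders.
env = Environment(
    name="prebuilt-inference-env",
    image="mcr.microsoft.com/azureml/minimal-ubuntu22.04-py39-cpu-inference:latest",
    conda_file="environment/conda.yaml",
)
```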
## How to use inference prebuilt docker images?
NA | GPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu20.04-py38-cuda11.6.2-g
* [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)
* [Learn more about custom containers](how-to-deploy-custom-container.md)
-* [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online)
+* [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online)
machine-learning Concept Secure Network Traffic Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md
monikerRange: 'azureml-api-2 || azureml-api-1'
# Network traffic flow when using a secured workspace
-When your Azure Machine Learning workspace and associated resources are secured in an Azure Virtual Network, it changes the network traffic between resources. Without a virtual network, network traffic flows over the public internet or within an Azure data center. Once a virtual network (VNet) is introduced, you may also want to harden network security. For example, blocking inbound and outbound communications between the VNet and public internet. However, Azure Machine Learning requires access to some resources on the public internet. For example, Azure Resource Management is used for deployments and management operations.
+When you put your Azure Machine Learning workspace and associated resources in an Azure virtual network, it changes the network traffic between resources. Without a virtual network, network traffic flows over the public internet or within an Azure datacenter. After you introduce a virtual network, you might also want to harden network security. For example, you might want to block inbound and outbound communications between the virtual network and the public internet. However, Azure Machine Learning requires access to some resources on the public internet. For example, it uses Azure Resource Manager for deployments and management operations.
-This article lists the required traffic to/from the public internet. It also explains how network traffic flows between your client development environment and a secured Azure Machine Learning workspace in the following scenarios:
+This article lists the required traffic to and from the public internet. It also explains how network traffic flows between your client development environment and a secured Azure Machine Learning workspace in the following scenarios:
-* Using Azure Machine Learning __studio__ to work with:
+* Using Azure Machine Learning studio to work with:
- * Your workspace
- * AutoML
- * Designer
- * Datasets and datastores
+ * Your workspace
+ * AutoML
+ * Designer
+ * Datasets and datastores
- > [!TIP]
- > Azure Machine Learning studio is a web-based UI that runs partially in your web browser, and makes calls to Azure services to perform tasks such as training a model, using designer, or viewing datasets. Some of these calls use a different communication flow than if you are using the SDK, CLI, REST API, or VS Code.
+ Azure Machine Learning studio is a web-based UI that runs partially in your web browser. It makes calls to Azure services to perform tasks such as training a model, using the designer, or viewing datasets. Some of these calls use a different communication flow than if you're using the Azure Machine Learning SDK, the Azure CLI, the REST API, or Visual Studio Code.
-* Using Azure Machine Learning __studio__, __SDK__, __CLI__, or __REST API__ to work with:
+* Using Azure Machine Learning studio, the Azure Machine Learning SDK, the Azure CLI, or the REST API to work with:
- * Compute instances and clusters
- * Azure Kubernetes Service
- * Docker images managed by Azure Machine Learning
+ * Compute instances and clusters
+ * Azure Kubernetes Service (AKS)
+ * Docker images that Azure Machine Learning manages
-> [!TIP]
-> If a scenario or task is not listed here, it should work the same with or without a secured workspace.
+If a scenario or task isn't listed here, it should work the same with or without a secured workspace.
## Assumptions

This article assumes the following configuration:
-* Azure Machine Learning workspace using a private endpoint to communicate with the VNet.
-* The Azure Storage Account, Key Vault, and Container Registry used by the workspace also use a private endpoint to communicate with the VNet.
-* A VPN gateway or Express Route is used by the client workstations to access the VNet.
+* The Azure Machine Learning workspace uses a private endpoint to communicate with the virtual network.
+* The Azure storage account, key vault, and container registry that the workspace uses also use a private endpoint to communicate with the virtual network.
+* Client workstations use a VPN gateway or Azure ExpressRoute to access the virtual network.
## Inbound and outbound requirements
-| __Scenario__ | __Required inbound__ | __Required outbound__ | __Additional configuration__ |
+| Scenario | Required inbound | Required outbound | Additional configuration |
| -- | -- | -- | -- |
-| [Access workspace from studio](#scenario-access-workspace-from-studio) | NA | <ul><li>Microsoft Entra ID</li><li>Azure Front Door</li><li>Azure Machine Learning service</li></ul> | You may need to use a custom DNS server. For more information, see [Use your workspace with a custom DNS](how-to-custom-dns.md). |
-| [Use AutoML, designer, dataset, and datastore from studio](#scenario-use-automl-designer-dataset-and-datastore-from-studio) | NA | NA | <ul><li>Workspace service principal configuration</li><li>Allow access from trusted Azure services</li></ul>For more information, see [How to secure a workspace in a virtual network](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). |
-| [Use compute instance and compute cluster](#scenario-use-compute-instance-and-compute-cluster) | <ul><li>Azure Machine Learning service on port 44224</li><li>Azure Batch Management service on ports 29876-29877</li></ul> | <ul><li>Microsoft Entra ID</li><li>Azure Resource Manager</li><li>Azure Machine Learning service</li><li>Azure Storage Account</li><li>Azure Key Vault</li></ul> | If you use a firewall, create user-defined routes. For more information, see [Configure inbound and outbound traffic](how-to-access-azureml-behind-firewall.md). |
-| [Use Azure Kubernetes Service](#scenario-use-azure-kubernetes-service) | NA | For information on the outbound configuration for AKS, see [How to secure Kubernetes inference](how-to-secure-kubernetes-inferencing-environment.md). | |
-| [Use Docker images managed by Azure Machine Learning](#scenario-use-docker-images-managed-by-azure-machine-learning) | NA | <ul><li>Microsoft Container Registry</li><li>`viennaglobal.azurecr.io` global container registry</li></ul> | If the Azure Container Registry for your workspace is behind the VNet, configure the workspace to use a compute cluster to build images. For more information, see [How to secure a workspace in a virtual network](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr). |
-
-> [!IMPORTANT]
-> Azure Machine Learning uses multiple storage accounts. Each stores different data, and has a different purpose:
->
-> * __Your storage__: The Azure Storage Account(s) in your Azure subscription are used to store your data and artifacts such as models, training data, training logs, and Python scripts. For example, the _default_ storage account for your workspace is in your subscription. The Azure Machine Learning compute instance and compute clusters access __file__ and __blob__ data in this storage over ports 445 (SMB) and 443 (HTTPS).
->
-> When using a __compute instance__ or __compute cluster__, your storage account is mounted as a __file share__ using the SMB protocol. The compute instance and cluster use this file share to store the data, models, Jupyter notebooks, datasets, etc. The compute instance and cluster use the private endpoint when accessing the storage account.
->
-> * __Microsoft storage__: The Azure Machine Learning compute instance and compute clusters rely on Azure Batch, and access storage located in a Microsoft subscription. This storage is used only for the management of the compute instance/cluster. None of your data is stored here. The compute instance and compute cluster access the __blob__, __table__, and __queue__ data in this storage, using port 443 (HTTPS).
->
-> Machine Learning also stores metadata in an Azure Cosmos DB instance. By default, this instance is hosted in a Microsoft subscription and managed by Microsoft. You can optionally use an Azure Cosmos DB instance in your Azure subscription. For more information, see [Data encryption with Azure Machine Learning](concept-data-encryption.md#azure-cosmos-db).
-
-## Scenario: Access workspace from studio
+| [Access a workspace from the studio](#scenario-access-a-workspace-from-the-studio) | Not applicable | <ul><li>Microsoft Entra ID</li><li>Azure Front Door</li><li>Azure Machine Learning</li></ul> | You might need to use a custom DNS server. For more information, see [Use your workspace with a custom DNS server](how-to-custom-dns.md). |
+| [Use AutoML, the designer, the dataset, and the datastore from the studio](#scenario-use-automl-the-designer-the-dataset-and-the-datastore-from-the-studio) | Not applicable | Not applicable | <ul><li>Configure the workspace service principal</li><li>Allow access from trusted Azure services</li></ul>For more information, see [Secure an Azure Machine Learning workspace with virtual networks](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). |
+| [Use a compute instance and a compute cluster](#scenario-use-a-compute-instance-and-a-compute-cluster) | <ul><li>Azure Machine Learning on port 44224</li><li>Azure Batch on ports 29876-29877</li></ul> | <ul><li>Microsoft Entra ID</li><li>Azure Resource Manager</li><li>Azure Machine Learning</li><li>Azure Storage</li><li>Azure Key Vault</li></ul> | If you use a firewall, create user-defined routes. For more information, see [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md). |
+| [Use Azure Kubernetes Service](#scenario-use-azure-kubernetes-service) | Not applicable | For information on the outbound configuration for AKS, see [Secure Azure Kubernetes Service inferencing environment](how-to-secure-kubernetes-inferencing-environment.md). | |
+| [Use Docker images that Azure Machine Learning manages](#scenario-use-docker-images-that-azure-machine-learning-manages) | Not applicable | <ul><li>Microsoft Artifact Registry</li><li>`viennaglobal.azurecr.io` global container registry</li></ul> | If the container registry for your workspace is behind the virtual network, configure the workspace to use a compute cluster to build images. For more information, see [Secure an Azure Machine Learning workspace with virtual networks](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr). |
+
+## Purposes of storage accounts
+
+Azure Machine Learning uses multiple storage accounts. Each stores different data and has a different purpose:
+
+* __Your storage__: The storage accounts in your Azure subscription store your data and artifacts, such as models, training data, training logs, and Python scripts. For example, the _default_ storage account for your workspace is in your subscription. The Azure Machine Learning compute instance and compute cluster access file and blob data in this storage over ports 445 (SMB) and 443 (HTTPS).
+
+ When you're using a compute instance or compute cluster, your storage account is mounted as a file share via the SMB protocol. The compute instance and cluster use this file share to store items like the data, models, Jupyter notebooks, and datasets. The compute instance and cluster use the private endpoint when they access the storage account.
+
+* __Microsoft storage__: The Azure Machine Learning compute instance and compute cluster rely on Azure Batch. They access storage located in a Microsoft subscription. This storage is used only for the management of the compute instance or cluster. None of your data is stored here. The compute instance and compute cluster access the blob, table, and queue data in this storage, by using port 443 (HTTPS).
+
+Machine Learning also stores metadata in an Azure Cosmos DB instance. By default, this instance is hosted in a Microsoft subscription, and Microsoft manages it. You can optionally use an Azure Cosmos DB instance in your Azure subscription. For more information, see [Data encryption with Azure Machine Learning](concept-data-encryption.md#azure-cosmos-db).
+
+## Scenario: Access a workspace from the studio
> [!NOTE]
-> The information in this section is specific to using the workspace from the Azure Machine Learning studio. If you use the Azure Machine Learning SDK, REST API, CLI, or Visual Studio Code, the information in this section does not apply to you.
+> The information in this section is specific to using the workspace from Azure Machine Learning studio. If you use the Azure Machine Learning SDK, the REST API, the Azure CLI, or Visual Studio Code, the information in this section doesn't apply to you.
-When accessing your workspace from studio, the network traffic flows are as follows:
+When you access your workspace from the studio, the network traffic flows are as follows:
-* To authenticate to resources, __Azure Active Directory__ is used.
-* For management and deployment operations, __Azure Resource Manager__ is used.
-* For Azure Machine Learning specific tasks, __Azure Machine Learning service__ is used
-* For access to Azure Machine Learning studio (https://ml.azure.com), __Azure FrontDoor__ is used.
-* For most storage operations, traffic flows through the private endpoint of the default storage for your workspace. Exceptions are discussed in the [Use AutoML, designer, dataset, and datastore](#scenario-use-automl-designer-dataset-and-datastore-from-studio) section.
-* You also need to configure a DNS solution that allows you to resolve the names of the resources within the VNet. For more information, see [Use your workspace with a custom DNS](how-to-custom-dns.md).
+* To authenticate to resources, the configuration uses Microsoft Entra ID.
+* For management and deployment operations, the configuration uses Azure Resource Manager.
+* For tasks that are specific to Azure Machine Learning, the configuration uses the Azure Machine Learning service.
+* For access to [Azure Machine Learning studio](https://ml.azure.com), the configuration uses Azure Front Door.
+* For most storage operations, traffic flows through the private endpoint of the default storage for your workspace. The [Use AutoML, the designer, the dataset, and the datastore from the studio](#scenario-use-automl-the-designer-the-dataset-and-the-datastore-from-the-studio) section of this article discusses exceptions.
+* You also need to configure a DNS solution that allows you to resolve the names of the resources within the virtual network. For more information, see [Use your workspace with a custom DNS server](how-to-custom-dns.md).
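If studio access fails in this configuration, DNS resolution is a common culprit. The following minimal check, run from a machine inside the virtual network, verifies that the workspace's private endpoint FQDN resolves to a private IP address. The FQDN shown is a placeholder; the `<workspace-guid>.workspace.<region>.api.azureml.ms` pattern and the region are assumptions you should replace with your own values.

```python
import socket

# Placeholder FQDN: replace with your workspace's private endpoint FQDN,
# which typically follows the pattern <workspace-guid>.workspace.<region>.api.azureml.ms.
fqdn = "00000000-0000-0000-0000-000000000000.workspace.eastus.api.azureml.ms"

try:
    ip_address = socket.gethostbyname(fqdn)
    # From inside the virtual network, this should print a private IP address.
    print(f"{fqdn} resolves to {ip_address}")
except socket.gaierror as error:
    print(f"Resolution failed: {error}. Check your custom DNS configuration.")
```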
-## Scenario: Use AutoML, designer, dataset, and datastore from studio
+## Scenario: Use AutoML, the designer, the dataset, and the datastore from the studio
The following features of Azure Machine Learning studio use _data profiling_:
-* Dataset: Explore the dataset from studio.
+* Dataset: Explore the dataset from the studio.
* Designer: Visualize module output data.
-* AutoML: View a data preview/profile and choose a target column.
-* Labeling
+* AutoML: View a data preview or profile and choose a target column.
+* Labeling: Use labels to prepare data for a machine learning project.
-Data profiling depends on the Azure Machine Learning managed service being able to access the default Azure Storage Account for your workspace. The managed service _doesn't exist in your VNet_, so can't directly access the storage account in the VNet. Instead, the workspace uses a service principal to access storage.
+Data profiling depends on the ability of the Azure Machine Learning managed service to access the default Azure storage account for your workspace. The managed service _doesn't exist in your virtual network_, so it can't directly access the storage account in the virtual network. Instead, the workspace uses a service principal to access storage.
> [!TIP]
-> You can provide a service principal when creating the workspace. If you do not, one is created for you and will have the same name as your workspace.
+> You can provide a service principal when you're creating the workspace. If you don't, one is created for you and has the same name as your workspace.
-To allow access to the storage account, configure the storage account to allow a __resource instance__ for your workspace or select the __Allow Azure services on the trusted services list to access this storage account__. This setting allows the managed service to access storage through the Azure data center network.
+To allow access to the storage account, configure the storage account to allow a resource instance for your workspace or select __Allow Azure services on the trusted services list to access this storage account__. This setting allows the managed service to access storage through the Azure datacenter network.
-Next, add the service principal for the workspace to the __Reader__ role to the private endpoint of the storage account. This role is used to verify the workspace and storage subnet information. If they're the same, access is allowed. Finally, the service principal also requires __Blob data contributor__ access to the storage account.
+Next, assign the workspace's service principal the __Reader__ role for the private endpoint of the storage account. Azure uses this role to verify the workspace and storage subnet information. If they're the same, Azure allows access. Finally, the service principal also requires __Blob data contributor__ access to the storage account.
-For more information, see the Azure Storage Account section of [How to secure a workspace in a virtual network](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
+For more information, see the "Secure Azure storage accounts" section of [Secure an Azure Machine Learning workspace with virtual networks](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
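As a rough sketch of the role assignment described above, the following code uses the `azure-mgmt-authorization` package to grant the workspace's service principal the Reader role on the storage account's private endpoint. The subscription ID, resource IDs, and service principal object ID are placeholders, and the built-in Reader role GUID is an assumption you should confirm for your tenant; an assignment of the appropriate blob data role on the storage account itself would follow the same pattern.

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"            # placeholder
workspace_sp_object_id = "<sp-object-id>"        # object ID of the workspace service principal (placeholder)
private_endpoint_id = (                          # resource ID of the storage private endpoint (placeholder)
    f"/subscriptions/{subscription_id}/resourceGroups/<rg>/providers/"
    "Microsoft.Network/privateEndpoints/<storage-private-endpoint>"
)
# GUID of the built-in Reader role; confirm this value before relying on it.
reader_role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7"
)

auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
auth_client.role_assignments.create(
    scope=private_endpoint_id,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=reader_role_definition_id,
        principal_id=workspace_sp_object_id,
        principal_type="ServicePrincipal",
    ),
)
```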
-## Scenario: Use compute instance and compute cluster
+## Scenario: Use a compute instance and a compute cluster
-Azure Machine Learning compute instance and compute cluster are managed services hosted by Microsoft. They're built on top of the Azure Batch service. While they exist in a Microsoft managed environment, they're also injected into your VNet.
+An Azure Machine Learning compute instance and compute cluster are managed services that Microsoft hosts. They're built on top of the Azure Batch service. Although they exist in a Microsoft-managed environment, they're also injected into your virtual network.
-When you create a compute instance or compute cluster, the following resources are also created in your VNet:
+When you create a compute instance or compute cluster, the following resources are also created in your virtual network:
-* A Network Security Group with required outbound rules. These rules allow __inbound__ access from the Azure Machine Learning (TCP on port 44224) and Azure Batch service (TCP on ports 29876-29877).
+* A network security group with required outbound rules. These rules allow _inbound_ access from Azure Machine Learning (TCP on port 44224) and Azure Batch (TCP on ports 29876-29877).
- > [!IMPORTANT]
- > If you use a firewall to block internet access into the VNet, you must configure the firewall to allow this traffic. For example, with Azure Firewall you can create user-defined routes. For more information, see [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).
+ > [!IMPORTANT]
+ > If you use a firewall to block internet access into the virtual network, you must configure the firewall to allow this traffic. For example, with Azure Firewall, you can create user-defined routes. For more information, see [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).
-* A load balancer with a public IP.
+* A load balancer with a public IP address.
-Also allow __outbound__ access to the following service tags. For each tag, replace `region` with the Azure region of your compute instance/cluster:
+Also allow _outbound_ access to the following service tags. For each tag, replace `region` with the Azure region of your compute instance or cluster:
-* `Storage.region` - This outbound access is used to connect to the Azure Storage Account inside the Azure Batch service-managed VNet.
-* `Keyvault.region` - This outbound access is used to connect to the Azure Key Vault account inside the Azure Batch service-managed VNet.
+* `Storage.region`: This outbound access is used to connect to the Azure storage account inside the Azure Batch managed virtual network.
+* `Keyvault.region`: This outbound access is used to connect to the Azure Key Vault account inside the Azure Batch managed virtual network.
-Data access from your compute instance or cluster goes through the private endpoint of the Storage Account for your VNet.
+Data access from your compute instance or cluster goes through the private endpoint of the storage account for your virtual network.
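If you manage the outbound rules for these service tags yourself in a network security group (rather than in a firewall), a sketch like the following uses the `azure-mgmt-network` package to add one such rule. The resource group, NSG name, rule priority, and the `eastus` region suffix are illustrative assumptions; a matching rule for `Keyvault.<region>` follows the same pattern.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

subscription_id = "<subscription-id>"  # placeholder

network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Allow outbound HTTPS traffic to Azure Storage in the compute's region.
storage_rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="VirtualNetwork",
    source_port_range="*",
    destination_address_prefix="Storage.eastus",  # replace eastus with your region
    destination_port_range="443",
    access="Allow",
    priority=400,       # illustrative priority
    direction="Outbound",
)

network_client.security_rules.begin_create_or_update(
    resource_group_name="<rg>",                 # placeholder
    network_security_group_name="<nsg-name>",   # placeholder
    security_rule_name="AllowStorageOutbound",
    security_rule_parameters=storage_rule,
).result()
```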
If you use Visual Studio Code on a compute instance, you must allow other outbound traffic. For more information, see [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).

:::moniker range="azureml-api-2"

## Scenario: Use online endpoints
-Security for inbound and outbound communication are configured separately for managed online endpoints.
+You configure security for inbound and outbound communication separately for managed online endpoints.
-#### Inbound communication
+### Inbound communication
-__Inbound__ communication with the scoring URL of the online endpoint can be secured using the `public_network_access` flag on the endpoint. Setting the flag to `disabled` ensures that the online endpoint receives traffic only from a client's virtual network through the Azure Machine Learning workspace's private endpoint.
+You can help secure inbound communication with the scoring URL of the online endpoint by using the `public_network_access` flag on the endpoint. Setting the flag to `disabled` ensures that the online endpoint receives traffic only from a client's virtual network through the Azure Machine Learning workspace's private endpoint.
-The `public_network_access` flag of the Azure Machine Learning workspace also governs the visibility of the online endpoint. If this flag is `disabled`, then the scoring endpoints can only be accessed from virtual networks that contain a private endpoint for the workspace. If it is `enabled`, then the scoring endpoint can be accessed from the virtual network and public networks.
+The `public_network_access` flag of the Azure Machine Learning workspace also governs the visibility of the online endpoint. If this flag is `disabled`, the scoring endpoints can be accessed only from virtual networks that contain a private endpoint for the workspace. If this flag is `enabled`, the scoring endpoint can be accessed from the virtual network and public networks.
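As an illustration of the inbound flag, the following sketch uses the Azure Machine Learning Python SDK v2 to create a managed online endpoint whose scoring URL isn't reachable from public networks. The subscription, resource group, workspace, and endpoint names are placeholders.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# public_network_access="disabled" blocks inbound scoring traffic from public networks;
# the endpoint is reachable only through the workspace's private endpoint.
endpoint = ManagedOnlineEndpoint(
    name="my-private-endpoint",   # placeholder name
    auth_mode="key",
    public_network_access="disabled",
)

ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```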
-#### Outbound communication
+### Outbound communication
-__Outbound__ communication from a deployment can be secured at the workspace level by enabling managed virtual network isolation for your Azure Machine Learning workspace. Enabling this setting causes Azure Machine Learning to create a managed virtual network for the workspace. Any deployments in the workspace's managed virtual network can use the virtual network's private endpoints for outbound communication.
+You can help secure outbound communication from a deployment at the workspace level by using managed virtual network isolation for your Azure Machine Learning workspace. Using this setting causes Azure Machine Learning to create a managed virtual network for the workspace. Any deployments in the workspace's managed virtual network can use the virtual network's private endpoints for outbound communication.
-The [legacy network isolation method for securing outbound communication](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method) worked by disabling a deployment's `egress_public_network_access` flag. We strongly recommend that you secure outbound communication for deployments by using a [workspace managed virtual network](concept-secure-online-endpoint.md) instead. Unlike the legacy approach, the `egress_public_network_access` flag for the deployment no longer applies when you use a workspace managed virtual network with your deployment. Instead, outbound communication will be controlled by the rules set for the workspace's managed virtual network.
+The [legacy network isolation method for securing outbound communication](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method) worked by disabling a deployment's `egress_public_network_access` flag. We strongly recommend that you help secure outbound communication for deployments by using a [workspace managed virtual network](concept-secure-online-endpoint.md) instead. Unlike the legacy approach, the `egress_public_network_access` flag for the deployment no longer applies when you use a workspace managed virtual network with your deployment. Instead, the rules that you set for the workspace's managed virtual network control outbound communication.
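A minimal sketch of enabling a workspace managed virtual network with the Python SDK v2 follows. The workspace name, location, and the choice of isolation mode are illustrative assumptions; pick the isolation mode that matches your outbound requirements.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.constants import IsolationMode
from azure.ai.ml.entities import ManagedNetwork, Workspace
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>"
)

# Deployments in this workspace use the managed virtual network's private endpoints
# for outbound communication; only approved outbound destinations are allowed.
workspace = Workspace(
    name="<workspace-name>",   # placeholder
    location="eastus",         # illustrative region
    managed_network=ManagedNetwork(isolation_mode=IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND),
)

ml_client.workspaces.begin_create(workspace).result()
```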
:::moniker-end

## Scenario: Use Azure Kubernetes Service
-For information on the outbound configuration required for Azure Kubernetes Service, see the connectivity requirements section of [How to secure inference](how-to-secure-inferencing-vnet.md).
+For information on the required outbound configuration for Azure Kubernetes Service, see [Secure an Azure Machine Learning inferencing environment with virtual networks](how-to-secure-inferencing-vnet.md).
> [!NOTE]
-> The Azure Kubernetes Service load balancer is not the same as the load balancer created by Azure Machine Learning. If you want to host your model as a secured application, only available on the VNet, use the internal load balancer created by Azure Machine Learning. If you want to allow public access, use the public load balancer created by Azure Machine Learning.
+> The Azure Kubernetes Service load balancer is not the same as the load balancer that Azure Machine Learning creates. If you want to host your model as a secured application that's available only on the virtual network, use the internal load balancer that Azure Machine Learning creates. If you want to allow public access, use the public load balancer that Azure Machine Learning creates.
If your model requires extra inbound or outbound connectivity, such as to an external data source, use a network security group or your firewall to allow the traffic.
-## Scenario: Use Docker images managed by Azure Machine Learning
+## Scenario: Use Docker images that Azure Machine Learning manages
-Azure Machine Learning provides Docker images that can be used to train models or perform inference. If you don't specify your own images, the ones provided by Azure Machine Learning are used. These images are hosted on the Microsoft Container Registry (MCR). They're also hosted on a geo-replicated Azure Container Registry named `viennaglobal.azurecr.io`.
+Azure Machine Learning provides Docker images that you can use to train models or perform inference. These images are hosted on Microsoft Artifact Registry. They're also hosted on a geo-replicated Azure Container Registry instance named `viennaglobal.azurecr.io`.
-If you provide your own docker images, such as on an Azure Container Registry that you provide, you don't need the outbound communication with MCR or `viennaglobal.azurecr.io`.
+If you provide your own Docker images, such as on a container registry that you provide, you don't need the outbound communication with Artifact Registry or `viennaglobal.azurecr.io`.
> [!TIP]
-> If your Azure Container Registry is secured in the VNet, it cannot be used by Azure Machine Learning to build Docker images. Instead, you must designate an Azure Machine Learning compute cluster to build images. For more information, see [How to secure a workspace in a virtual network](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
+> If your container registry is secured in the virtual network, Azure Machine Learning can't use it to build Docker images. Instead, you must designate an Azure Machine Learning compute cluster to build images. For more information, see [Secure an Azure Machine Learning workspace with virtual networks](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
+ ## Next steps
-Now that you've learned how network traffic flows in a secured configuration, learn more about securing Azure Machine Learning in a virtual network by reading the [Virtual network isolation and privacy overview](how-to-network-security-overview.md) article.
+Now that you've learned how network traffic flows in a secured configuration, learn more about securing Azure Machine Learning in a virtual network by reading the [overview article about virtual network isolation and privacy](how-to-network-security-overview.md).
For information on best practices, see the [Azure Machine Learning best practices for enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security) article.
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
Title: Vulnerability management
-description: Learn how Azure Machine Learning manages vulnerabilities in images provided by the service, and how you can keep components that are managed by you up to date with the latest security updates.
+description: Learn how Azure Machine Learning manages vulnerabilities in images that the service provides, and how you can get the latest security updates for the components that you manage.
Vulnerability management involves detecting, assessing, mitigating, and reporting on any security vulnerabilities that exist in an organization's systems and software. Vulnerability management is a shared responsibility between you and Microsoft.
-In this article, we discuss these responsibilities and outline the vulnerability management controls provided by Azure Machine Learning. You'll learn how to keep your service instance and applications up to date with the latest security updates, and how to minimize the window of opportunity for attackers.
+This article discusses these responsibilities and outlines the vulnerability management controls that Azure Machine Learning provides. You learn how to keep your service instance and applications up to date with the latest security updates, and how to minimize the window of opportunity for attackers.
-## Microsoft-managed VM images
+## Microsoft-managed VM images
-Azure Machine Learning manages host OS VM images for Azure Machine Learning compute instance, Azure Machine Learning compute clusters, and Data Science Virtual Machines. The update frequency is monthly and includes the following:
+Azure Machine Learning manages host OS virtual machine (VM) images for Azure Machine Learning compute instances, Azure Machine Learning compute clusters, and Data Science Virtual Machines. The update frequency is monthly and includes the following details:
+
+* For each new VM image version, the latest updates are sourced from the original publisher of the OS. Using the latest updates helps ensure that you get all applicable OS-related patches. For Azure Machine Learning, the publisher is Canonical for all the Ubuntu images. These images are used for Azure Machine Learning compute instances, compute clusters, and Data Science Virtual Machines.
-* For each new VM image version, the latest updates are sourced from the original publisher of the OS. Using the latest updates ensures that all OS-related patches that are applicable are picked. For Azure Machine Learning, the publisher is Canonical for all the Ubuntu images. These images are used for Azure Machine Learning compute instances, compute clusters, and Data Science Virtual Machines.
* VM images are updated monthly.
-* In addition to patches applied by the original publisher, Azure Machine Learning updates system packages when updates are available.
-* Azure Machine Learning checks and validates any machine learning packages that may require an upgrade. In most circumstances, new VM images contain the latest package versions.
-* All VM images are built on secure subscriptions that run vulnerability scanning regularly. Any unaddressed vulnerabilities are flagged and are to be fixed within the next release.
-* The frequency is on a monthly interval for most images. For compute instance, the image release is aligned with the Azure Machine Learning SDK release cadence as it comes preinstalled in the environment.
-Next to the regular release cadence, hot fixes are applied in the case vulnerabilities are discovered. Hot fixes get rolled out within 72 hours for Azure Machine Learning compute and within a week for Compute Instance.
+* In addition to patches that the original publisher applies, Azure Machine Learning updates system packages when updates are available.
+
+* Azure Machine Learning checks and validates any machine learning packages that might require an upgrade. In most circumstances, new VM images contain the latest package versions.
+
+* All VM images are built on secure subscriptions that run vulnerability scanning regularly. Azure Machine Learning flags any unaddressed vulnerabilities and fixes them within the next release.
+
+* The frequency is a monthly interval for most images. For compute instances, the image release is aligned with the release cadence of the Azure Machine Learning SDK that's preinstalled in the environment.
+
+In addition to the regular release cadence, Azure Machine Learning applies hotfixes if vulnerabilities surface. Microsoft rolls out hotfixes within 72 hours for Azure Machine Learning compute clusters and within a week for compute instances.
> [!NOTE]
-> The host OS is not the OS version you might specify for an [environment](how-to-use-environments.md) when training or deploying a model. Environments run inside Docker. Docker runs on the host OS.
+> The host OS is not the OS version that you might specify for an [environment](how-to-use-environments.md) when you're training or deploying a model. Environments run inside Docker. Docker runs on the host OS.
## Microsoft-managed container images
-[Base docker images](https://github.com/Azure/AzureML-Containers) maintained by Azure Machine Learning get security patches frequently to address newly discovered vulnerabilities.
+[Base docker images](https://github.com/Azure/AzureML-Containers) that Azure Machine Learning maintains get security patches frequently to address newly discovered vulnerabilities.
-Azure Machine Learning releases updates for supported images every two weeks to address vulnerabilities. As a commitment, we aim to have no vulnerabilities older than 30 days in the latest version of supported images.
+Azure Machine Learning releases updates for supported images every two weeks to address vulnerabilities. As a commitment, we aim to have no vulnerabilities older than 30 days in the latest version of supported images.
-Patched images are released under new immutable tag and also updated `:latest` tag. Using the `:latest` tag or pinning to a particular image version may be a trade-off of security and environment reproducibility for your machine learning job.
+Patched images are released under a new immutable tag and an updated `:latest` tag. Using the `:latest` tag or pinning to a particular image version might be a tradeoff between security and environment reproducibility for your machine learning job.
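For example, with the Python SDK v2 you can define one environment pinned to an immutable tag and another that tracks `:latest`. The image name is one of the Azure Machine Learning base images; the specific immutable tag shown here is illustrative, so check the base-image repository for the tags that actually exist.

```python
from azure.ai.ml.entities import Environment

# Pinning to an immutable tag favors reproducibility over automatic patching.
pinned_env = Environment(
    name="openmpi-ubuntu-pinned",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:20230509.v1",  # illustrative tag
    description="Environment pinned to a specific base image build.",
)

# Tracking :latest favors picking up the most recently patched image.
latest_env = Environment(
    name="openmpi-ubuntu-latest",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    description="Environment that follows the latest patched base image.",
)
```

You can then register either definition with `ml_client.environments.create_or_update(...)` and reference it from your jobs or deployments.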
## Managing environments and container images
-Reproducibility is a key aspect of software development and machine learning experimentation. [Azure Machine Learning Environment](concept-environments.md) component's primary focus is to guarantee reproducibility of the environment where user's code gets executed. To ensure reproducibility for any machine learning job, earlier built images will be pulled to the compute nodes without a need of rematerialization.
+Reproducibility is a key aspect of software development and machine learning experimentation. The [Azure Machine Learning environment](concept-environments.md) component's primary focus is to guarantee reproducibility of the environment where the user's code is executed. To ensure reproducibility for any machine learning job, earlier built images are pulled to the compute nodes without the need for rematerialization.
+
+Although Azure Machine Learning patches base images with each release, whether you use the latest image might be a tradeoff between reproducibility and vulnerability management. It's your responsibility to choose the environment version that you use for your jobs or model deployments.
-While Azure Machine Learning patches base images with each release, whether you use the latest image may be tradeoff between reproducibility and vulnerability management. So, it's your responsibility to choose the environment version used for your jobs or model deployments.
+By default, dependencies are layered on top of base images that Azure Machine Learning provides when you're building environments. You can also use your own base images when you're using environments in Azure Machine Learning. After you install more dependencies on top of the Microsoft-provided images, or bring your own base images, vulnerability management becomes your responsibility.
-By default, dependencies are layered on top of base images provided by Azure Machine Learning when building environments. You can also use your own base images when using environments in Azure Machine Learning. Once you install more dependencies on top of the Microsoft-provided images, or bring your own base images, vulnerability management becomes your responsibility.
+Associated with your Azure Machine Learning workspace is an Azure Container Registry instance that functions as a cache for container images. Any image that materializes is pushed to the container registry. The workspace uses it if experimentation or deployment is triggered for the corresponding environment.
-Associated to your Azure Machine Learning workspace is an Azure Container Registry instance that's used as a cache for container images. Any image materialized, is pushed to the container registry, and used if experimentation or deployment is triggered for the corresponding environment. Azure Machine Learning doesn't delete any image from your container registry, and it's your responsibility to evaluate the need of an image over time. To monitor and maintain environment hygiene, you can use [Microsoft Defender for Container Registry](../defender-for-cloud/defender-for-container-registries-usage.md) to help scan your images for vulnerabilities. To automate your processes based on triggers from Microsoft Defender, see [Automate responses to Microsoft Defender for Cloud triggers](../defender-for-cloud/workflow-automation.md).
+Azure Machine Learning doesn't delete any image from your container registry. You're responsible for evaluating the need for an image over time. To monitor and maintain environment hygiene, you can use [Microsoft Defender for Container Registry](../defender-for-cloud/defender-for-container-registries-usage.md) to help scan your images for vulnerabilities. To automate your processes based on triggers from Microsoft Defender, see [Automate remediation responses](../defender-for-cloud/workflow-automation.md).
## Using a private package repository
-Azure Machine Learning uses Conda and pip for installing python packages. By default, packages are downloaded from public repositories. In case your organization requires packages to be sourced only from private repositories like Azure DevOps feeds, you may override the conda and pip configuration as part of your base images, and compute instance environment configurations. Below example configuration shows how to remove the default channels, and add your own private conda and pip feeds. Consider using [compute instance setup scripts](./how-to-customize-compute-instance.md) for automation.
+Azure Machine Learning uses Conda and Pip to install Python packages. By default, Azure Machine Learning downloads packages from public repositories. If your organization requires you to source packages only from private repositories like Azure DevOps feeds, you can override the Conda and Pip configuration as part of your base images and your environment configurations for compute instances.
+
+The following example configuration shows how to remove the default channels and add your own private Conda and Pip feeds. Consider using [compute instance setup scripts](./how-to-customize-compute-instance.md) for automation.
```dockerfile
RUN conda config --set offline false \
    && conda config --add channels https://my.private.conda.feed/conda/feed \
    && conda config --add repodata_fns <repodata_file_on_your_server>.json
-# Configure pip private indices and ensure your host is trusted by the client
+# Configure Pip private indexes and ensure that the client trusts your host
RUN pip config set global.index https://my.private.pypi.feed/repository/myfeed/pypi/ \
    && pip config set global.index-url https://my.private.pypi.feed/repository/myfeed/simple/
-# In case your feed host isn't secured using SSL
+# In case your feed host isn't secured through SSL
RUN pip config set global.trusted-host http://my.private.pypi.feed/
```
-See [use your own dockerfile](how-to-use-environments.md#use-your-own-dockerfile) to learn how to specify your own base images in Azure Machine Learning. For more details on configuring Conda environments, see [Conda - Creating an environment file manually](https://docs.conda.io/projects/conda/en/4.6.1/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually).
+To learn how to specify your own base images in Azure Machine Learning, see [Create an environment from a Docker build context](how-to-use-environments.md#use-your-own-dockerfile). For more information on configuring Conda environments, see [Creating an environment file manually](https://docs.conda.io/projects/conda/en/4.6.1/user-guide/tasks/manage-environments.html#creating-an-environment-file-manually) on the Conda site.
-## Vulnerability management on compute hosts
+## Vulnerability management on compute hosts
-Managed compute nodes in Azure Machine Learning make use of Microsoft-managed OS VM images and pull the latest updated VM image at the time that a node gets provisioned. This applies to compute instance, compute cluster, [serverless compute](./how-to-use-serverless-compute.md) (preview), and managed inference compute SKUs.
-While OS VM images are regularly patched, compute nodes are not actively scanned for vulnerabilities while in use. For an extra layer of protection, consider network isolation of your compute.
-It's a shared responsibility between you and Microsoft to ensure that your environment is up-to-date and compute nodes use the latest OS version. Nodes that are non-idle can't get updated to the latest VM image. Considerations are slightly different for each compute type, as listed in the following sections.
+Managed compute nodes in Azure Machine Learning use Microsoft-managed OS VM images. When you provision a node, it pulls the latest updated VM image. This behavior applies to compute instance, compute cluster, [serverless compute](./how-to-use-serverless-compute.md) (preview), and managed inference compute options.
+
+Although OS VM images are regularly patched, Azure Machine Learning doesn't actively scan compute nodes for vulnerabilities while they're in use. For an extra layer of protection, consider network isolation of your compute.
+
+Ensuring that your environment is up to date and that compute nodes use the latest OS version is a shared responsibility between you and Microsoft. Nodes that aren't idle can't be updated to the latest VM image. Considerations are slightly different for each compute type, as listed in the following sections.
### Compute instance
-Compute instances get the latest VM images at the time of provisioning. Microsoft releases new VM images on a monthly basis. Once a compute instance is deployed, it does not get actively updated. You could [query an instance's operating system version](how-to-manage-compute-instance.md#audit-and-observe-compute-instance-version). To keep current with the latest software updates and security patches, you could:
-
-1. Recreate a compute instance to get the latest OS image (recommended)
-
- * Data and customizations such as installed packages that are stored on the instance's OS and temporary disks will be lost.
- * [Store notebooks under "User files"](./concept-compute-instance.md#accessing-files) to persist them when recreating your instance.
- * [Mount data](how-to-customize-compute-instance.md) to persist files when recreating your instance.
- * See [Compute Instance release notes](azure-machine-learning-ci-image-release-notes.md) for details on image releases.
-
-1. Alternatively, regularly update OS and Python packages.
-
- * Use Linux package management tools to update the package list with the latest versions.
-
- ```bash
- sudo apt-get update
- ```
-
- * Use Linux package management tools to upgrade packages to the latest versions. Note that package conflicts might occur using this approach.
-
- ```bash
- sudo apt-get upgrade
- ```
-
- * Use Python package management tools to upgrade packages and check for updates.
-
- ```bash
- pip list --outdated
- ```
-
-You may install and run additional scanning software on compute instance to scan for security issues.
-
-* [Trivy](https://github.com/aquasecurity/trivy) may be used to discover OS and Python package level vulnerabilities.
-* [ClamAV](https://www.clamav.net/) may be used to discover malware and comes pre-installed on compute instance.
-* Defender for Server agent installation is currently not supported.
-* Consider using [customization scripts](./how-to-customize-compute-instance.md) for automation. For an example setup script that combines Trivy and ClamAV, see [compute instance sample setup scripts](https://github.com/Azure/azureml-examples/tree/main/setup/setup-ci).
+Compute instances get the latest VM images at the time of provisioning. Microsoft releases new VM images on a monthly basis. After you deploy a compute instance, it isn't actively updated. You can [query an instance's operating system version](how-to-manage-compute-instance.md#audit-and-observe-compute-instance-version). To keep current with the latest software updates and security patches, you can use one of these methods:
+
+* Re-create a compute instance to get the latest OS image (recommended).
+
+ If you use this method, you'll lose data and customizations (such as installed packages) that are stored on the instance's OS and temporary disks.
+
+ When you re-create your instance:
+
+ * [Store notebooks](./concept-compute-instance.md#accessing-files) in the *User files* directory to persist them.
+ * [Mount data](how-to-customize-compute-instance.md) to persist files.
+
+ For more information about image releases, see [Azure Machine Learning compute instance image release notes](azure-machine-learning-ci-image-release-notes.md).
+
+* Regularly update OS and Python packages.
+
+ * Use Linux package management tools to update the package list with the latest versions:
+
+ ```bash
+ sudo apt-get update
+ ```
+
+ * Use Linux package management tools to upgrade packages to the latest versions. Package conflicts might occur when you use this approach.
+
+ ```bash
+ sudo apt-get upgrade
+ ```
+
+ * Use Python package management tools to upgrade packages and check for updates:
+
+ ```bash
+ pip list --outdated
+ ```
+
+You can install and run additional scanning software on the compute instance to scan for security issues:
+
+* Use [Trivy](https://github.com/aquasecurity/trivy) to discover OS and Python package-level vulnerabilities.
+* Use [ClamAV](https://www.clamav.net/) to discover malware. It comes preinstalled on compute instances.
+
+Microsoft Defender for Servers agent installation is currently not supported.
+
+Consider using [customization scripts](./how-to-customize-compute-instance.md) for automation. For an example setup script that combines Trivy and ClamAV, see [Compute instance sample setup scripts](https://github.com/Azure/azureml-examples/tree/main/setup/setup-ci).
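A minimal sketch of the recommended re-creation approach with the Python SDK v2 follows. The workspace details, instance name, and VM size are placeholders, and deleting the instance discards anything stored on its OS and temporary disks, so persist notebooks and data first.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ComputeInstance
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# Delete the existing instance. Data on its OS and temporary disks is lost.
ml_client.compute.begin_delete("my-ci").result()

# Re-create it so that it's provisioned from the latest monthly VM image.
new_instance = ComputeInstance(name="my-ci", size="Standard_DS3_v2")
ml_client.compute.begin_create_or_update(new_instance).result()
```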
### Compute clusters
-Compute clusters automatically upgrade to the latest VM image. If the cluster is configured with min nodes = 0, it automatically upgrades nodes to the latest VM image version when all jobs are completed and the cluster reduces to zero nodes.
-* There are conditions in which cluster nodes do not scale down, and as a result are unable to get the latest VM images.
+Compute clusters automatically upgrade nodes to the latest VM image. If you configure the cluster with `min nodes = 0`, it automatically upgrades nodes to the latest VM image version when all jobs are completed and the cluster reduces to zero nodes.
- * Cluster minimum node count may be set to a value greater than 0.
- * Jobs may be scheduled continuously on your cluster.
+In the following conditions, cluster nodes don't scale down, so they can't get the latest VM image:
-* It is your responsibility to scale non-idle cluster nodes down to get the latest OS VM image updates. Azure Machine Learning does not abort any running workloads on compute nodes to issue VM updates.
+* The cluster's minimum node count is set to a value greater than zero.
+* Jobs are scheduled continuously on your cluster.
- * Temporarily change the minimum nodes to zero and allow the cluster to reduce to zero nodes.
+You're responsible for scaling down non-idle cluster nodes to get the latest OS VM image updates. Azure Machine Learning doesn't stop any running workloads on compute nodes to issue VM updates. Temporarily change the minimum nodes to zero and allow the cluster to reduce to zero nodes.
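A sketch of that temporary change with the Python SDK v2 follows, assuming a cluster named `cpu-cluster` and placeholder workspace details:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# Fetch the existing cluster and remember its configured minimum.
cluster = ml_client.compute.get("cpu-cluster")
original_min = cluster.min_instances

# Allow the cluster to drain to zero nodes so that nodes are re-provisioned
# from the latest VM image the next time they scale up.
cluster.min_instances = 0
ml_client.compute.begin_update(cluster).result()

# After the cluster has scaled down, restore the original minimum.
cluster.min_instances = original_min
ml_client.compute.begin_update(cluster).result()
```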
### Managed online endpoints
-* Managed Online Endpoints automatically receive OS host image updates that include vulnerability fixes. The update frequency of images is at least once a month.
-* Compute nodes get automatically upgraded to the latest VM image version once released. There's no action required on you.
+Managed online endpoints automatically receive OS host image updates that include vulnerability fixes. The update frequency of images is at least once a month.
+
+Compute nodes are automatically upgraded to the latest VM image version when that version is released. You don't need to take any action.
+
+### Customer-managed Kubernetes clusters
+
+[Kubernetes compute](how-to-attach-kubernetes-anywhere.md) lets you configure Kubernetes clusters to train, perform inference, and manage models in Azure Machine Learning.
+
+Because you manage the environment with Kubernetes, management of both OS VM vulnerabilities and container image vulnerabilities is your responsibility.
-### Customer managed Kubernetes clusters
+Azure Machine Learning frequently publishes new versions of Azure Machine Learning extension container images in Microsoft Artifact Registry. Microsoft is responsible for ensuring that new image versions are free from vulnerabilities. [Each release](https://github.com/Azure/AML-Kubernetes/blob/master/docs/release-notes.md) fixes vulnerabilities.
-[Kubernetes compute](how-to-attach-kubernetes-anywhere.md) lets you configure Kubernetes clusters to train, inference, and manage models in Azure Machine Learning.
-* Because you manage the environment with Kubernetes, both OS VM vulnerabilities and container image vulnerability management is your responsibility.
-* Azure Machine Learning frequently publishes new versions of Azure Machine Learning extension container images into Microsoft Container Registry. It's Microsoft's responsibility to ensure new image versions are free from vulnerabilities. Vulnerabilities are fixed with [each release](https://github.com/Azure/AML-Kubernetes/blob/master/docs/release-notes.md).
-* When your clusters run jobs without interruption, running jobs may run outdated container image versions. Once you upgrade the amlarc extension to a running cluster, newly submitted jobs will start to use the latest image version. When upgrading the AMLArc extension to its latest version, clean up the old container image versions from the clusters as required.
-* Observability on whether your Azure Arc cluster is running the latest version of AMLArc, you can find via the Azure portal. Under your Arc resource of the type 'Kubernetes - Azure Arc', see 'Extensions' to find the version of the AMLArc extension.
+When your clusters run jobs without interruption, running jobs might run outdated container image versions. After you upgrade the `amlarc` extension to a running cluster, newly submitted jobs start to use the latest image version. When you're upgrading the `amlarc` extension to its latest version, clean up the old container image versions from the clusters as required.
+To observe whether your Azure Arc cluster is running the latest version of `amlarc`, use the Azure portal. Under your Azure Arc resource of the type **Kubernetes - Azure Arc**, go to **Extensions** to find the version of the `amlarc` extension.
-## Automated ML and Designer environments
+## AutoML and Designer environments
-For code-based training experiences, you control which Azure Machine Learning environment is used. With AutoML and Designer, the environment is encapsulated as part of the service. These types of jobs can run on computes configured by you, allowing for extra controls such as network isolation.
+For code-based training experiences, you control which Azure Machine Learning environment to use. With AutoML and the designer, the environment is encapsulated as part of the service. These types of jobs can run on computes that you configure, to allow for extra controls such as network isolation.
-* Automated ML jobs run on environments that layer on top of Azure Machine Learning [base docker images](https://github.com/Azure/AzureML-Containers).
+AutoML jobs run on environments that layer on top of Azure Machine Learning [base Docker images](https://github.com/Azure/AzureML-Containers).
-* Designer jobs are compartmentalized into [Components](concept-component.md). Each component has its own environment that layers on top of the Azure Machine Learning base docker images. For more information on components, see the [Component reference](./component-reference-v2/component-reference-v2.md).
+Designer jobs are compartmentalized into [components](concept-component.md). Each component has its own environment that layers on top of the Azure Machine Learning base Docker images. For more information on components, see the [component reference](./component-reference-v2/component-reference-v2.md).
## Next steps
-* [Azure Machine Learning Base Images Repository](https://github.com/Azure/AzureML-Containers)
+* [Azure Machine Learning repository for base images](https://github.com/Azure/AzureML-Containers)
* [Data Science Virtual Machine release notes](./data-science-virtual-machine/release-notes.md)
-* [Azure Machine Learning Python SDK Release Notes](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ml/azure-ai-ml/CHANGELOG.md)
-* [Machine learning enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
+* [Azure Machine Learning Python SDK release notes](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ml/azure-ai-ml/CHANGELOG.md)
+* [Azure Machine Learning best practices for enterprise security](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-enterprise-security)
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
rule_scale_in = ScaleRule(
    metric_resource_uri = deployment.id,
    time_grain = datetime.timedelta(minutes = 1),
    statistic = "Average",
- operator = "less Than",
+ operator = "LessThan",
time_aggregation = "Last", time_window = datetime.timedelta(minutes = 5), threshold = 30
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
Previously updated : 10/19/2022
Last updated : 01/25/2024

# Create an Azure Machine Learning compute cluster

[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-Learn how to create and manage a [compute cluster](concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace.
+This article explains how to create and manage a [compute cluster](concept-compute-target.md#azure-machine-learning-compute-managed) in your Azure Machine Learning workspace.
-You can use Azure Machine Learning compute cluster to distribute a training or batch inference process across a cluster of CPU or GPU compute nodes in the cloud. For more information on the VM sizes that include GPUs, see [GPU-optimized virtual machine sizes](../virtual-machines/sizes-gpu.md).
+You can use Azure Machine Learning compute cluster to distribute a training or batch inference process across a cluster of CPU or GPU compute nodes in the cloud. For more information on the VM sizes that include GPUs, see [GPU-optimized virtual machine sizes](../virtual-machines/sizes-gpu.md).
-In this article, learn how to:
+Learn how to:
-* Create a compute cluster
-* Lower your compute cluster cost with low priority VMs
-* Set up a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the cluster
+* Create a compute cluster.
+* Lower your compute cluster cost with low priority VMs.
+* Set up a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the cluster.
[!INCLUDE [serverless compute](./includes/serverless-compute.md)]

## Prerequisites
-* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. For more information, see [Manage Azure Machine Learning workspaces](how-to-manage-workspace.md).
* The [Azure CLI extension for Machine Learning service (v2)](how-to-configure-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ai-ml-readme), or the [Azure Machine Learning Visual Studio Code extension](how-to-setup-vs-code.md).
In this article, learn how to:
[!INCLUDE [connect ws v2](includes/machine-learning-connect-ws-v2.md)]

## What is a compute cluster?

Azure Machine Learning compute cluster is a managed-compute infrastructure that allows you to easily create a single-node or multi-node compute. The compute cluster is a resource that can be shared with other users in your workspace. The compute scales up automatically when a job is submitted, and can be put in an Azure Virtual Network. Compute cluster also supports **no public IP** deployment in a virtual network. The compute executes in a containerized environment and packages your model dependencies in a [Docker container](https://www.docker.com/why-docker).
-Compute clusters can run jobs securely in either a [managed virtual network](how-to-managed-network.md) or an [Azure virtual network](how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
+Compute clusters can run jobs securely in either a [managed virtual network](how-to-managed-network.md) or an [Azure virtual network](how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
## Limitations
-* Compute clusters can be created in a different region than your workspace. This functionality is only available for __compute clusters__, not compute instances.
+* Compute clusters can be created in a different region than your workspace. This functionality is only available for **compute clusters**, not compute instances.
> [!WARNING]
- > When using a compute cluster in a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
+ > When using a compute cluster in a different region than your workspace or datastores, you might see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
* Azure Machine Learning Compute has default limits, such as the number of cores that can be allocated. For more information, see [Manage and request quotas for Azure resources](how-to-manage-quotas.md).
-* Azure allows you to place _locks_ on resources, so that they can't be deleted or are read only. __Do not apply resource locks to the resource group that contains your workspace__. Applying a lock to the resource group that contains your workspace prevents scaling operations for Azure Machine Learning compute clusters. For more information on locking resources, see [Lock resources to prevent unexpected changes](../azure-resource-manager/management/lock-resources.md).
+* Azure allows you to place *locks* on resources, so that they can't be deleted or are read only. **Do not apply resource locks to the resource group that contains your workspace**. Applying a lock to the resource group that contains your workspace prevents scaling operations for Azure Machine Learning compute clusters. For more information on locking resources, see [Lock resources to prevent unexpected changes](../azure-resource-manager/management/lock-resources.md).
## Create
+**Time estimate**: Approximately five minutes.
+ > [!NOTE]
> If you use serverless compute, you don't need to create a compute cluster.
-**Time estimate**: Approximately 5 minutes.
+Azure Machine Learning Compute can be reused across runs. The compute can be shared with other users in the workspace and is retained between runs, automatically scaling nodes up or down based on the number of runs submitted, and the `max_nodes` set on your cluster. The `min_nodes` setting controls the minimum nodes available.
-Azure Machine Learning Compute can be reused across runs. The compute can be shared with other users in the workspace and is retained between runs, automatically scaling nodes up or down based on the number of runs submitted, and the max_nodes set on your cluster. The min_nodes setting controls the minimum nodes available.
-
-The dedicated cores per region per VM family quota and total regional quota, which applies to compute cluster creation, is unified and shared with Azure Machine Learning training compute instance quota.
+The dedicated cores per-region, per-VM-family quota and the total regional quota, which apply to compute cluster creation, are unified and shared with the Azure Machine Learning training compute instance quota.
[!INCLUDE [min-nodes-note](includes/machine-learning-min-nodes.md)]

The compute autoscales down to zero nodes when it isn't used. Dedicated VMs are created to run your jobs as needed.

Use the following examples to create a compute cluster:
-
+ # [Python SDK](#tab/python)
-To create a persistent Azure Machine Learning Compute resource in Python, specify the **size** and **max_instances** properties. Azure Machine Learning then uses smart defaults for the other properties.
-
-* *size**: The VM family of the nodes created by Azure Machine Learning Compute.
-* **max_instances*: The max number of nodes to autoscale up to when you run a job on Azure Machine Learning Compute.
+To create a persistent Azure Machine Learning Compute resource in Python, specify the `size` and `max_instances` properties. Azure Machine Learning then uses smart defaults for the other properties.
+
+* **size**: The VM family of the nodes created by Azure Machine Learning Compute.
+* **max_instances**: The maximum number of nodes to autoscale up to when you run a job on Azure Machine Learning Compute.
[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]

[!notebook-python[](~/azureml-examples-main/sdk/python/resources/compute/compute.ipynb?name=cluster_basic)]
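The referenced notebook isn't reproduced inline here. A minimal sketch of what such a cluster definition typically looks like follows; the cluster name, VM size, and node count are illustrative, and the workspace details are placeholders.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# size and max_instances are the two properties called out above;
# the remaining properties use smart defaults.
cluster_basic = AmlCompute(
    name="basic-example",
    type="amlcompute",
    size="STANDARD_DS3_v2",
    max_instances=2,
)

ml_client.begin_create_or_update(cluster_basic).result()
```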
-You can also configure several advanced properties when you create Azure Machine Learning Compute. The properties allow you to create a persistent cluster of fixed size, or within an existing Azure Virtual Network in your subscription. See the [AmlCompute class](/python/api/azure-ai-ml/azure.ai.ml.entities.amlcompute) for details.
+You can also configure several advanced properties when you create Azure Machine Learning Compute. The properties allow you to create a persistent cluster of fixed size, or within an existing Azure Virtual Network in your subscription. See the [AmlCompute class](/python/api/azure-ai-ml/azure.ai.ml.entities.amlcompute) for details.
> [!WARNING]
-> When setting the `location` parameter, if it is a different region than your workspace or datastores you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
+> When setting the `location` parameter, if it's a different region than your workspace or datastores, you might see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
# [Azure CLI](#tab/azure-cli)
Where the file *create-cluster.yml* is:
:::code language="yaml" source="~/azureml-examples-main/cli/resources/compute/cluster-location.yml"::: - > [!WARNING]
-> When using a compute cluster in a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
-
+> When you use a compute cluster in a different region than your workspace or datastores, you might see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
# [Studio](#tab/azure-studio)
-Create a single- or multi- node compute cluster for your training, batch inferencing or reinforcement learning workloads.
+Create a single-node or multi-node compute cluster for your training, batch inference, or reinforcement learning workloads.
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com).
-
-1. Under __Manage__, select __Compute__.
-1. If you have no compute resources, select **Create** in the middle of the page.
+
+1. Under **Manage**, select **Compute**.
+
+1. If you have no compute resources, select **Create** in the middle of the page.
- :::image type="content" source="media/how-to-create-attach-studio/create-compute-target.png" alt-text="Screenshot that shows creating a compute target":::
+ :::image type="content" source="media/how-to-create-attach-studio/create-compute-target.png" alt-text="Screenshot that shows the Create button to create a compute target.":::
1. If you see a list of compute resources, select **+New** above the list.
- :::image type="content" source="media/how-to-create-attach-studio/select-new.png" alt-text="Select new":::
+ :::image type="content" source="media/how-to-create-attach-studio/select-new.png" alt-text="Screenshot that shows the New button to create the resource.":::
-1. In the tabs at the top, select __Compute cluster__
+1. In the tabs at the top, select **Compute cluster**.
1. Fill out the form as follows: |Field |Description | |||
- | Location | The Azure region where the compute cluster is created. By default, this is the same location as the workspace. If you don't have sufficient quota in the default region, switch to a different region for more options. </br>When using a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it. |
- |Virtual machine type | Choose CPU or GPU. This type can't be changed after creation |
- |Virtual machine priority | Choose **Dedicated** or **Low priority**. Low priority virtual machines are cheaper but don't guarantee the compute nodes. Your job may be preempted.
- |Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) |
+ | Location | The Azure region where the compute cluster is created. By default, this is the same location as the workspace. If you don't have sufficient quota in the default region, switch to a different region for more options. <br>When using a different region than your workspace or datastores, you might see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it. |
+ |Virtual machine type | Choose CPU or GPU. This type can't be changed after creation. |
+ |Virtual machine priority | Choose **Dedicated** or **Low priority**. Low priority virtual machines are cheaper but don't guarantee the compute nodes. Your job might be preempted. |
+ |Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines) |
1. Select **Next** to proceed to **Advanced Settings** and fill out the form as follows: |Field |Description | |||
- |Compute name | * Name is required and must be between 3 to 24 characters long.<br><br> * Valid characters are upper and lower case letters, digits, and the **-** character.<br><br> * Name must start with a letter<br><br> * Name needs to be unique across all existing computes within an Azure region. You see an alert if the name you choose isn't unique<br><br> * If **-** character is used, then it needs to be followed by at least one letter later in the name |
+ |Compute name | * Name is required and must be between 3 to 24 characters long.<br><br> * Valid characters are upper and lower case letters, digits, and the **-** character.<br><br> * Name must start with a letter. <br><br> * Name needs to be unique across all existing computes within an Azure region. You see an alert if the name you choose isn't unique. <br><br> * If **-** character is used, then it needs to be followed by at least one letter later in the name. |
|Minimum number of nodes | Minimum number of nodes that you want to provision. If you want a dedicated number of nodes, set that count here. Save money by setting the minimum to 0, so you don't pay for any nodes when the cluster is idle. |
|Maximum number of nodes | Maximum number of nodes that you want to provision. The compute automatically scales to a maximum of this node count when a job is submitted. |
| Idle seconds before scale down | Idle time before scaling the cluster down to the minimum node count. |
- | Enable SSH access | Use the same instructions as [Enable SSH access](#enable-ssh-access) for a compute instance (above). |
- |Advanced settings | Optional. Configure network settings.<br><br> * If an *Azure Virtual Network*, Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside the network). For more information, see [network requirements](./how-to-secure-training-vnet.md).<br><br> * If an *Azure Machine Learning managed network*, the compute cluster is automatically in the managed network. For more information, see [managed computes with a managed network](how-to-managed-network-compute.md).<br><br> * No public IP configures whether the compute cluster has a public IP address when in a network.<br><br> * Assign a [managed identity](#set-up-managed-identity) to grant access to resources.
-
-1. Select __Create__.
+ | Enable SSH access | Use the same instructions as [Enable SSH access](#enable-ssh-access) for a compute instance. |
+ |Advanced settings | Optional. Configure network settings.<br><br> * If an *Azure Virtual Network*, specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute cluster inside the network. For more information, see [network requirements](how-to-secure-training-vnet.md).<br><br> * If an *Azure Machine Learning managed network*, the compute cluster is automatically in the managed network. For more information, see [managed computes with a managed network](how-to-managed-network-compute.md).<br><br> * No public IP configures whether the compute cluster has a public IP address when in a network.<br><br> * Assign a [managed identity](#set-up-managed-identity) to grant access to resources. |
+1. Select **Create**.
### Enable SSH access
SSH access is disabled by default and can't be changed after creation.
- ## Lower your compute cluster cost with low priority VMs
+## Lower your compute cluster cost with low priority VMs
-You may also choose to use [low-priority VMs](how-to-manage-optimize-cost.md#low-pri-vm) to run some or all of your workloads. These VMs don't have guaranteed availability and may be preempted while in use. You have to restart a preempted job.
+You can also choose to use [low-priority VMs](how-to-manage-optimize-cost.md#low-pri-vm) to run some or all of your workloads. These VMs don't have guaranteed availability and might be preempted while in use. You have to restart a preempted job.
-Using Azure Low Priority Virtual Machines allows you to take advantage of Azure's unused capacity at a significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure evicts Azure Low Priority Virtual Machines. Therefore, Azure Low Priority Virtual Machine is great for workloads that can handle interruptions. The amount of available capacity can vary based on size, region, time of day, and more. When deploying Azure Low Priority Virtual Machines, Azure allocates the VMs if there's capacity available, but there's no SLA for these VMs. An Azure Low Priority Virtual Machine offers no high availability guarantees. At any point in time when Azure needs the capacity back, the Azure infrastructure evicts Azure Low Priority Virtual Machines
+Azure Low Priority Virtual Machines let you take advantage of Azure's unused capacity at a significant cost savings. Because these VMs have no SLA and no high availability guarantee, the Azure infrastructure evicts them whenever Azure needs the capacity back, so they're best suited to workloads that can handle interruptions. The amount of available capacity can vary based on size, region, time of day, and more. When you deploy Azure Low Priority Virtual Machines, Azure allocates the VMs only if capacity is available.
Use any of these ways to specify a low-priority VM:
-
+ # [Python SDK](#tab/python) [!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)] [!notebook-python[](~/azureml-examples-main/sdk/python/resources/compute/compute.ipynb?name=cluster_low_pri)]
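For reference, here's a sketch of a low-priority cluster definition (not the exact notebook cell), assuming an existing `ml_client`; the cluster name and VM size are placeholders.
```python
from azure.ai.ml.entities import AmlCompute

# Low-priority nodes are cheaper but can be preempted; preempted jobs must be restarted.
cluster_low_pri = AmlCompute(
    name="low-pri-example",           # placeholder cluster name
    size="STANDARD_DS3_v2",
    min_instances=0,
    max_instances=2,
    idle_time_before_scale_down=120,
    tier="low_priority",              # request low-priority VMs instead of dedicated ones
)
ml_client.begin_create_or_update(cluster_low_pri).result()
```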
-
# [Azure CLI](#tab/azure-cli) [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] Set the `vm-priority`:
-
+ ```azurecli az ml compute create -f create-cluster.yml ```
Where the file *create-cluster.yml* is:
:::code language="yaml" source="~/azureml-examples-main/cli/resources/compute/cluster-low-priority.yml"::: > [!NOTE]
-> When you use [serverless compute](./how-to-use-serverless-compute.md), you don't need to create a compute cluster. To specify a low-priority serverless compute, set the `job_tier` to `Spot` in the [queue settings](./how-to-use-serverless-compute.md#configure-properties-for-command-jobs).
+> If you use [serverless compute](./how-to-use-serverless-compute.md), you don't need to create a compute cluster. To specify a low-priority serverless compute, set the `job_tier` to `Spot` in the [queue settings](how-to-use-serverless-compute.md#configure-properties-for-command-jobs).
# [Studio](#tab/azure-studio) In the studio, choose **Low Priority** when you create a VM.
-
+ ## Set up managed identity
There's a chance that some users who created their Azure Machine Learning worksp
### Stuck at resizing
-If your Azure Machine Learning compute cluster appears stuck at resizing (0 -> 0) for the node state, Azure resource locks may be the cause.
+If your Azure Machine Learning compute cluster appears stuck at resizing (0 -> 0) for the node state, Azure resource locks might be the cause.
[!INCLUDE [resource locks](includes/machine-learning-resource-lock.md)]
-## Next steps
+## Next steps
Use your compute cluster to:
-* [Submit a training run](./how-to-train-model.md)
-* [Run batch inference](./tutorial-pipeline-batch-scoring-classification.md).
+* [Submit a training run](./how-to-train-model.md)
+* [Run batch inference](./tutorial-pipeline-batch-scoring-classification.md)
machine-learning How To Create Image Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md
To accelerate labeling tasks, on the **ML assisted labeling** page, you can trig
At the start of your labeling project, the items are shuffled into a random order to reduce potential bias. However, the trained model reflects any biases that are present in the dataset. For example, if 80 percent of your items are of a single class, then approximately 80 percent of the data used to train the model lands in that class.
-To enable assisted labeling, select **Enable ML assisted labeling** and specify a GPU. If you don't have a GPU in your workspace, a GPU cluster is created for you and added to your workspace. The cluster is created with a minimum of zero nodes, which means it costs nothing when not in use.
+To enable assisted labeling, select **Enable ML assisted labeling** and specify a GPU. If you don't have a GPU in your workspace, a GPU cluster (resource name: DefLabelNC6v3, VM size: Standard_NC6s_v3) is created for you and added to your workspace. The cluster is created with a minimum of zero nodes, which means it costs nothing when not in use.
ML-assisted labeling consists of two phases:
machine-learning How To Deploy Online Endpoint With Secret Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoint-with-secret-injection.md
reviewer: msakande Last updated 01/10/2024 -+ # Access secrets from online deployment using secret injection (preview)
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md
Previously updated : 11/16/2022 Last updated : 01/29/2024 monikerRange: 'azureml-api-2 || azureml-api-1'
monikerRange: 'azureml-api-2 || azureml-api-1'
[!INCLUDE [managed network](includes/managed-vnet-note.md)]
-In this article, you learn how to use Azure Machine Learning studio in a virtual network. The studio includes features like AutoML, the designer, and data labeling.
+This article explains how to use Azure Machine Learning studio in a virtual network. The studio includes features like AutoML, the designer, and data labeling.
-Some of the studio's features are disabled by default in a virtual network. To re-enable these features, you must enable managed identity for storage accounts you intend to use in the studio.
+Some of the studio's features are disabled by default in a virtual network. To re-enable these features, you must enable managed identity for storage accounts you intend to use in the studio.
The following operations are disabled by default in a virtual network:
In this article, you learn how to:
> - Access the studio from a resource inside of a virtual network. > - Understand how the studio impacts storage security.
-> [!TIP]
-> This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
->
-> * [Virtual network overview](how-to-network-security-overview.md)
-> * [Secure the workspace resources](how-to-secure-workspace-vnet.md)
-> * [Secure the training environment](how-to-secure-training-vnet.md)
-> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
-> * [Use custom DNS](how-to-custom-dns.md)
-> * [Use a firewall](how-to-access-azureml-behind-firewall.md)
-> * [Virtual network overview](how-to-network-security-overview.md)
-> * [Secure the workspace resources](./v1/how-to-secure-workspace-vnet.md)
-> * [Secure the training environment](./v1/how-to-secure-training-vnet.md)
-> * [Secure the inference environment](./v1/how-to-secure-inferencing-vnet.md)
-> * [Use custom DNS](how-to-custom-dns.md)
-> * [Use a firewall](how-to-access-azureml-behind-firewall.md)
->
-> For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
- ## Prerequisites
-+ Read the [Network security overview](how-to-network-security-overview.md) to understand common virtual network scenarios and architecture.
+* Read the [Network security overview](how-to-network-security-overview.md) to understand common virtual network scenarios and architecture.
-+ A pre-existing virtual network and subnet to use.
+* A pre-existing virtual network and subnet to use.
:::moniker range="azureml-api-2"
-+ An existing [Azure Machine Learning workspace with a private endpoint](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint).
+* An existing [Azure Machine Learning workspace with a private endpoint](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint).
-+ An existing [Azure storage account added your virtual network](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
+* An existing [Azure storage account added to your virtual network](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
:::moniker-end :::moniker range="azureml-api-1"
-+ An existing [Azure Machine Learning workspace with a private endpoint](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint).
+* An existing [Azure Machine Learning workspace with a private endpoint](v1/how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint).
-+ An existing [Azure storage account added your virtual network](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
+* An existing [Azure storage account added to your virtual network](v1/how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
:::moniker-end
+* For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
+ ## Limitations ### Azure Storage Account
-* When the storage account is in the VNet, there are extra validation requirements when using studio:
+* When the storage account is in the virtual network, there are extra validation requirements to use studio:
- * If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be in the same subnet of the VNet.
- * If the storage account uses a __private endpoint__, the workspace private endpoint and storage private endpoint must be in the same VNet. In this case, they can be in different subnets.
+ * If the storage account uses a [service endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts), the workspace private endpoint and storage service endpoint must be in the same subnet of the VNet.
+ * If the storage account uses a [private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts), the workspace private endpoint and storage private endpoint must be in the same VNet. In this case, they can be in different subnets.
### Designer sample pipeline
-There's a known issue where user can't run sample pipeline in Designer homepage. This problem occurs because the sample dataset used in the sample pipeline is an Azure Global dataset. It can't be accessed from a virtual network environment.
+There's a known issue where users can't run a sample pipeline in the designer homepage. This problem occurs because the sample dataset used in the sample pipeline is an Azure Global dataset. It can't be accessed from a virtual network environment.
To resolve this issue, use a public workspace to run the sample pipeline. Or replace the sample dataset with your own dataset in the workspace within a virtual network.
Use the following steps to enable access to data stored in Azure Blob and File s
> [!TIP] > The first step is not required for the default storage account for the workspace. All other steps are required for *any* storage account behind the VNet and used by the workspace, including the default storage account.
-1. **If the storage account is the *default* storage for your workspace, skip this step**. If it isn't the default, __Grant the workspace managed identity the 'Storage Blob Data Reader' role__ for the Azure storage account so that it can read data from blob storage.
+1. **If the storage account is the *default* storage for your workspace, skip this step**. If it isn't the default, **grant the workspace managed identity the Storage Blob Data Reader role** for the Azure storage account so that it can read data from blob storage.
For more information, see the [Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) built-in role.
-1. __Grant the workspace managed identity the 'Reader' role for storage private endpoints__. If your storage service uses a __private endpoint__, grant the workspace's managed identity __Reader__ access to the private endpoint. The workspace's managed identity in Microsoft Entra ID has the same name as your Azure Machine Learning workspace. A private endpoint is necessary for both __blob and file__ storage types.
+1. **Grant the workspace managed identity the Reader role for storage private endpoints**. If your storage service uses a private endpoint, grant the workspace's managed identity **Reader** access to the private endpoint. The workspace's managed identity in Microsoft Entra ID has the same name as your Azure Machine Learning workspace. A private endpoint is necessary for both blob and file storage types.
> [!TIP]
- > Your storage account may have multiple private endpoints. For example, one storage account may have separate private endpoint for blob, file, and dfs (Azure Data Lake Storage Gen2). Add the managed identity to all these endpoints.
+ > Your storage account might have multiple private endpoints. For example, one storage account might have separate private endpoints for blob, file, and dfs (Azure Data Lake Storage Gen2). Add the managed identity to all these endpoints.
For more information, see the [Reader](../role-based-access-control/built-in-roles.md#reader) built-in role. <a id='enable-managed-identity'></a>
-1. __Enable managed identity authentication for default storage accounts__. Each Azure Machine Learning workspace has two default storage accounts, a default blob storage account and a default file store account. Both are defined when you create your workspace. You can also set new defaults in the __Datastore__ management page.
+1. **Enable managed identity authentication for default storage accounts**. Each Azure Machine Learning workspace has two default storage accounts, a default blob storage account and a default file store account. Both are defined when you create your workspace. You can also set new defaults in the Datastore management page.
- ![Screenshot showing where default datastores can be found](./media/how-to-enable-studio-virtual-network/default-datastores.png)
+ :::image type="content" source="media/how-to-enable-studio-virtual-network/default-datastores.png" alt-text="Screenshot showing where default datastores can be found." lightbox="media/how-to-enable-studio-virtual-network/default-datastores.png":::
The following table describes why managed identity authentication is used for your workspace default storage accounts.
Use the following steps to enable access to data stored in Azure Blob and File s
|Workspace default blob storage| Stores model assets from the designer. Enable managed identity authentication on this storage account to deploy models in the designer. If managed identity authentication is disabled, the user's identity is used to access data stored in the blob. <br> <br> You can visualize and run a designer pipeline if it uses a non-default datastore that has been configured to use managed identity. However, if you try to deploy a trained model without managed identity enabled on the default datastore, deployment fails regardless of any other datastores in use.|
|Workspace default file store| Stores AutoML experiment assets. Enable managed identity authentication on this storage account to submit AutoML experiments. |
-1. __Configure datastores to use managed identity authentication__. After you add an Azure storage account to your virtual network with either a [service endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts) or [private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts), you must configure your datastore to use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) authentication. Doing so lets the studio access data in your storage account.
+1. **Configure datastores to use managed identity authentication**. After you add an Azure storage account to your virtual network with either a [service endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts) or [private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts), you must configure your datastore to use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) authentication. Doing so lets the studio access data in your storage account.
Azure Machine Learning uses [datastore](concept-data.md#datastore) to connect to storage accounts. When creating a new datastore, use the following steps to configure a datastore to use managed identity authentication:
- 1. In the studio, select __Datastores__.
-
- 1. To update an existing datastore, select the datastore and select __Update credentials__.
+ 1. In the studio, select **Datastores**.
- To create a new datastore, select __+ New datastore__.
+ 1. To create a new datastore, select **+ Create**.
- 1. In the datastore settings, select __Yes__ for __Use workspace managed identity for data preview and profiling in Azure Machine Learning studio__.
+ 1. In the datastore settings, turn on the switch for **Use workspace managed identity for data preview and profiling in Azure Machine Learning studio**.
- ![Screenshot showing how to enable managed workspace identity](./media/how-to-enable-studio-virtual-network/enable-managed-identity.png)
+ :::image type="content" source="media/how-to-enable-studio-virtual-network/enable-managed-identity.png" alt-text="Screenshot showing how to enable managed workspace identity." lightbox="media/how-to-enable-studio-virtual-network/enable-managed-identity.png":::
- 1. In the __Networking__ settings for the __Azure Storage Account__, add the Microsoft.MachineLearningService/workspaces __Resource type__, and set the __Instance name__ to the workspace.
+ 1. In the **Networking** settings for the Azure Storage Account, add the `Microsoft.MachineLearningServices/workspaces` **Resource type**, and set the **Instance name** to the workspace.
- These steps add the workspace's managed identity as a __Reader__ to the new storage service using Azure RBAC. __Reader__ access allows the workspace to view the resource, but not make changes.
+ These steps add the workspace's managed identity as a Reader to the new storage service using Azure RBAC. Reader access allows the workspace to view the resource, but not make changes.
## Datastore: Azure Data Lake Storage Gen1
When using Azure Data Lake Storage Gen1 as a datastore, you can only use POSIX-s
When using Azure Data Lake Storage Gen2 as a datastore, you can use both Azure RBAC and POSIX-style access control lists (ACLs) to control data access inside of a virtual network.
-__To use Azure RBAC__, follow the steps in the [Datastore: Azure Storage Account](#datastore-azure-storage-account) section of this article. Data Lake Storage Gen2 is based on Azure Storage, so the same steps apply when using Azure RBAC.
+**To use Azure RBAC**, follow the steps in the [Datastore: Azure Storage Account](#datastore-azure-storage-account) section of this article. Data Lake Storage Gen2 is based on Azure Storage, so the same steps apply when using Azure RBAC.
-__To use ACLs__, the workspace's managed identity can be assigned access just like any other security principal. For more information, see [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
+**To use ACLs**, the workspace's managed identity can be assigned access just like any other security principal. For more information, see [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
## Datastore: Azure SQL Database
After you create a SQL contained user, grant permissions to it by using the [GRA
When using the Azure Machine Learning designer intermediate component output, you can specify the output location for any component in the designer. Use this output to store intermediate datasets in a separate location for security, logging, or auditing purposes. To specify output, use the following steps: 1. Select the component whose output you'd like to specify.
-1. In the component settings pane that appears to the right, select __Output settings__.
+1. In the component settings pane, select **Output settings**.
1. Specify the datastore you want to use for each component output. Make sure that you have access to the intermediate storage accounts in your virtual network. Otherwise, the pipeline fails. [Enable managed identity authentication](#enable-managed-identity) for intermediate storage accounts to visualize output data.+ ## Access the studio from a resource inside the VNet
-If you're accessing the studio from a resource inside of a virtual network (for example, a compute instance or virtual machine), you must allow outbound traffic from the virtual network to the studio.
+If you're accessing the studio from a resource inside of a virtual network (for example, a compute instance or virtual machine), you must allow outbound traffic from the virtual network to the studio.
-For example, if you're using network security groups (NSG) to restrict outbound traffic, add a rule to a __service tag__ destination of __AzureFrontDoor.Frontend__.
+For example, if you're using network security groups (NSG) to restrict outbound traffic, add a rule to a **service tag** destination of `AzureFrontDoor.Frontend`.
## Firewall settings
-Some storage services, such as Azure Storage Account, have firewall settings that apply to the public endpoint for that specific service instance. Usually this setting allows you to allow/disallow access from specific IP addresses from the public internet. __This is not supported__ when using Azure Machine Learning studio. It's supported when using the Azure Machine Learning SDK or CLI.
+Some storage services, such as Azure Storage Account, have firewall settings that apply to the public endpoint for that specific service instance. Usually this setting allows you to allow/disallow access from specific IP addresses from the public internet. **This is not supported** when using Azure Machine Learning studio. It's supported when using the Azure Machine Learning SDK or CLI.
> [!TIP] > Azure Machine Learning studio is supported when using the Azure Firewall service. For more information, see [Use your workspace behind a firewall](how-to-access-azureml-behind-firewall.md).+ ## Next steps This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
Title: Log metrics, parameters and files with MLflow
+ Title: Log metrics, parameters, and files with MLflow
description: Enable logging on your ML training runs to monitor real-time run metrics with MLflow, and to help diagnose errors and warnings.
Previously updated : 04/28/2022 Last updated : 01/30/2024
-# Log metrics, parameters and files with MLflow
+# Log metrics, parameters, and files with MLflow
[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)] -
-Azure Machine Learning supports logging and tracking experiments using [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). You can log models, metrics, parameters, and artifacts with MLflow as it supports local mode to cloud portability.
+Azure Machine Learning supports logging and tracking experiments using [MLflow Tracking](https://www.mlflow.org/docs/latest/tracking.html). You can log models, metrics, parameters, and artifacts with MLflow, either locally on your computer or in a cloud environment.
> [!IMPORTANT]
-> Unlike the Azure Machine Learning SDK v1, there is no logging functionality in the Azure Machine Learning SDK for Python (v2). See this guidance to learn how to log with MLflow. If you were using Azure Machine Learning SDK v1 before, we recommend you to start leveraging MLflow for tracking experiments. See [Migrate logging from SDK v1 to MLflow](reference-migrate-sdk-v1-mlflow-tracking.md) for specific guidance.
+> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the Azure Machine Learning SDK for Python (v2). If you used Azure Machine Learning SDK v1 before, we recommend that you leverage MLflow for tracking experiments. See [Migrate logging from SDK v1 to MLflow](reference-migrate-sdk-v1-mlflow-tracking.md) for specific guidance.
-Logs can help you diagnose errors and warnings, or track performance metrics like parameters and model performance. In this article, you learn how to enable logging in the following scenarios:
+Logs can help you diagnose errors and warnings, or track performance metrics like parameters and model performance. This article explains how to enable logging in the following scenarios:
> [!div class="checklist"]
-> * Log metrics, parameters and models when submitting jobs.
-> * Tracking runs when training interactively.
-> * Viewing diagnostic information about training.
+> * Log metrics, parameters, and models when submitting jobs.
+> * Track runs when training interactively.
+> * View diagnostic information about training.
> [!TIP] > This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training jobs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md). ## Prerequisites
-* You must have an Azure Machine Learning workspace. [Create one if you don't have any](quickstart-create-resources.md).
-* You must have `mlflow`, and `azureml-mlflow` packages installed. If you don't, use the following command to install them in your development environment:
+* You must have an Azure Machine Learning workspace. If you don't have one, see [Create workspace resources](quickstart-create-resources.md).
+* You must have the `mlflow` and `azureml-mlflow` packages installed. If you don't, use the following command to install them in your development environment:
```bash pip install mlflow azureml-mlflow ```
-* If you are doing remote tracking (tracking experiments running outside Azure Machine Learning), configure MLflow to track experiments using Azure Machine Learning. See [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md) for more details.
-* To log metrics, parameters, artifacts and models in your experiments in Azure Machine Learning using MLflow, just import MLflow in your script:
+* If you're doing remote tracking (tracking experiments that run outside Azure Machine Learning), configure MLflow to track experiments. For more information, see [Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md).
+
+* To log metrics, parameters, artifacts, and models in your experiments in Azure Machine Learning using MLflow, just import MLflow into your script:
```python import mlflow ```
-### Configuring experiments
+### Configure experiments
-MLflow organizes the information in experiments and runs (in Azure Machine Learning, runs are called Jobs). There are some differences in how to configure them depending on how you are running your code:
+MLflow organizes the information in experiments and runs (in Azure Machine Learning, runs are called jobs). There are some differences in how to configure them depending on how you run your code:
# [Training interactively](#tab/interactive) When training interactively, such as in a Jupyter Notebook, use the following pattern:
-1. Create or set the active experiment.
+1. Create or set the active experiment.
1. Start the job. 1. Use logging methods to log metrics and other information. 1. End the job.
-For example, the following code snippet demonstrates configuring the experiment, and then logging during a job:
+For example, the following code snippet configures the experiment, and then logs during a job:
```python import mlflow
mlflow.end_run()
``` > [!TIP]
-> Technically you don't have to call `start_run()` as a new run is created if one doesn't exist and you call a logging API. In that case, you can use `mlflow.active_run()` to retrieve the run once currently being used. For more information, see [mlflow.active_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.active_run).
+> Technically you don't have to call `start_run()` because a new run is created if one doesn't exist and you call a logging API. In that case, you can use `mlflow.active_run()` to retrieve the run currently being used. For more information, see [mlflow.active_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.active_run).
You can also use the context manager paradigm:
with mlflow.start_run() as run:
pass ```
-When you start a new run with `mlflow.start_run`, it may be useful to indicate the parameter `run_name` which will then translate to the name of the run in Azure Machine Learning user interface and help you identify the run quicker:
+When you start a new run with `mlflow.start_run`, it might be useful to indicate the parameter `run_name`, which then translates to the name of the run in Azure Machine Learning user interface and helps you identify the run quicker:
```python with mlflow.start_run(run_name="iris-classifier-random-forest") as run:
For more information on MLflow logging APIs, see the [MLflow reference](https://
# [Training with jobs](#tab/jobs)
-When running training jobs in Azure Machine Learning, you don't need to call `mlflow.start_run` as runs are automatically started. Hence, you can use mlflow tracking capabilities directly in your training scripts:
+When running training jobs in Azure Machine Learning, you don't need to call `mlflow.start_run` because runs are automatically started. Hence, you can use mlflow tracking capabilities directly in your training scripts:
```python import mlflow
mlflow.log_metric('anothermetric',1)
-## Logging parameters
+## Log parameters
MLflow supports logging the parameters used by your experiments. Parameters can be of any type, and can be logged using the following syntax:
params = {
mlflow.log_params(params) ```
-## Logging metrics
+## Log metrics
Metrics, as opposed to parameters, are always numeric. The following table describes how to log specific numeric types:
-|Logged Value|Example code| Notes|
+|Logged value|Example code| Notes|
|-|-|-| |Log a numeric value (int or float) | `mlflow.log_metric("my_metric", 1)`| |
-|Log a numeric value (int or float) over time | `mlflow.log_metric("my_metric", 1, step=1)`| Use parameter `step` to indicate the step at which you are logging the metric value. It can be any integer number. It defaults to zero. |
+|Log a numeric value (int or float) over time | `mlflow.log_metric("my_metric", 1, step=1)`| Use parameter `step` to indicate the step at which you log the metric value. It can be any integer number. It defaults to zero. |
|Log a boolean value | `mlflow.log_metric("my_metric", 0)`| 0 = False, 1 = True| > [!IMPORTANT]
-> __Performance considerations:__ If you need to log multiple metrics (or multiple values for the same metric) avoid making calls to `mlflow.log_metric` in loops. Better performance can be achieved by logging batch of metrics. Use the method `mlflow.log_metrics` which accepts a dictionary with all the metrics you want to log at once or use `MLflowClient.log_batch` which accepts multiple type of elements for logging. See [Logging curves or list of values](#logging-curves-or-list-of-values) for an example.
+> **Performance considerations:** If you need to log multiple metrics (or multiple values for the same metric), avoid making calls to `mlflow.log_metric` in loops. Better performance can be achieved by logging a batch of metrics. Use the method `mlflow.log_metrics`, which accepts a dictionary with all the metrics you want to log at once, or use `MLflowClient.log_batch`, which accepts multiple types of elements for logging. See [Log curves or list of values](#log-curves-or-list-of-values) for an example.
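As a brief illustration of the batch approach, here's a sketch that logs several metrics in one call; the metric names and values are made up:
```python
import mlflow

with mlflow.start_run():
    # One batched call instead of several mlflow.log_metric calls in a loop.
    mlflow.log_metrics({"rmse": 0.27, "mae": 0.21, "r2": 0.83})
```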
-### Logging curves or list of values
+### Log curves or list of values
-Curves (or list of numeric values) can be logged with MLflow by logging the same metric multiple times. The following example shows how to do it:
+Curves (or a list of numeric values) can be logged with MLflow by logging the same metric multiple times. The following example shows how to do it:
```python list_to_log = [1, 2, 3, 2, 1, 2, 3, 2, 1]
client.log_batch(mlflow.active_run().info.run_id,
metrics=[Metric(key="sample_list", value=val, timestamp=int(time.time() * 1000), step=0) for val in list_to_log]) ```
-## Logging images
+## Log images
-MLflow supports two ways of logging images. Both of them persists the given image as an artifact inside of the run.
+MLflow supports two ways of logging images. Both ways persist the given image as an artifact inside of the run.
-|Logged Value|Example code| Notes|
+|Logged value|Example code| Notes|
|-|-|-|
-|Log numpy metrics or PIL image objects|`mlflow.log_image(img, "figure.png")`| `img` should be an instance of `numpy.ndarray` or `PIL.Image.Image`. `figure.png` is the name of the artifact that will be generated inside of the run. It doesn't have to be an existing file.|
-|Log matlotlib plot or image file|` mlflow.log_figure(fig, "figure.png")`| `figure.png` is the name of the artifact that will be generated inside of the run. It doesn't have to be an existing file. |
+|Log numpy metrics or PIL image objects|`mlflow.log_image(img, "figure.png")`| `img` should be an instance of `numpy.ndarray` or `PIL.Image.Image`. `figure.png` is the name of the artifact generated inside of the run. It doesn't have to be an existing file.|
+|Log matplotlib plot or image file|`mlflow.log_figure(fig, "figure.png")`| `figure.png` is the name of the artifact generated inside of the run. It doesn't have to be an existing file. |
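For illustration, here's a sketch that exercises both approaches; the figure contents and artifact names are placeholders:
```python
import matplotlib.pyplot as plt
import numpy as np
import mlflow

with mlflow.start_run():
    # Log a matplotlib figure as an image artifact named figure.png.
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2], [2, 3, 5])
    mlflow.log_figure(fig, "figure.png")

    # Log a raw image passed as a numpy array.
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    mlflow.log_image(img, "random-noise.png")
```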
-## Logging files
+## Log files
In general, files in MLflow are called artifacts. You can log artifacts in multiple ways in MLflow:
-|Logged Value|Example code| Notes|
+|Logged value|Example code| Notes|
|-|-|-|
-|Log text in a text file | `mlflow.log_text("text string", "notes.txt")`| Text is persisted inside of the run in a text file with name `notes.txt`. |
-|Log dictionaries as `JSON` and `YAML` files | `mlflow.log_dict(dictionary, "file.yaml"` | `dictionary` is a dictionary object containing all the structure that you want to persist as `JSON` or `YAML` file. |
+|Log text in a text file | `mlflow.log_text("text string", "notes.txt")`| Text is persisted inside of the run in a text file with name *notes.txt*. |
+|Log dictionaries as JSON and YAML files | `mlflow.log_dict(dictionary, "file.yaml")` | `dictionary` is a dictionary object containing all the structure that you want to persist as a JSON or YAML file. |
|Log a trivial file already existing | `mlflow.log_artifact("path/to/file.pkl")`| Files are always logged in the root of the run. If `artifact_path` is provided, then the file is logged in a folder as indicated in that parameter. |
-|Log all the artifacts in an existing folder | `mlflow.log_artifacts("path/to/folder")`| Folder structure is copied to the run, but the root folder indicated is not included. |
+|Log all the artifacts in an existing folder | `mlflow.log_artifacts("path/to/folder")`| Folder structure is copied to the run, but the root folder indicated isn't included. |
> [!TIP]
-> When __logging large files__ with `log_artifact` or `log_model`, you may encounter time out errors before the upload of the file is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT`. It's default value is `300` (seconds).
+> When you log large files with `log_artifact` or `log_model`, you might encounter timeout errors before the upload of the file is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT`. Its default value is *300* (seconds).
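A short sketch that combines these calls; the file names and contents are placeholders, and the local file is created first so the `log_artifact` call succeeds as written:
```python
import json
import mlflow

with mlflow.start_run():
    # Persist free-form text and a dictionary directly as run artifacts.
    mlflow.log_text("notes about this training run", "notes.txt")
    mlflow.log_dict({"learning_rate": 0.01, "epochs": 10}, "config.yaml")

    # Log an existing local file in the root of the run.
    with open("features.json", "w") as f:
        json.dump({"columns": ["age", "income"]}, f)
    mlflow.log_artifact("features.json")
```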
-## Logging models
+## Log models
-MLflow introduces the concept of "models" as a way to package all the artifacts required for a given model to function. Models in MLflow are always a folder with an arbitrary number of files, depending on the framework used to generate the model. Logging models has the advantage of tracking all the elements of the model as a single entity that can be __registered__ and then __deployed__. On top of that, MLflow models enjoy the benefit of [no-code deployment](how-to-deploy-mlflow-models.md) and can be used with the [Responsible AI dashboard](how-to-responsible-ai-dashboard.md) in studio. Read the article [From artifacts to models in MLflow](concept-mlflow-models.md) for more information.
+MLflow introduces the concept of *models* as a way to package all the artifacts required for a given model to function. Models in MLflow are always a folder with an arbitrary number of files, depending on the framework used to generate the model. Logging models has the advantage of tracking all the elements of the model as a single entity that can be *registered* and then *deployed*. On top of that, MLflow models enjoy the benefit of [no-code deployment](how-to-deploy-mlflow-models.md) and can be used with the [Responsible AI dashboard](how-to-responsible-ai-dashboard.md) in studio. For more information, see [From artifacts to models in MLflow](concept-mlflow-models.md).
-To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For more details about how to log MLflow models see [Logging MLflow models](how-to-log-mlflow-models.md) For migrating existing models to MLflow, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md).
+To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For more information, see [Logging MLflow models](how-to-log-mlflow-models.md). For migrating existing models to MLflow, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md).
> [!TIP]
-> When __logging large models__, you may encounter the error `Failed to flush the queue within 300 seconds`. Usually, it means the operation is timing out before the upload of the model artifacts is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT`.
+> When you log large models, you might encounter the error `Failed to flush the queue within 300 seconds`. Usually, it means the operation is timing out before the upload of the model artifacts is completed. Consider increasing the timeout value by adjusting the environment variable `AZUREML_ARTIFACTS_DEFAULT_TIMEOUT`.
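For illustration, here's a sketch that trains a small scikit-learn model and logs it with the sklearn flavor; the dataset and artifact path are placeholders:
```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=10).fit(X, y)

with mlflow.start_run():
    # Package the trained model and its metadata as an MLflow model inside the run.
    mlflow.sklearn.log_model(model, artifact_path="classifier")
```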
## Automatic logging
-With Azure Machine Learning and MLflow, users can log metrics, model parameters and model artifacts automatically when training a model. Each framework decides what to track automatically for you. A [variety of popular machine learning libraries](https://mlflow.org/docs/latest/tracking.html#automatic-logging) are supported. [Learn more about Automatic logging with MLflow](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog).
+With Azure Machine Learning and MLflow, users can log metrics, model parameters, and model artifacts automatically when training a model. Each framework decides what to track automatically for you. A [variety of popular machine learning libraries](https://mlflow.org/docs/latest/tracking.html#automatic-logging) are supported. [Learn more about Automatic logging with MLflow](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog).
-To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging) insert the following code before your training code:
+To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging), insert the following code before your training code:
```Python mlflow.autolog() ``` > [!TIP]
-> You can control what gets automatically logged with autolog. For instance, if you indicate `mlflow.autolog(log_models=False)`, MLflow will log everything but models for you. Such control is useful in cases where you want to log models manually but still enjoy automatic logging of metrics and parameters. Also notice that some frameworks may disable automatic logging of models if the trained model goes behond specific boundaries. Such behavior depends on the flavor used and we recommend you to view they documentation if this is your case.
+> You can control what gets automatically logged with autolog. For instance, if you indicate `mlflow.autolog(log_models=False)`, MLflow logs everything but models for you. Such control is useful in cases where you want to log models manually but still enjoy automatic logging of metrics and parameters. Also notice that some frameworks might disable automatic logging of models if the trained model goes beyond specific boundaries. Such behavior depends on the flavor used, and we recommend that you check its documentation if this applies to your case.
## View jobs/runs information with MLflow
tags = run.data.tags
``` >[!NOTE]
-> The metrics dictionary returned by `mlflow.get_run` or `mlflow.seach_runs` only returns the most recently logged value for a given metric name. For example, if you log a metric called `iteration` multiple times with values, `1`, then `2`, then `3`, then `4`, only `4` is returned when calling `run.data.metrics['iteration']`.
+> The metrics dictionary returned by `mlflow.get_run` or `mlflow.search_runs` only returns the most recently logged value for a given metric name. For example, if you log a metric called `iteration` multiple times with values, *1*, then *2*, then *3*, then *4*, only *4* is returned when calling `run.data.metrics['iteration']`.
> > To get all metrics logged for a particular metric name, you can use `MlFlowClient.get_metric_history()` as explained in the example [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run). <a name="view-the-experiment-in-the-web-portal"></a> > [!TIP]
-> MLflow can retrieve metrics and parameters from multiple runs at the same time, allowing for quick comparisons across multiple trials. Learn about this in [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
+> MLflow can retrieve metrics and parameters from multiple runs at the same time, allowing for quick comparisons across multiple trials. To learn more, see [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
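To retrieve every logged value of a metric rather than only the most recent one, here's a sketch using `MlflowClient.get_metric_history`; the run ID and metric name are placeholders:
```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
# Returns the full history of the metric, one entry per logged value.
for metric in client.get_metric_history("<RUN_ID>", key="iteration"):
    print(metric.step, metric.value)
```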
-Any artifact logged by a run can be queried by MLflow. Artifacts can't be accessed using the run object itself and the MLflow client should be used instead:
+MLflow can query any artifact logged by a run. Artifacts can't be accessed using the run object itself and the MLflow client should be used instead:
```python client = mlflow.tracking.MlflowClient() client.list_artifacts("<RUN_ID>") ```
-The method above will list all the artifacts logged in the run, but they will remain stored in the artifacts store (Azure Machine Learning storage). To download any of them, use the method `download_artifact`:
+This method lists all the artifacts logged in the run, but they remain stored in the artifacts store (Azure Machine Learning storage). To download any of them, use the method `download_artifact`:
```python file_path = client.download_artifacts("<RUN_ID>", path="feature_importance_weight.png") ```
-For more information please refer to [Getting metrics, parameters, artifacts and models](how-to-track-experiments-mlflow.md#getting-metrics-parameters-artifacts-and-models).
+For more information, see [Getting metrics, parameters, artifacts and models](how-to-track-experiments-mlflow.md#getting-metrics-parameters-artifacts-and-models).
## View jobs/runs information in the studio You can browse completed job records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
-Navigate to the **Jobs** tab. To view all your jobs in your Workspace across Experiments, select the **All jobs** tab. You can drill down on jobs for specific Experiments by applying the Experiment filter in the top menu bar. Click on the job of interest to enter the details view, and then select the **Metrics** tab.
+Navigate to the **Jobs** tab. To view all the jobs in your workspace across experiments, select the **All jobs** tab. You can drill down on jobs for specific experiments by applying the **Experiment** filter in the top menu bar. Select the job of interest to enter the details view, and then select the **Metrics** tab.
-Select the logged metrics to render charts on the right side. You can customize the charts by applying smoothing, changing the color, or plotting multiple metrics on a single graph. You can also resize and rearrange the layout as you wish. Once you have created your desired view, you can save it for future use and share it with your teammates using a direct link.
+Select the logged metrics to render charts on the right side. You can customize the charts by applying smoothing, changing the color, or plotting multiple metrics on a single graph. You can also resize and rearrange the layout as you wish. After you create your desired view, you can save it for future use and share it with your teammates using a direct link.
### View and download diagnostic logs
Log files are an essential resource for debugging the Azure Machine Learning wor
1. Navigate to the **Jobs** tab. 1. Select the runID for a specific run. 1. Select **Outputs and logs** at the top of the page.
-2. Select **Download all** to download all your logs into a zip folder.
-3. You can also download individual log files by choosing the log file and selecting **Download**
+1. Select **Download all** to download all your logs into a zip folder.
+1. You can also download individual log files by choosing the log file and selecting **Download**.
#### user_logs folder
-This folder contains information about the user generated logs. This folder is open by default, and the **std_log.txt** log is selected. The **std_log.txt** is where your code's logs (for example, print statements) show up. This file contains `stdout` log and `stderr` logs from your control script and training script, one per process. In most cases, you'll monitor the logs here.
+This folder contains information about the user-generated logs. This folder is open by default, and the **std_log.txt** log is selected. The **std_log.txt** is where your code's logs (for example, print statements) show up. This file contains `stdout` and `stderr` logs from your control script and training script, one per process. In most cases, you monitor the logs here.
#### system_logs folder
-This folder contains the logs generated by Azure Machine Learning and it will be closed by default. The logs generated by the system are grouped into different folders, based on the stage of the job in the runtime.
+This folder contains the logs generated by Azure Machine Learning and it's closed by default. The logs generated by the system are grouped into different folders, based on the stage of the job in the runtime.
#### Other folders
-For jobs training on multi-compute clusters, logs are present for each node IP. The structure for each node is the same as single node jobs. There's one more logs folder for overall execution, stderr, and stdout logs.
+For jobs training on multi-compute clusters, logs are present for each node IP address. The structure for each node is the same as single node jobs. There's one more logs folder for overall execution, stderr, and stdout logs.
-Azure Machine Learning logs information from various sources during training, such as AutoML or the Docker container that runs the training job. Many of these logs aren't documented. If you encounter problems and contact Microsoft support, they may be able to use these logs during troubleshooting.
+Azure Machine Learning logs information from various sources during training, such as AutoML or the Docker container that runs the training job. Many of these logs aren't documented. If you encounter problems and contact Microsoft support, they might be able to use these logs during troubleshooting.
## Next steps
-* [Train ML models with MLflow and Azure Machine Learning](how-to-train-mlflow-projects.md).
-* [Migrate from SDK v1 logging to MLflow tracking](reference-migrate-sdk-v1-mlflow-tracking.md).
+* [Train ML models with MLflow and Azure Machine Learning](how-to-train-mlflow-projects.md)
+* [Migrate from SDK v1 logging to MLflow tracking](reference-migrate-sdk-v1-mlflow-tracking.md)
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
To enable the [serverless Spark jobs](how-to-submit-spark-jobs.md) for the manag
Use a YAML file to define the managed VNet configuration and add a private endpoint for the Azure Storage Account. Also set `spark_enabled: true`: > [!TIP]
- > This example is for a managed VNet configured using `isolation_mode: allow_internet_outbound` to allow internet traffic. If you want to allow only approved outbound traffic to enable data exfiltration protection (DEP), use `isolation_mode: allow_only_approved_outbound`.
+ > This example is for a managed VNet configured using `isolation_mode: allow_internet_outbound` to allow internet traffic. If you want to allow only approved outbound traffic, use `isolation_mode: allow_only_approved_outbound`.
```yml name: myworkspace
To enable the [serverless Spark jobs](how-to-submit-spark-jobs.md) for the manag
``` > [!NOTE]
- > - When data exfiltration protection (DEP) is enabled, conda package dependencies defined in Spark session configuration will fail to install. To resolve this problem, upload a self-contained Python package wheel with no external dependencies to an Azure storage account and create private endpoint to this storage account. Use the path to Python package wheel as `py_files` parameter in your Spark job.
+ > - When **Allow Only Approved Outbound** is enabled (`isolation_mode: allow_only_approved_outbound`), conda package dependencies defined in Spark session configuration will fail to install. To resolve this problem, upload a self-contained Python package wheel with no external dependencies to an Azure storage account and create private endpoint to this storage account. Use the path to Python package wheel as `py_files` parameter in your Spark job.
> - If the workspace was created with `isolation_mode: allow_internet_outbound`, it can not be updated later to use `isolation_mode: allow_only_approved_outbound`. # [Python SDK](#tab/python)
To enable the [serverless Spark jobs](how-to-submit-spark-jobs.md) for the manag
The following example demonstrates how to create a managed VNet for an existing Azure Machine Learning workspace named `myworkspace`. It also adds a private endpoint for the Azure Storage Account and sets `spark_enabled=true`: > [!TIP]
- > The following example is for a managed VNet configured using `IsolationMode.ALLOW_INTERNET_OUTBOUND` to allow internet traffic. If you want to allow only approved outbound traffic to enable data exfiltration protection (DEP), use `IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND`.
+ > The following example is for a managed VNet configured using `IsolationMode.ALLOW_INTERNET_OUTBOUND` to allow internet traffic. If you want to allow only approved outbound traffic, use `IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND`.
```python # Get the existing workspace
To enable the [serverless Spark jobs](how-to-submit-spark-jobs.md) for the manag
ml_client.workspaces.begin_update(ws) ``` > [!NOTE]
- > - When data exfiltration protection (DEP) is enabled, conda package dependencies defined in Spark session configuration will fail to install. To resolve this problem, upload a self-contained Python package wheel with no external dependencies to an Azure storage account and create private endpoint to this storage account. Use the path to Python package wheel as `py_files` parameter in the Spark job.
+ > - When **Allow Only Approved Outbound** is enabled (`isolation_mode: allow_only_approved_outbound`), conda package dependencies defined in Spark session configuration will fail to install. To resolve this problem, upload a self-contained Python package wheel with no external dependencies to an Azure storage account and create private endpoint to this storage account. Use the path to Python package wheel as `py_files` parameter in the Spark job.
> - If the workspace was created with `IsolationMode.ALLOW_INTERNET_OUTBOUND`, it can not be updated later to use `IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND`.
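For orientation, here's a minimal Python SDK v2 sketch of the configuration described above, assuming an existing workspace named `myworkspace`. The placeholder IDs and the `storage-pe` rule name are illustrative, and exact class locations can vary by SDK version.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.constants import IsolationMode
from azure.ai.ml.entities import ManagedNetwork, PrivateEndpointDestination
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",      # placeholder
    resource_group_name="<RESOURCE_GROUP>",   # placeholder
    workspace_name="myworkspace",
)

# Get the existing workspace and enable a managed VNet that allows internet outbound
ws = ml_client.workspaces.get()
ws.managed_network = ManagedNetwork(isolation_mode=IsolationMode.ALLOW_INTERNET_OUTBOUND)

# Add a private endpoint outbound rule for the storage account, with Spark enabled
ws.managed_network.outbound_rules = [
    PrivateEndpointDestination(
        name="storage-pe",                                    # illustrative rule name
        service_resource_id="<STORAGE_ACCOUNT_RESOURCE_ID>",  # placeholder
        subresource_target="blob",
        spark_enabled=True,
    )
]

ml_client.workspaces.begin_update(ws)
```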
machine-learning How To Prevent Data Loss Exfiltration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prevent-data-loss-exfiltration.md
Previously updated : 04/14/2023 Last updated : 01/31/2024 monikerRange: 'azureml-api-2 || azureml-api-1'
Azure Machine Learning has several inbound and outbound dependencies. Some of th
- `automlresources-prod.azureedge.net` > [!TIP]
-> The information in this article is primarily about using an Azure Virtual Network. Azure Machine Learning can also use a **managed virtual networks** (preview). With a managed virtual network, Azure Machine Learning handles the job of network isolation for your workspace and managed computes.
+> The information in this article is primarily about using an Azure Virtual Network. Azure Machine Learning can also use a **managed virtual networks**. With a managed virtual network, Azure Machine Learning handles the job of network isolation for your workspace and managed computes.
> > To address data exfiltration concerns, managed virtual networks allow you to restrict egress to only approved outbound traffic. For more information, see [Workspace managed network isolation](how-to-managed-network.md).
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
Previously updated : 09/06/2022 Last updated : 01/31/2024 # Secure an Azure Machine Learning inferencing environment with virtual networks
In this article, you learn how to secure inferencing environments (online endpoi
* Azure Machine Learning managed online endpoints > [!TIP]
- > Microsoft recommends using an Azure Machine Learning **managed virtual networks** (preview) instead of the steps in this article when securing managed online endpoints. With a managed virtual network, Azure Machine Learning handles the job of network isolation for your workspace and managed computes. You can also add private endpoints for resources needed by the workspace, such as Azure Storage Account. For more information, see [Workspace managed network isolation](how-to-managed-network.md).
+ > Microsoft recommends using an Azure Machine Learning **managed virtual networks** instead of the steps in this article when securing managed online endpoints. With a managed virtual network, Azure Machine Learning handles the job of network isolation for your workspace and managed computes. You can also add private endpoints for resources needed by the workspace, such as Azure Storage Account. For more information, see [Workspace managed network isolation](how-to-managed-network.md).
* Azure Kubernetes Service
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
This process allows you to encrypt both the Data and the OS Disk of the deployed
## Next steps * [Customer-managed keys with Azure Machine Learning](concept-customer-managed-keys.md)
-* [Create a workspace with Azure CLI](how-to-manage-workspace-cli.md#customer-managed-key-and-high-business-impact-workspace) |
-* [Create and manage a workspace](how-to-manage-workspace.md#use-your-own-data-encryption-key) |
-* [Create a workspace with a template](how-to-create-workspace-template.md#deploy-an-encrypted-workspace) |
-* [Create, run, and delete Azure Machine Learning resources with REST](how-to-manage-rest.md#create-a-workspace-using-customer-managed-encryption-keys) |
+* [Create a workspace with Azure CLI](how-to-manage-workspace-cli.md#customer-managed-key-and-high-business-impact-workspace)
+* [Create and manage a workspace](how-to-manage-workspace.md#use-your-own-data-encryption-key)
+* [Create a workspace with a template](how-to-create-workspace-template.md#deploy-an-encrypted-workspace)
+* [Create, run, and delete Azure Machine Learning resources with REST](how-to-manage-rest.md#create-a-workspace-using-customer-managed-encryption-keys)
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
Previously updated : 10/05/2022 Last updated : 01/26/2024 #Customer intent: As a Python PyTorch developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my deep learning models at scale.
In this article, you'll learn to train, hyperparameter tune, and deploy a [PyTorch](https://pytorch.org/) model using the Azure Machine Learning Python SDK v2.
-You'll use the example scripts in this article to classify chicken and turkey images to build a deep learning neural network (DNN) based on [PyTorch's transfer learning tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. Transfer learning shortens the training process by requiring less data, time, and compute resources than training from scratch. To learn more about transfer learning, see the [deep learning vs machine learning](./concept-deep-learning-vs-machine-learning.md#what-is-transfer-learning) article.
+You'll use example scripts to classify chicken and turkey images to build a deep learning neural network (DNN) based on [PyTorch's transfer learning tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). Transfer learning is a technique that applies knowledge gained from solving one problem to a different but related problem. Transfer learning shortens the training process by requiring less data, time, and compute resources than training from scratch. To learn more about transfer learning, see [Deep learning vs. machine learning](./concept-deep-learning-vs-machine-learning.md#what-is-transfer-learning).
Whether you're training a deep learning PyTorch model from the ground-up or you're bringing an existing model into the cloud, you can use Azure Machine Learning to scale out open-source training jobs using elastic cloud compute resources. You can build, deploy, version, and monitor production-grade models with Azure Machine Learning. ## Prerequisites
-To benefit from this article, you'll need to:
--- Access an Azure subscription. If you don't have one already, [create a free account](https://azure.microsoft.com/free/).
+- An Azure subscription. If you don't have one already, [create a free account](https://azure.microsoft.com/free/).
- Run the code in this article using either an Azure Machine Learning compute instance or your own Jupyter notebook.
- - Azure Machine Learning compute instanceΓÇöno downloads or installation necessary
- - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository.
- - In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **v2 > sdk > python > jobs > single-step > pytorch > train-hyperparameter-tune-deploy-with-pytorch**.
- - Your Jupyter notebook server
- - [Install the Azure Machine Learning SDK (v2)](https://aka.ms/sdk-v2-install).
+ - Azure Machine Learning compute instanceΓÇöno downloads or installation necessary:
+ - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server preloaded with the SDK and the sample repository.
+ - Under the **Samples** tab in the **Notebooks** section of your workspace, find a completed and expanded notebook by navigating to this directory: *SDK v2/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch*
+ - Your Jupyter notebook server:
+ - Install the [Azure Machine Learning SDK (v2)](https://aka.ms/sdk-v2-install).
- Download the training script file [pytorch_train.py](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/src/pytorch_train.py).
-You can also find a completed [Jupyter Notebook version](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) of this guide on the GitHub samples page.
+You can also find a completed [Jupyter notebook version](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb) of this guide on the GitHub samples page.
[!INCLUDE [gpu quota](includes/machine-learning-gpu-quota-prereq.md)]
This section sets up the job for training by loading the required Python package
### Connect to the workspace
-First, you'll need to connect to your Azure Machine Learning workspace. The [Azure Machine Learning workspace](concept-workspace.md) is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create when you use Azure Machine Learning.
+First, you need to connect to your [Azure Machine Learning workspace](concept-workspace.md). The workspace is the top-level resource for the service. It provides you with a centralized place to work with all the artifacts you create when you use Azure Machine Learning.
We're using `DefaultAzureCredential` to get access to the workspace. This credential should be capable of handling most Azure SDK authentication scenarios.
-If `DefaultAzureCredential` doesn't work for you, see [`azure-identity reference documentation`](/python/api/azure-identity/azure.identity) or [`Set up authentication`](how-to-setup-authentication.md?tabs=sdk) for more available credentials.
+If `DefaultAzureCredential` doesn't work for you, see [azure.identity package](/python/api/azure-identity/azure.identity) or [Set up authentication](how-to-setup-authentication.md?tabs=sdk) for more available credentials.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=credential)]
If you prefer to use a browser to sign in and authenticate, you should uncomment
# credential = InteractiveBrowserCredential() ```
-Next, get a handle to the workspace by providing your Subscription ID, Resource Group name, and workspace name. To find these parameters:
+Next, get a handle to the workspace by providing your subscription ID, resource group name, and workspace name. To find these parameters:
1. Look for your workspace name in the upper-right corner of the Azure Machine Learning studio toolbar.
-2. Select your workspace name to show your Resource Group and Subscription ID.
-3. Copy the values for Resource Group and Subscription ID into the code.
+2. Select your workspace name to show your resource group and subscription ID.
+3. Copy the values for your resource group and subscription ID into the code.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=ml_client)]
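For reference, a minimal sketch of that handle creation, with placeholder values you replace with your own:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Authenticate and get a handle to the workspace
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)
```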
-The result of running this script is a workspace handle that you'll use to manage other resources and jobs.
+The result of running this script is a workspace handle that you can use to manage other resources and jobs.
> [!NOTE]
-> - Creating `MLClient` will not connect the client to the workspace. The client initialization is lazy and will wait for the first time it needs to make a call. In this article, this will happen during compute creation.
+> Creating `MLClient` doesn't connect the client to the workspace. The client initialization is lazy and waits for the first time it needs to make a call. In this article, this happens during compute creation.
### Create a compute resource to run the job Azure Machine Learning needs a compute resource to run a job. This resource can be single or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark.
-In the following example script, we provision a Linux [`compute cluster`](./how-to-create-attach-compute-cluster.md?tabs=python). You can see the [`Azure Machine Learning pricing`](https://azure.microsoft.com/pricing/details/machine-learning/) page for the full list of VM sizes and prices. Since we need a GPU cluster for this example, let's pick a *STANDARD_NC6* model and create an Azure Machine Learning compute.
+In the following example script, we provision a Linux [compute cluster](./how-to-create-attach-compute-cluster.md?tabs=python). You can see the [Azure Machine Learning pricing](https://azure.microsoft.com/pricing/details/machine-learning/) page for the full list of VM sizes and prices. Since we need a GPU cluster for this example, let's pick a `STANDARD_NC6` model and create an Azure Machine Learning compute.
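A hedged sketch of that provisioning step follows; the cluster name and scaling settings are illustrative assumptions, and the referenced notebook cell remains the authoritative version.

```python
from azure.ai.ml.entities import AmlCompute

gpu_compute_target = "gpu-cluster"  # illustrative name

# Provision (or reuse) a GPU cluster that scales between 0 and 4 nodes
gpu_cluster = AmlCompute(
    name=gpu_compute_target,
    type="amlcompute",
    size="STANDARD_NC6",
    min_instances=0,
    max_instances=4,
    idle_time_before_scale_down=180,
)
ml_client.begin_create_or_update(gpu_cluster).result()
```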
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=gpu_compute_target)] ### Create a job environment
-To run an Azure Machine Learning job, you'll need an environment. An Azure Machine Learning [environment](concept-environments.md) encapsulates the dependencies (such as software runtime and libraries) needed to run your machine learning training script on your compute resource. This environment is similar to a Python environment on your local machine.
+To run an Azure Machine Learning job, you need an environment. An Azure Machine Learning [environment](concept-environments.md) encapsulates the dependencies (such as software runtime and libraries) needed to run your machine learning training script on your compute resource. This environment is similar to a Python environment on your local machine.
-Azure Machine Learning allows you to either use a curated (or ready-made) environment or create a custom environment using a Docker image or a Conda configuration. In this article, you'll reuse the curated Azure Machine Learning environment `AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu`. You'll use the latest version of this environment using the `@latest` directive.
+Azure Machine Learning allows you to either use a curated (or ready-made) environment or create a custom environment using a Docker image or a Conda configuration. In this article, you reuse the curated Azure Machine Learning environment `AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu`. Use the latest version of this environment using the `@latest` directive.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=curated_env_name)] ## Configure and submit your training job
-In this section, we'll begin by introducing the data for training. We'll then cover how to run a training job, using a training script that we've provided. You'll learn to build the training job by configuring the command for running the training script. Then, you'll submit the training job to run in Azure Machine Learning.
+In this section, we begin by introducing the data for training. We then cover how to run a training job, using a training script that we've provided. You'll learn to build the training job by configuring the command for running the training script. Then, you'll submit the training job to run in Azure Machine Learning.
### Obtain the training data
-You'll use data that is stored on a public blob as a [zip file](https://azuremlexamples.blob.core.windows.net/datasets/fowl_data.zip). This dataset consists of about 120 training images each for two classes (turkeys and chickens), with 100 validation images for each class. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html). We'll download and extract the dataset as part of our training script `pytorch_train.py`.
+
+You can use the dataset in this [zipped file](https://azuremlexamples.blob.core.windows.net/datasets/fowl_data.zip). This dataset consists of about 120 training images each for two classes (turkeys and chickens), with 100 validation images for each class. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html). The training script *pytorch_train.py* downloads and extracts the dataset.
### Prepare the training script
-In this article, we've provided the training script *pytorch_train.py*. In practice, you should be able to take any custom training script as is and run it with Azure Machine Learning without having to modify your code.
+In the prerequisites section, we provided the training script *pytorch_train.py*. In practice, you should be able to take any custom training script *as is* and run it with Azure Machine Learning without having to modify your code.
The provided training script downloads the data, trains a model, and registers the model. ### Build the training job
-Now that you have all the assets required to run your job, it's time to build it using the Azure Machine Learning Python SDK v2. For this example, we'll be creating a `command`.
+Now that you have all the assets required to run your job, it's time to build it using the Azure Machine Learning Python SDK v2. For this example, we create a `command`.
An Azure Machine Learning `command` is a resource that specifies all the details needed to execute your training code in the cloud. These details include the inputs and outputs, type of hardware to use, software to install, and how to run your code. The `command` contains information to execute a single command. - #### Configure the command
-You'll use the general purpose `command` to run the training script and perform your desired tasks. Create a `Command` object to specify the configuration details of your training job.
+You'll use the general purpose `command` to run the training script and perform your desired tasks. Create a `command` object to specify the configuration details of your training job, as sketched after the following list.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=job)] - The inputs for this command include the number of epochs, learning rate, momentum, and output directory. - For the parameter values:
- - provide the compute cluster `gpu_compute_target = "gpu-cluster"` that you created for running this command;
- - provide the curated environment `AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu` that you initialized earlier;
- - configure the command line action itselfΓÇöin this case, the command is `python pytorch_train.py`. You can access the inputs and outputs in the command via the `${{ ... }}` notation; and
- - configure metadata such as the display name and experiment name; where an experiment is a container for all the iterations one does on a certain project. All the jobs submitted under the same experiment name would be listed next to each other in Azure Machine Learning studio.
+ 1. Provide the compute cluster `gpu_compute_target = "gpu-cluster"` that you created for running this command.
+ 1. Provide the curated environment `AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu` that you initialized earlier.
+ 1. If you're not using the completed notebook in the Samples folder, specify the location of the *pytorch_train.py* file.
+ 1. Configure the command line action itselfΓÇöin this case, the command is `python pytorch_train.py`. You can access the inputs and outputs in the command via the `${{ ... }}` notation.
+ 1. Configure metadata such as the display name and experiment name, where an experiment is a container for all the iterations one does on a certain project. All the jobs submitted under the same experiment name would be listed next to each other in Azure Machine Learning studio.
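The following sketch illustrates such a `command` configuration. The input names, script arguments, and `./src/` code path are assumptions based on the description above; the referenced notebook cell remains authoritative.

```python
from azure.ai.ml import command

job = command(
    inputs=dict(
        num_epochs=30,
        learning_rate=0.001,
        momentum=0.9,
        output_dir="./outputs",
    ),
    compute=gpu_compute_target,        # cluster created earlier
    environment=curated_env_name,      # curated environment initialized earlier
    code="./src/",                     # assumed location of pytorch_train.py
    command="python pytorch_train.py --num_epochs ${{inputs.num_epochs}} "
            "--learning_rate ${{inputs.learning_rate}} --momentum ${{inputs.momentum}} "
            "--output_dir ${{inputs.output_dir}}",
    experiment_name="pytorch-birds",
    display_name="pytorch-birds-image",
)
```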
### Submit the job
-It's now time to submit the job to run in Azure Machine Learning. This time, you'll use `create_or_update` on `ml_client.jobs`.
+It's now time to submit the job to run in Azure Machine Learning. This time, you use `create_or_update` on `ml_client.jobs`.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=create_job)]
-Once completed, the job will register a model in your workspace (as a result of training) and output a link for viewing the job in Azure Machine Learning studio.
+Once completed, the job registers a model in your workspace (as a result of training) and outputs a link for viewing the job in Azure Machine Learning studio.
> [!WARNING] > Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](concept-train-machine-learning-model.md#understand-what-happens-when-you-submit-a-training-job) or don't include it in the source directory. ### What happens during job execution+ As the job is executed, it goes through the following stages: -- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the job history and can be viewed to monitor progress. If a curated environment is specified, the cached image backing that curated environment will be used.
+- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the job history and can be viewed to monitor progress. If a curated environment is specified, the cached image backing that curated environment is used.
- **Scaling**: The cluster attempts to scale up if it requires more nodes to execute the run than are currently available.
As the job is executed, it goes through the following stages:
## Tune model hyperparameters
-You've trained the model with one set of parameters, let's now see if you can further improve the accuracy of your model. You can tune and optimize your model's hyperparameters using Azure Machine Learning's [`sweep`](/python/api/azure-ai-ml/azure.ai.ml.sweep) capabilities.
+You trained the model with one set of parameters, let's now see if you can further improve the accuracy of your model. You can tune and optimize your model's hyperparameters using Azure Machine Learning's [`sweep`](/python/api/azure-ai-ml/azure.ai.ml.sweep) capabilities.
-To tune the model's hyperparameters, define the parameter space in which to search during training. You'll do this by replacing some of the parameters passed to the training job with special inputs from the `azure.ml.sweep` package.
+To tune the model's hyperparameters, define the parameter space in which to search during training. You do this by replacing some of the parameters passed to the training job with special inputs from the `azure.ml.sweep` package.
Since the training script uses a learning rate schedule to decay the learning rate every several epochs, you can tune the initial learning rate and the momentum parameters. [!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=job_for_sweep)]
-Then, you'll configure sweep on the command job, using some sweep-specific parameters, such as the primary metric to watch and the sampling algorithm to use.
+Then, you can configure sweep on the command job, using some sweep-specific parameters, such as the primary metric to watch and the sampling algorithm to use.
In the following code, we use random sampling to try different configuration sets of hyperparameters in an attempt to maximize our primary metric, `best_val_acc`. We also define an early termination policy, the `BanditPolicy`, to terminate poorly performing runs early.
-The `BanditPolicy` will terminate any run that doesn't fall within the slack factor of our primary evaluation metric. You will apply this policy every epoch (since we report our `best_val_acc` metric every epoch and `evaluation_interval`=1). Notice we will delay the first policy evaluation until after the first 10 epochs (`delay_evaluation`=10).
+The `BanditPolicy` terminates any run that doesn't fall within the slack factor of our primary evaluation metric. You apply this policy every epoch (since we report our `best_val_acc` metric every epoch and `evaluation_interval`=1). Notice we delay the first policy evaluation until after the first 10 epochs (`delay_evaluation`=10).
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=sweep_job)]
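A hedged sketch of that sweep configuration follows; the search ranges and trial counts are illustrative assumptions.

```python
from azure.ai.ml.sweep import BanditPolicy, Uniform

# Replace fixed hyperparameters with search-space inputs
job_for_sweep = job(
    learning_rate=Uniform(min_value=0.0005, max_value=0.005),
    momentum=Uniform(min_value=0.9, max_value=0.99),
)

# Random sampling, maximize best_val_acc, and bandit-based early termination
sweep_job = job_for_sweep.sweep(
    compute=gpu_compute_target,
    sampling_algorithm="random",
    primary_metric="best_val_acc",
    goal="Maximize",
    max_total_trials=8,        # illustrative
    max_concurrent_trials=4,   # illustrative
    early_termination_policy=BanditPolicy(
        slack_factor=0.15, evaluation_interval=1, delay_evaluation=10
    ),
)
```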
-Now, you can submit this job as before. This time, you'll be running a sweep job that sweeps over your train job.
+Now, you can submit this job as before. This time, you're running a sweep job that sweeps over your train job.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=create_sweep_job)]
-You can monitor the job by using the studio user interface link that is presented during the job run.
+You can monitor the job by using the studio user interface link that's presented during the job run.
## Find the best model
Once all the runs complete, you can find the run that produced the model with th
You can now deploy your model as an [online endpoint](concept-endpoints.md)ΓÇöthat is, as a web service in the Azure cloud.
-To deploy a machine learning service, you'll typically need:
+To deploy a machine learning service, you typically need:
- The model assets that you want to deploy. These assets include the model's file and metadata that you already registered in your training job. - Some code to run as a service. The code executes the model on a given input request (an entry script). This entry script receives data submitted to a deployed web service and passes it to the model. After the model processes the data, the script returns the model's response to the client. The script is specific to your model and must understand the data that the model expects and returns. When you use an MLFlow model, Azure Machine Learning automatically creates this script for you.
For more information about deployment, see [Deploy and score a machine learning
### Create a new online endpoint
-As a first step to deploying your model, you need to create your online endpoint. The endpoint name must be unique in the entire Azure region. For this article, you'll create a unique name using a universally unique identifier (UUID).
+As a first step to deploying your model, you need to create your online endpoint. The endpoint name must be unique in the entire Azure region. For this article, you create a unique name using a universally unique identifier (UUID).
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=online_endpoint_name)] [!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=endpoint)]
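As an illustration, here's a minimal sketch of creating such an endpoint; the name prefix and description are assumptions.

```python
import uuid
from azure.ai.ml.entities import ManagedOnlineEndpoint

# A name that's unique within the Azure region, based on a UUID
online_endpoint_name = "aci-endpoint-" + str(uuid.uuid4())[:8]

endpoint = ManagedOnlineEndpoint(
    name=online_endpoint_name,
    description="Classify turkey/chicken images with a PyTorch transfer-learning model",
    auth_mode="key",
)
ml_client.begin_create_or_update(endpoint).result()
```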
-Once you've created the endpoint, you can retrieve it as follows:
+After you create the endpoint, you can retrieve it as follows:
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=get_endpoint)] ### Deploy the model to the endpoint
-After you've created the endpoint, you can deploy the model with the entry script. An endpoint can have multiple deployments. Using rules, the endpoint can then direct traffic to these deployments.
+You can now deploy the model with the entry script. An endpoint can have multiple deployments. Using rules, the endpoint can then direct traffic to these deployments.
+
+In the following code, you'll create a single deployment that handles 100% of the incoming traffic. We specified an arbitrary color name *aci-blue* for the deployment. You could also use any other name such as *aci-green* or *aci-red* for the deployment.
-In the following code, you'll create a single deployment that handles 100% of the incoming traffic. We've specified an arbitrary color name (*aci-blue*) for the deployment. You could also use any other name such as *aci-green* or *aci-red* for the deployment.
-The code to deploy the model to the endpoint does the following:
+The code to deploy the model to the endpoint:
-- deploys the best version of the model that you registered earlier;-- scores the model, using the `score.py` file; and-- uses the curated environment (that you specified earlier) to perform inferencing.
+- Deploys the best version of the model that you registered earlier.
+- Scores the model, using the *score.py* file.
+- Uses the curated environment (that you specified earlier) to perform inferencing.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=blue_deployment)]
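A hedged sketch of such a deployment follows; the `model` handle, entry-script path, and instance type are assumptions you adjust to your workspace.

```python
from azure.ai.ml.entities import CodeConfiguration, ManagedOnlineDeployment

blue_deployment = ManagedOnlineDeployment(
    name="aci-blue",
    endpoint_name=online_endpoint_name,
    model=model,  # assumed: the best model registered by the sweep job, retrieved earlier
    environment=curated_env_name,
    code_configuration=CodeConfiguration(code="./score/", scoring_script="score.py"),
    instance_type="Standard_NC6s_v3",  # illustrative GPU SKU
    instance_count=1,
)
ml_client.begin_create_or_update(blue_deployment).result()

# Route 100% of incoming traffic to this deployment
endpoint.traffic = {"aci-blue": 100}
ml_client.begin_create_or_update(endpoint).result()
```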
The code to deploy the model to the endpoint does the following:
### Test the deployed model
-Now that you've deployed the model to the endpoint, you can predict the output of the deployed model, using the `invoke` method on the endpoint.
+Now that you deployed the model to the endpoint, you can predict the output of the deployed model, using the `invoke` method on the endpoint.
To test the endpoint, let's use a sample image for prediction. First, let's display the image.
You can then invoke the endpoint with this JSON and print the result.
### Clean up resources
-If you won't be using the endpoint, delete it to stop using the resource. Make sure no other deployments are using the endpoint before you delete it.
+If you don't need the endpoint anymore, delete it to stop using the resource. Make sure no other deployments are using the endpoint before you delete it.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb?name=delete_endpoint)] > [!NOTE] > Expect this cleanup to take a bit of time to finish. - ## Next steps In this article, you trained and registered a deep learning neural network using PyTorch on Azure Machine Learning. You also deployed the model to an online endpoint. See these other articles to learn more about Azure Machine Learning.
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
machine-learning How To Create Manage Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md
Azure Machine Learning supports the following types of runtimes:
|Runtime type|Underlying compute type|Life cycle management|Customize environment | ||-|||
-|Automatic runtime (preview) |Serverless compute| Automatic | Easily customize packages|
+|Automatic runtime (preview) |[Serverless compute](../how-to-use-serverless-compute.md)| Automatic | Easily customize packages|
|Compute instance runtime | Compute instance | Manual | Manually customize via Azure Machine Learning environment| If you're a new user, we recommend that you use the automatic runtime (preview). You can easily customize the environment by adding packages in the `requirements.txt` file in `flow.dag.yaml` in the flow folder. If you're already familiar with the Azure Machine Learning environment and compute instances, you can use your existing compute instance and environment to build a compute instance runtime.
Automatic is the default option for a runtime. You can start an automatic runtim
- Customize the idle time, which saves code by deleting the runtime automatically if it isn't in use. - Set the user-assigned managed identity. The automatic runtime uses this identity to pull a base image and install packages. Make sure that the user-assigned managed identity has Azure Container Registry pull permission.
- If you don't set this identity, you use the user identity by default. [Learn more about how to create and update user-assigned identities for a workspace](../how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
+ If you don't set this identity, we use the user identity by default. [Learn more about how to create and update user-assigned identities for a workspace](../how-to-identity-based-service-authentication.md#to-create-a-workspace-with-multiple-user-assigned-identities-use-one-of-the-following-methods).
:::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-automatic-settings.png" alt-text="Screenshot of prompt flow with advanced settings for starting an automatic runtime on a flow page." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-automatic-settings.png":::
Before you create a compute instance runtime, make sure that a compute instance
:::image type="content" source="./media/how-to-create-manage-runtime/runtime-creation-ci-existing-custom-application-ui.png" alt-text="Screenshot of the option to use an existing custom application and the box for selecting an application." lightbox = "./media/how-to-create-manage-runtime/runtime-creation-ci-existing-custom-application-ui.png":::
-## Use a runtime in prompt flow authoring
+## Use a runtime in prompt flow authoring UI
When you're authoring a flow, you can select and change the runtime from the **Runtime** dropdown list on the upper right of the flow page.
When you're performing evaluation, you can use the original runtime in the flow
:::image type="content" source="./media/how-to-create-manage-runtime/runtime-authoring-bulktest.png" alt-text="Screenshot of runtime details on the wizard page for configuring an evaluation." lightbox = "./media/how-to-create-manage-runtime/runtime-authoring-bulktest.png":::
+## Use a runtime to submit a flow run in CLI/SDK
+
+As in the authoring UI, you can also specify the runtime in the CLI or SDK when you submit a flow run.
+
+# [Azure CLI](#tab/cli)
+
+In your `run.yml`, you can specify a runtime name or use the automatic runtime. If you specify a runtime name, the run uses that runtime. If you specify `automatic`, or don't specify a runtime at all, the run uses the automatic runtime.
+
+For the automatic runtime, you can also specify the instance type. If you don't specify the instance type, Azure Machine Learning chooses an instance type (VM size) based on factors like quota, cost, performance, and disk size. Learn more about [serverless compute](../how-to-use-serverless-compute.md).
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
+flow: <path_to_flow>
+data: <path_to_flow>/data.jsonl
+
+column_mapping:
+ url: ${data.url}
+
+# define cloud resource
+# if omitted, the automatic runtime is used; you can also specify a runtime name, and specifying automatic also uses the automatic runtime.
+# runtime: <runtime_name>
++
+# instance type applies only to the automatic runtime; it's ignored if you specify a runtime name.
+resources:
+ instance_type: <instance_type>
+
+```
+
+Submit this run via CLI:
+
+```sh
+pfazure run create --file run.yml
+```
+
+# [Python SDK](#tab/python)
+
+```python
+# load flow
+flow = "<path_to_flow>"
+data = "<path_to_flow>/data.jsonl"
++
+# define cloud resources
+runtime = "<runtime_name>"  # or "automatic" to use the automatic runtime
+# define instance type (only used by the automatic runtime)
+resources = {"instance_type": "<instance_type>"}
+
+# create run; pf is an existing azure PFClient connected to your workspace
+base_run = pf.run(
+    flow=flow,
+    data=data,
+    runtime=runtime,  # if omitted, the automatic runtime is used; specifying "automatic" also uses it
+    # resources=resources,  # applies only to the automatic runtime; ignored if you specify a runtime name
+ column_mapping={
+ "url": "${data.url}"
+ },
+)
+print(base_run)
+```
+
+For a full, end-to-end, code-first example, see [Integrate prompt flow with LLM-based application DevOps](./how-to-integrate-with-llm-app-devops.md).
+
+### Reference files outside of the flow folder - automatic runtime only
+Sometimes, you might want to reference a `requirements.txt` file that's outside of the flow folder. For example, you might have a complex project that includes multiple flows that share the same `requirements.txt` file. To do this, you can add the `additional_includes` field to `flow.dag.yaml`. The value of this field is a list of file or folder paths relative to the flow folder. For example, if `requirements.txt` is in the parent folder of the flow folder, add `../requirements.txt` to the `additional_includes` field.
+
+```yaml
+inputs:
+ question:
+ type: string
+outputs:
+ output:
+ type: string
+ reference: ${answer_the_question_with_context.output}
+environment:
+ python_requirements_txt: requirements.txt
+additional_includes:
+ - ../requirements.txt
+...
+```
+
+When you submit a flow run using the automatic runtime, the `requirements.txt` file is copied to the flow folder and used to start your automatic runtime.
+ ## Update a runtime on the UI ### Update an automatic runtime (preview) on a flow page On a flow page, you can use the following options to manage an automatic runtime (preview): -- **Install packages** triggers `pip install -r requirements.txt` in the flow folder. This process can take a few minutes, depending on the packages that you install.
+- **Install packages** opens `requirements.txt` in the prompt flow UI, where you can add packages to it.
+- **View installed packages** shows the packages that are installed in the runtime. It includes the packages baked into the base image and the packages specified in the `requirements.txt` file in the flow folder.
- **Reset** deletes the current runtime and creates a new one with the same environment. If you encounter a package conflict issue, you can try this option. - **Edit** opens the runtime configuration page, where you can define the VM size and the idle time for the runtime. - **Stop** deletes the current runtime. If there's no active runtime on the underlying compute, the compute resource is also deleted.
To get the best experience and performance, try to keep your runtime up to date.
If you select **Use customized environment**, you first need to rebuild the environment by using the latest prompt flow image. Then update your runtime with the new custom environment.
+## Relationship among runtimes, compute resources, flows, and users
+
+- A single user can have multiple compute resources (serverless or compute instance), based on their needs. For example, one user can have multiple compute resources with different VM sizes.
+- A compute resource can be used by only a single user. A compute resource is modeled as a private dev box for a single user, so multiple users can't share the same compute resource. In AI Studio, different users can join different projects, and data and other assets need to be isolated, so multiple users can't share the same compute resource either.
+- One compute resource can host multiple runtimes. A runtime is a container running on the underlying compute resource. Because prompt flow authoring typically doesn't need much compute, a single compute resource can host multiple runtimes from the same user.
+- A runtime belongs to only one compute resource at a time, but you can delete or stop a runtime and reallocate it to another compute resource.
+- With the automatic runtime, each flow has one runtime, because each flow is expected to be self-contained and defines the base image and required Python packages in the flow folder. With a compute instance runtime, you can run different flows on the same compute instance runtime, but you need to make sure the packages and image are compatible.
+ ## Switch compute instance runtime to automatic runtime (preview) Automatic runtime (preview) has following advantages over compute instance runtime:
We would recommend you to switch to automatic runtime (preview) if you're using
- If you want to keep the automatic runtime (preview) as long running compute like compute instance, you can disable the idle shutdown toggle under automatic runtime (preview) edit option. + ## Next steps - [Develop a standard flow](how-to-develop-a-standard-flow.md)
machine-learning Troubleshoot Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/troubleshoot-guidance.md
Prompt flow relies on a file share storage to store a snapshot of the flow. If t
:::image type="content" source="../media/faq/flow-missing.png" alt-text="Screenshot that shows a flow missing an authoring page." lightbox = "../media/faq/flow-missing.png"::: There are possible reasons for this issue:-- If you disabled public access to storage account, then you need have access to storage account either add you IP to the storage Firewall or add access studio through the virtual network which have private endpoint to the storage account.
+- If public access to your storage account is disabled, you must ensure access by either adding your IP to the storage firewall or enabling access through a virtual network that has a private endpoint connected to the storage account.
:::image type="content" source="../media/faq/storage-account-networking-firewall.png" alt-text="Screenshot that shows firewall setting on storage account." lightbox = "../media/faq/storage-account-networking-firewall.png":::
There are possible reasons for this issue:
:::image type="content" source="../media/faq/datastore-with-wrong-account-key.png" alt-text="Screenshot that shows datastore with wrong account key." lightbox = "../media/faq/datastore-with-wrong-account-key.png"::: -- If you are using AI studio, the storage account need set CORS to allow AI studio access the storage account, otherwise, you will see the flow missing issue. You can add following CORS settings to the storage account to fix this issue.
+- If you're using AI Studio, the storage account needs CORS configured to allow AI Studio to access the storage account; otherwise, you see the flow missing issue. You can add the following CORS settings to the storage account to fix this issue.
- Go to storage account page, select `Resource sharing (CORS)` under `settings`, and select to `File service` tab. - Allowed origins: `https://mlworkspace.azure.ai,https://ml.azure.com,https://*.ml.azure.com,https://ai.azure.com,https://*.ai.azure.com,https://mlworkspacecanary.azure.ai,https://mlworkspace.azureml-test.net` - Allowed methods: `DELETE, GET, HEAD, POST, OPTIONS, PUT`
First, go to the compute instance terminal and run `docker ps` to find the root
Use `docker images` to check if the image was pulled successfully. If your image was pulled successfully, check if the Docker container is running. If it's already running, locate this runtime. It attempts to restart the runtime and compute instance.
-If you are using compute instance runtime AI studio, this is not scenario currently supported, please try use automatic runtime instead, [Switch compute instance runtime to automatic runtime](../how-to-create-manage-runtime.md#switch-compute-instance-runtime-to-automatic-runtime-preview).
+If you're using a compute instance runtime in AI Studio, this scenario isn't currently supported. Use the automatic runtime instead; see [Switch compute instance runtime to automatic runtime](../how-to-create-manage-runtime.md#switch-compute-instance-runtime-to-automatic-runtime-preview).
### Run failed because of "No module named XXX"
You might experience timeout issues.
:::image type="content" source="../media/how-to-create-manage-runtime/ci-runtime-request-timeout.png" alt-text="Screenshot that shows a compute instance runtime timeout error in the studio UI." lightbox = "../media/how-to-create-manage-runtime/ci-runtime-request-timeout.png":::
-The error in the example says "UserError: Invoking runtime gega-ci timeout, error message: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing."
+The error in the example says "UserError: Invoking runtime gega-ci timeout, error message: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing."
### Identify which node consumes the most time
The error in the example says "UserError: Invoking runtime gega-ci timeout, erro
- **Case 1:** Python script node runs for a long time.
- :::image type="content" source="../media/how-to-create-manage-runtime/runtime-timeout-running-for-long-time.png" alt-text="Screenshot that shows a timeout run log in the studio UI." lightbox = "../media/how-to-create-manage-runtime/runtime-timeout-running-for-long-time.png":::
+ :::image type="content" source="../media/how-to-create-manage-runtime/runtime-timeout-running-for-long-time.png" alt-text="Screenshot that shows a timeout run log in the studio UI." lightbox = "../media/how-to-create-manage-runtime/runtime-timeout-running-for-long-time.png":::
In this case, you can find that `PythonScriptNode` was running for a long time (almost 300 seconds). Then you can check the node details to see what's the problem.
The error in the example says "UserError: Invoking runtime gega-ci timeout, erro
1. If you can't find anything in runtime logs to indicate it's a specific node issue:
- - Contact the prompt flow team ([promptflow-eng](mailto:aml-pt-eng@microsoft.com)) with the runtime logs. We'll try to identify the root cause.
+ - Contact the prompt flow team ([promptflow-eng](mailto:aml-pt-eng@microsoft.com)) with the runtime logs. We try to identify the root cause.
### Find the compute instance runtime log for further investigation
Check if this compute instance is assigned to you and you have access to the wor
This error occurs because you're cloning a flow from others that's using a compute instance as the runtime. Because the compute instance runtime is user isolated, you need to create your own compute instance runtime or select a managed online deployment/endpoint runtime, which can be shared with others.
-### Find Python packages installed in runtime
+### Find Python packages installed in compute instance runtime
-Follow these steps to find Python packages installed in runtime:
+Follow these steps to find Python packages installed in compute instance runtime:
- Add a Python node in your flow. - Put the following code in the code section:
Follow these steps to find Python packages installed in runtime:
### How to find the raw inputs and outputs of the LLM tool for further investigation?
-In prompt flow, on flow page with successful run and run detail page, you can find the raw inputs and outputs of LLM tool in the output section. Click the `view full output` button to view full output.
+In prompt flow, on the flow page with a successful run and on the run detail page, you can find the raw inputs and outputs of the LLM tool in the output section. Select the `view full output` button to view the full output.
:::image type="content" source="../media/faq/view-full-output.png" alt-text="Screenshot that shows view full output on LLM node." lightbox = "../media/faq/view-full-output.png":::
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-forecast.md
Previously updated : 11/18/2021 Last updated : 01/28/2024 show_latex: true
For time series forecasting, only **Rolling Origin Cross Validation (ROCV)** is
:::image type="content" source="../media/how-to-auto-train-forecast/rolling-origin-cross-validation.png" alt-text="Diagram showing cross validation folds separates the training and validation sets based on the cross validation step size.":::
-Pass your training and validation data as one dataset to the parameter `training_data`. Set the number of cross validation folds with the parameter `n_cross_validations` and set the number of periods between two consecutive cross-validation folds with `cv_step_size`. You can also leave either or both parameters empty and AutoML will set them automatically.
+Pass your training and validation data as one dataset to the parameter `training_data`. Set the number of cross validation folds with the parameter `n_cross_validations` and set the number of periods between two consecutive cross-validation folds with `cv_step_size`. You can also leave either or both parameters empty and AutoML sets them automatically.
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
The [`AutoMLConfig`](/python/api/azureml-train-automl-client/azureml.train.autom
### Supported models
-Automated machine learning automatically tries different models and algorithms as part of the model creation and tuning process. As a user, there is no need for you to specify the algorithm. For forecasting experiments, both native time-series and deep learning models are part of the recommendation system.
+Automated machine learning automatically tries different models and algorithms as part of the model creation and tuning process. As a user, there's no need for you to specify the algorithm. For forecasting experiments, both native time-series and deep learning models are part of the recommendation system.
>[!Tip] > Traditional regression models are also tested as part of the recommendation system for forecasting experiments. See a complete list of the [supported models](/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels) in the SDK reference documentation.
Similar to a regression problem, you define standard training parameters like ta
|`forecast_horizon`|Defines how many periods forward you would like to forecast. The horizon is in units of the time series frequency. Units are based on the time interval of your training data, for example, monthly, weekly that the forecaster should predict out.| The following code,
-* Leverages the [`ForecastingParameters`](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters) class to define the forecasting parameters for your experiment training
+* Uses the [`ForecastingParameters`](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters) class to define the forecasting parameters for your experiment training
* Sets the `time_column_name` to the `day_datetime` field in the data set. * Sets the `forecast_horizon` to 50 in order to predict for the entire test set.
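A minimal sketch of that configuration, where the `freq` value is an assumption about the data cadence:

```python
from azureml.automl.core.forecasting_parameters import ForecastingParameters

forecasting_parameters = ForecastingParameters(
    time_column_name="day_datetime",  # time column in the training data
    forecast_horizon=50,              # predict 50 periods ahead to cover the test set
    freq="D",                         # assumption: daily data cadence
)
```

You then pass the resulting `forecasting_parameters` object to the `AutoMLConfig` constructor along with your other training settings.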
The following formula calculates the amount of historic data that what would be
Minimum historic data required: (2x `forecast_horizon`) + #`n_cross_validations` + max(max(`target_lags`), `target_rolling_window_size`)
-An `Error exception` is raised for any series in the dataset that does not meet the required amount of historic data for the relevant settings specified.
+An `Error exception` is raised for any series in the dataset that doesn't meet the required amount of historic data for the relevant settings specified.
### Featurization steps
However, the following steps are performed only for `forecasting` task types:
* Detect time-series sample frequency (for example, hourly, daily, weekly) and create new records for absent time points to make the series continuous. * Impute missing values in the target (via forward-fill) and feature columns (using median column values) * Create features based on time series identifiers to enable fixed effects across different series
-* Create time-based features to assist in learning seasonal patterns
+* Create time-based features to help learn seasonal patterns
* Encode categorical variables to numeric quantities
-* Detect the non-stationary time series and automatically differencing them to mitigate the impact of unit roots.
+* Detect nonstationary time series and automatically difference them to mitigate the impact of unit roots.
To view the full list of possible engineered features generated from time series data, see [TimeIndexFeaturizer Class](/python/api/azureml-automl-runtime/azureml.automl.runtime.featurizer.transformer.timeseries.time_index_featurizer).
Supported customizations for `forecasting` tasks include:
|Customization|Definition| |--|--|
-|**Column purpose update**|Override the auto-detected feature type for the specified column.|
+|**Column purpose update**|Override the autodetected feature type for the specified column.|
|**Transformer parameter update** |Update the parameters for the specified transformer. Currently supports *Imputer* (fill_value and median).| |**Drop columns** |Specifies columns to drop from being featurized.|
If you're using the Azure Machine Learning studio for your experiment, see [how
## Optional configurations
-Additional optional configurations are available for forecasting tasks, such as enabling deep learning and specifying a target rolling window aggregation. A complete list of additional parameters is available in the [ForecastingParameters SDK reference documentation](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters).
+More optional configurations are available for forecasting tasks, such as enabling deep learning and specifying a target rolling window aggregation. A complete list of these parameters is available in the [ForecastingParameters SDK reference documentation](/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters).
### Frequency & target data aggregation
-Leverage the frequency, `freq`, parameter to help avoid failures caused by irregular data, that is data that doesn't follow a set cadence, like hourly or daily data.
+Use the frequency, `freq`, parameter to help avoid failures caused by irregular data. Irregular data is data that doesn't follow a set cadence, such as an hourly or daily interval.
-For highly irregular data or for varying business needs, users can optionally set their desired forecast frequency, `freq`, and specify the `target_aggregation_function` to aggregate the target column of the time series. Leverage these two settings in your `AutoMLConfig` object can help save some time on data preparation.
+For highly irregular data or for varying business needs, users can optionally set their desired forecast frequency, `freq`, and specify the `target_aggregation_function` to aggregate the target column of the time series. Using these two settings in your `AutoMLConfig` object can help save time on data preparation.
Supported aggregation operations for target column values include:
To enable DNN for an AutoML experiment created in the Azure Machine Learning stu
### Target rolling window aggregation
-Often the best information a forecaster can have is the recent value of the target. Target rolling window aggregations allow you to add a rolling aggregation of data values as features. Generating and using these features as extra contextual data helps with the accuracy of the train model.
+Often the best information for a forecaster is the recent value of the target. Target rolling window aggregations allow you to add a rolling aggregation of data values as features. Generating and using these features as extra contextual data helps with the accuracy of the trained model.
For example, say you want to predict energy demand. You might want to add a rolling window feature of three days to account for thermal changes of heated spaces. In this example, create this window by setting `target_rolling_window_size= 3` in the `AutoMLConfig` constructor.
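As a hedged sketch of that setting, the window can be supplied through `ForecastingParameters` (or passed directly to `AutoMLConfig`, as described above); the column name and horizon below are placeholders.

```python
forecasting_parameters = ForecastingParameters(
    time_column_name='timestamp',    # placeholder time column
    forecast_horizon=14,             # placeholder horizon
    target_rolling_window_size=3     # add a rolling aggregation over the last 3 periods
)
```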
View a Python code example applying the [target rolling window aggregate feature
### Short series handling
-Automated ML considers a time series a **short series** if there are not enough data points to conduct the train and validation phases of model development. The number of data points varies for each experiment, and depends on the max_horizon, the number of cross validation splits, and the length of the model lookback, that is the maximum of history that's needed to construct the time-series features.
+Automated ML considers a time series a **short series** if there aren't enough data points to conduct the train and validation phases of model development. The number of data points varies for each experiment, and depends on the max_horizon, the number of cross validation splits, and the length of the model lookback, that is the maximum of history that's needed to construct the time-series features.
Automated ML offers short series handling by default with the `short_series_handling_configuration` parameter in the `ForecastingParameters` object.
-To enable short series handling, the `freq` parameter must also be defined. To define an hourly frequency, we will set `freq='H'`. View the frequency string options by visiting the [pandas Time series page DataOffset objects section](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects). To change the default behavior, `short_series_handling_configuration = 'auto'`, update the `short_series_handling_configuration` parameter in your `ForecastingParameter` object.
+To enable short series handling, the `freq` parameter must also be defined. To define an hourly frequency, we'll set `freq='H'`. View the frequency string options by visiting the [pandas Time series page DataOffset objects section](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects). To change the default behavior, `short_series_handling_configuration = 'auto'`, update the `short_series_handling_configuration` parameter in your `ForecastingParameter` object.
```python from azureml.automl.core.forecasting_parameters import ForecastingParameters
The following table summarizes the available settings for `short_series_handling
|| |`auto`| The default value for short series handling. <br> - _If all series are short_, pad the data. <br> - _If not all series are short_, drop the short series. |`pad`| If `short_series_handling_config = pad`, then automated ML adds random values to each short series found. The following lists the column types and what they're padded with: <br> - Object columns with NaNs <br> - Numeric columns with 0 <br> - Boolean/logic columns with False <br> - The target column is padded with random values with mean of zero and standard deviation of 1.
-|`drop`| If `short_series_handling_config = drop`, then automated ML drops the short series, and it will not be used for training or prediction. Predictions for these series will return NaN's.
+|`drop`| If `short_series_handling_config = drop`, then automated ML drops the short series, and it isn't used for training or prediction. Predictions for these series return NaNs.
|`None`| No series is padded or dropped >[!WARNING] >Padding may impact the accuracy of the resulting model, since we are introducing artificial data just to get past training without failures. If many of the series are short, then you may also see some impact in explainability results
-### Non-stationary time series detection and handling
+### Nonstationary time series detection and handling
-A time series whose moments (mean and variance) change over time is called a **non-stationary**. For example, time series that exhibit stochastic trends are non-stationary by nature. To visualize this, the below image plots a series that is generally trending upward. Now, compute and compare the mean (average) values for the first and the second half of the series. Are they the same? Here, the mean of the series in the first half of the plot is significantly smaller than in the second half. The fact that the mean of the series depends on the time interval one is looking at, is an example of the time-varying moments. Here, the mean of a series is the first moment.
+A time series whose moments (mean and variance) change over time is called **non-stationary**. For example, time series that exhibit stochastic trends are non-stationary by nature. To visualize this, the following image plots a series that is generally trending upward. Now, compute and compare the mean (average) values for the first and the second half of the series. Are they the same? Here, the mean of the series in the first half of the plot is smaller than in the second half. The fact that the mean of the series depends on the time interval you're looking at is an example of time-varying moments. Here, the mean of a series is the first moment.
:::image type="content" source="../media/how-to-auto-train-forecast/non-stationary-retail-sales.png" alt-text="Diagram showing retail sales for a non-stationary time series.":::
-Next, let's examine the image below, which plots the original series in first differences, $x_t = y_t - y_{t-1}$ where $x_t$ is the change in retail sales and $y_t$ and $y_{t-1}$ represent the original series and its first lag, respectively. The mean of the series is roughly constant regardless the time frame one is looking at. This is an example of a first order stationary times series. The reason we added the first order term is because the first moment (mean) does not change with time interval, the same cannot be said about the variance, which is a second moment.
+Next, let's examine the following image, which plots the original series in first differences, $x_t = y_t - y_{t-1}$ where $x_t$ is the change in retail sales and $y_t$ and $y_{t-1}$ represent the original series and its first lag, respectively. The mean of the series is roughly constant regardless of the time frame you're looking at. This is an example of a first order stationary time series. We add the qualifier *first order* because, while the first moment (mean) doesn't change with the time interval, the same can't be said about the variance, which is the second moment.
:::image type="content" source="../media/how-to-auto-train-forecast/weakly-stationary-retail-sales.png" alt-text="Diagram showing retail sales for a weakly stationary time series.":::
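As a quick illustration of the differencing transform, first differences can be computed directly with pandas; `retail_sales` below is an invented series, not data from this article.

```python
import pandas as pd

# Hypothetical retail sales series with a general upward trend
retail_sales = pd.Series([100, 104, 103, 110, 115, 121, 119, 128, 134, 140])

# First differences: x_t = y_t - y_{t-1}; the first value is NaN because there's no prior lag
first_differences = retail_sales.diff()
print(first_differences)
```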
-AutoML Machine learning models can not inherently deal with stochastic trends, or other well-known problems associated with non-stationary time series. As a result, their out of sample forecast accuracy will be "poor" if such trends are present.
+AutoML machine learning models can't inherently deal with stochastic trends or other well-known problems associated with non-stationary time series. As a result, their out-of-sample forecast accuracy is "poor" if such trends are present.
-AutoML automatically analyzes time series dataset to check whether it is stationary or not. When non-stationary time series are detected, AutoML applies a differencing transform automatically to mitigate the impact of non-stationary time series.
+AutoML automatically analyzes the time series dataset to check whether it's stationary. When non-stationary time series are detected, AutoML automatically applies a differencing transform to mitigate their impact.
## Run the experiment
Use the best model iteration to forecast values for data that wasn't used to tra
### Evaluating model accuracy with a rolling forecast
-Before you put a model into production, you should evaluate its accuracy on a test set held out from the training data. A best practice procedure is a so-called rolling evaluation which rolls the trained forecaster forward in time over the test set, averaging error metrics over several prediction windows to obtain statistically robust estimates for some set of chosen metrics. Ideally, the test set for the evaluation is long relative to the model's forecast horizon. Estimates of forecasting error may otherwise be statistically noisy and, therefore, less reliable.
+Before you put a model into production, you should evaluate its accuracy on a test set held out from the training data. A best practice procedure is a so-called rolling evaluation, which rolls the trained forecaster forward in time over the test set, averaging error metrics over several prediction windows to obtain statistically robust estimates for some set of chosen metrics. Ideally, the test set for the evaluation is long relative to the model's forecast horizon. Estimates of forecasting error may otherwise be statistically noisy and, therefore, less reliable.
-For example, suppose you train a model on daily sales to predict demand up to two weeks (14 days) into the future. If there is sufficient historic data available, you might reserve the final several months to even a year of the data for the test set. The rolling evaluation begins by generating a 14-day-ahead forecast for the first two weeks of the test set. Then, the forecaster is advanced by some number of days into the test set and you generate another 14-day-ahead forecast from the new position. The process continues until you get to the end of the test set.
+For example, suppose you train a model on daily sales to predict demand up to two weeks (14 days) into the future. If there's sufficient historic data available, you might reserve the final several months to even a year of the data for the test set. The rolling evaluation begins by generating a 14-day-ahead forecast for the first two weeks of the test set. Then, the forecaster is advanced by some number of days into the test set and you generate another 14-day-ahead forecast from the new position. The process continues until you get to the end of the test set.
To do a rolling evaluation, you call the `rolling_forecast` method of the `fitted_model`, then compute desired metrics on the result. For example, assume you have test set features in a pandas DataFrame called `test_features_df` and the test set actual values of the target in a numpy array called `test_target`. A rolling evaluation using the mean squared error is shown in the following code sample:
mse = mean_squared_error(
    rolling_forecast_df[fitted_model.actual_column_name],
    rolling_forecast_df[fitted_model.forecast_column_name])
```
-In this sample, the step size for the rolling forecast is set to one which means that the forecaster is advanced one period, or one day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size. For more details and examples see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
+In this sample, the step size for the rolling forecast is set to one, which means that the forecaster is advanced one period, or one day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size. For more details and examples, see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
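For reference, here's a minimal sketch of producing the rolling forecast itself with an explicit step size; `fitted_model`, `test_features_df`, and `test_target` are assumed to be defined as described above.

```python
# Advance the forecast origin one period (one day in the demand example) per iteration.
# Larger step values produce fewer, more widely spaced forecast windows.
rolling_forecast_df = fitted_model.rolling_forecast(
    test_features_df,   # test set features (pandas DataFrame)
    test_target,        # test set actuals (numpy array)
    step=1)
```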
### Prediction into the future
-The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function allows specifications of when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. The forecast_quantiles() method by default generates a point forecast or a mean/median forecast which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
+The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function allows specifications of when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. The forecast_quantiles() method by default generates a point forecast or a mean/median forecast, which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
In the following example, you first replace all values in `y_pred` with `NaN`. The forecast origin is at the end of training data in this case. However, if you replaced only the second half of `y_pred` with `NaN`, the function would leave the numerical values in the first half unmodified, but forecast the `NaN` values in the second half. The function returns both the forecasted values and the aligned features.
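A minimal sketch of that pattern follows; `X_test` is a placeholder for your own test-set features, and the return value is assigned to a single variable because its exact shape depends on the SDK version (see the linked reference).

```python
import numpy as np

# Replace all values in y_pred with NaN so the forecast origin is the end of training data
y_pred = np.full(len(y_pred), np.nan)

# Request forecasts starting immediately after the training data
quantile_forecast = fitted_model.forecast_quantiles(X_test, y_pred)
```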
Grouping is a concept in time series forecasting that allows time series to be c
### Many models
-The Azure Machine Learning many models solution with automated machine learning allows users to train and manage millions of models in parallel. Many models The solution accelerator leverages [Azure Machine Learning pipelines](../concept-ml-pipelines.md) to train the model. Specifically, a [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline%28class%29) object and [ParalleRunStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunstep) are used and require specific configuration parameters set through the [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig).
+The Azure Machine Learning many models solution with automated machine learning allows users to train and manage millions of models in parallel. The many models solution accelerator uses [Azure Machine Learning pipelines](../concept-ml-pipelines.md) to train the model. Specifically, a [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline%28class%29) object and [ParallelRunStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunstep) are used and require specific configuration parameters set through the [ParallelRunConfig](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig).
The following diagram shows the workflow for the many models solution.
mm_paramters = ManyModelsTrainParameters(automl_settings=automl_settings, partit
### Hierarchical time series forecasting
-In most applications, customers have a need to understand their forecasts at a macro and micro level of the business; whether that be predicting sales of products at different geographic locations, or understanding the expected workforce demand for different organizations at a company. The ability to train a machine learning model to intelligently forecast on hierarchy data is essential.
+In most applications, customers have a need to understand their forecasts at a macro and micro level of the business. Forecasts might predict sales of products at different geographic locations, or estimate the expected workforce demand for different organizations at a company. The ability to train a machine learning model to intelligently forecast on hierarchy data is essential.
-A hierarchical time series is a structure in which each of the unique series are arranged into a hierarchy based on dimensions such as, geography or product type. The following example shows data with unique attributes that form a hierarchy. Our hierarchy is defined by: the product type such as headphones or tablets, the product category which splits product types into accessories and devices, and the region the products are sold in.
+A hierarchical time series is a structure in which each of the unique series is arranged into a hierarchy based on dimensions such as geography or product type. The following example shows data with unique attributes that form a hierarchy. Our hierarchy is defined by: the product type such as headphones or tablets, the product category, which splits product types into accessories and devices, and the region the products are sold in.
![Example raw data table for hierarchical data](../media/how-to-auto-train-forecast/hierarchy-data-table.svg)
machine-learning How To Auto Train Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-models-v1.md
Previously updated : 11/04/2022 Last updated : 01/25/2023
This process accepts training data and configuration settings, and automatically
![Flow diagram](./media/how-to-auto-train-models/flow2.png)
-You'll write code using the Python SDK in this article. You'll learn the following tasks:
+You write code using the Python SDK in this article. You learn the following tasks:
> [!div class="checklist"] > * Download, transform, and clean data using Azure Open Datasets
from datetime import datetime
from dateutil.relativedelta import relativedelta ```
-Begin by creating a dataframe to hold the taxi data. When working in a non-Spark environment, Open Datasets only allows downloading one month of data at a time with certain classes to avoid `MemoryError` with large datasets.
+Begin by creating a dataframe to hold the taxi data. When you work in a non-Spark environment, Open Datasets only allows downloading one month of data at a time with certain classes to avoid `MemoryError` with large datasets.
To download taxi data, iteratively fetch one month at a time, and before appending it to `green_taxi_df` randomly sample 2,000 records from each month to avoid bloating the dataframe. Then preview the data.
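A minimal sketch of that loop follows, assuming the `NycTlcGreen` class from the `azureml-opendatasets` package; the date range and month count are illustrative.

```python
import pandas as pd
from datetime import datetime
from dateutil.relativedelta import relativedelta
from azureml.opendatasets import NycTlcGreen

green_taxi_df = pd.DataFrame([])
start = datetime.strptime("1/1/2015", "%m/%d/%Y")
end = datetime.strptime("1/31/2015", "%m/%d/%Y")

# Fetch one month at a time and keep a 2,000-record sample from each month
for sample_month in range(12):
    temp_df_green = NycTlcGreen(
        start_date=start + relativedelta(months=sample_month),
        end_date=end + relativedelta(months=sample_month)
    ).to_pandas_dataframe()
    green_taxi_df = pd.concat([green_taxi_df, temp_df_green.sample(2000)])
```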
green_taxi_df.head(10)
|150436|2|2015-01-11 17:15:14|2015-01-11 17:22:57|1|1.19|None|None|-73.94|40.71|-73.95|...|1|7.00|0.00|0.50|0.3|1.75|0.00|nan|9.55| |432136|2|2015-01-22 23:16:33 2015-01-22 23:20:13 1 0.65|None|None|-73.94|40.71|-73.94|...|2|5.00|0.50|0.50|0.3|0.00|0.00|nan|6.30|
-Remove some of the columns that you won't need for training or additional feature building. Automate machine learning will automatically handle time-based features such as **lpepPickupDatetime**.
+Remove some of the columns that you won't need for training or other feature building. Automated machine learning automatically handles time-based features such as **lpepPickupDatetime**.
```python columns_to_remove = ["lpepDropoffDatetime", "puLocationId", "doLocationId", "extra", "mtaTax",
green_taxi_df.describe()
|max|2.00|9.00|97.57|0.00|41.93|0.00|41.94|450.00|12.00|30.00|
-From the summary statistics, you see that there are several fields that have outliers or values that will reduce model accuracy. First filter the lat/long fields to be within the bounds of the Manhattan area. This will filter out longer taxi trips or trips that are outliers in respect to their relationship with other features.
+From the summary statistics, you see that there are several fields that have outliers or values that reduce model accuracy. First filter the lat/long fields to be within the bounds of the Manhattan area. This filters out longer taxi trips or trips that are outliers with respect to their relationship with other features.
Additionally filter the `tripDistance` field to be greater than zero but less than 31 miles (the haversine distance between the two lat/long pairs). This eliminates long outlier trips that have inconsistent trip cost.
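A hedged sketch of those filters is shown here; the latitude/longitude column names and bounding values are assumptions chosen to approximate the Manhattan area, so adjust them to your own schema.

```python
# Approximate Manhattan-area bounds (assumed values); column names are assumed too
final_df = green_taxi_df.query("pickupLatitude >= 40.53 and pickupLatitude <= 40.88")
final_df = final_df.query("pickupLongitude >= -74.09 and pickupLongitude <= -73.72")
final_df = final_df.query("dropoffLatitude >= 40.53 and dropoffLatitude <= 40.88")
final_df = final_df.query("dropoffLongitude >= -74.09 and dropoffLongitude <= -73.72")

# Keep trips with a positive distance under 31 miles to drop inconsistent outliers
final_df = final_df.query("tripDistance > 0 and tripDistance < 31")
```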
To automatically train a model, take the following steps:
### Define training settings
-Define the experiment parameter and model settings for training. View the full list of [settings](how-to-configure-auto-train.md). Submitting the experiment with these default settings will take approximately 5-20 min, but if you want a shorter run time, reduce the `experiment_timeout_hours` parameter.
+Define the experiment parameter and model settings for training. View the full list of [settings](how-to-configure-auto-train.md). Submitting the experiment with these default settings takes approximately 5-20 minutes, but if you want a shorter run time, reduce the `experiment_timeout_hours` parameter.
|Property| Value in this article |Description| |-|-|| |**iteration_timeout_minutes**|10|Time limit in minutes for each iteration. Increase this value for larger datasets that need more time for each iteration.| |**experiment_timeout_hours**|0.3|Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|
-|**enable_early_stopping**|True|Flag to enable early termination if the score is not improving in the short term.|
-|**primary_metric**| spearman_correlation | Metric that you want to optimize. The best-fit model will be chosen based on this metric.|
+|**enable_early_stopping**|True|Flag to enable early termination if the score isn't improving in the short term.|
+|**primary_metric**| spearman_correlation | Metric that you want to optimize. The best-fit model is chosen based on this metric.|
|**featurization**| auto | By using **auto**, the experiment can preprocess the input data (handling missing data, converting text to numeric, etc.)| |**verbosity**| logging.INFO | Controls the level of logging.|
-|**n_cross_validations**|5|Number of cross-validation splits to perform when validation data is not specified.|
+|**n_cross_validations**|5|Number of cross-validation splits to perform when validation data isn't specified.|
```python import logging
The traditional machine learning model development process is highly resource-in
## Clean up resources
-Do not complete this section if you plan on running other Azure Machine Learning tutorials.
+Don't complete this section if you plan on running other Azure Machine Learning tutorials.
### Stop the compute instance
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-inferencing-vnet.md
Previously updated : 07/28/2022 Last updated : 01/31/2024
When your Azure Machine Learning workspace is configured with a private endpoint
To add AKS in a virtual network to your workspace, use the following steps: 1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/), and then select your subscription and workspace.
-1. Select __Compute__ on the left, __Inference clusters__ from the center, and then select __+ New__.
+1. Select __Compute__ on the left, __Inference clusters__ from the center, and then select __+ New__. Finally, select __AksCompute__.
:::image type="content" source="./media/how-to-secure-inferencing-vnet/create-inference.png" alt-text="Screenshot of create inference cluster dialog.":::
-1. From the __Create inference cluster__ dialog, select __Create new__ and the VM size to use for the cluster. Finally, select __Next__.
+1. From the __Create AksCompute__ dialog, select __Create new__, the __Location__ and the VM size to use for the cluster. Finally, select __Next__.
:::image type="content" source="./media/how-to-secure-inferencing-vnet/create-inference-vm.png" alt-text="Screenshot of VM settings.":::
To add AKS in a virtual network to your workspace, use the following steps:
> [!IMPORTANT] > Keep the default outbound rules for the NSG. For more information, see the default security rules in [Security groups](../../virtual-network/network-security-groups-overview.md#default-security-rules).
- ![Screenshot that shows an inbound security rule.](./media/how-to-secure-inferencing-vnet/aks-vnet-inbound-nsg-scoring.png)](./media/how-to-secure-inferencing-vnet/aks-vnet-inbound-nsg-scoring.png#lightbox)
+ :::image type="content" source="./media/how-to-secure-inferencing-vnet/aks-vnet-inbound-nsg-scoring.png" alt-text="Screenshot that shows an inbound security rule." lightbox="./media/how-to-secure-inferencing-vnet/aks-vnet-inbound-nsg-scoring.png":::
> [!IMPORTANT] > The IP address shown in the image for the scoring endpoint will be different for your deployments. While the same IP is shared by all deployments to one AKS cluster, each AKS cluster will have a different IP address.
managed-grafana Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md
description: Learn about current technical or feature limitations you may encounter in the Azure Managed Grafana service. Previously updated : 10/18/2023 Last updated : 01/23/2024
Azure Managed Grafana has the following known limitations:
| Team sync with Microsoft Entra ID | &#x274C; | &#x274C; | | Enterprise plugins | &#x274C; | &#x274C; |
-## Quotas
+## Throttling limits and quotas
The following quotas apply to the Essential (preview) and Standard plans.
-| Limit | Description | Essential | Standard |
-|--|--|--||
-| Alert rules | Maximum number of alert rules that can be created | Not supported | 500 per instance |
-| Dashboards | Maximum number of dashboards that can be created | 20 per instance | Unlimited |
-| Data sources | Maximum number of datasources that can be created | 5 per instance | Unlimited |
-| API keys | Maximum number of API keys that can be created | 2 per instance | 100 per instance |
-| Data query timeout | Maximum wait duration for the reception of data query response headers, before Grafana times out | 200 seconds | 200 seconds |
-| Data source query size | Maximum number of bytes that are read/accepted from responses of outgoing HTTP requests | 80 MB | 80 MB |
-| Render image or PDF report wait time | Maximum duration for an image or report PDF rendering request to complete before Grafana times out. | Not supported | 220 seconds |
-| Instance count | Maximum number of instances in a single subscription per Azure region | 1 | 20 |
+| Limit | Description | Essential | Standard |
+|--|-|||
+| Alert rules | Maximum number of alert rules that can be created. | Not supported | 500 per instance |
+| Dashboards | Maximum number of dashboards that can be created. | 20 per instance | Unlimited |
+| Data sources | Maximum number of datasources that can be created. | 5 per instance | Unlimited |
+| API keys | Maximum number of API keys that can be created. | 2 per instance | 100 per instance |
+| Data query timeout | Maximum wait duration for the reception of data query response headers, before Grafana times out. | 200 seconds | 200 seconds |
+| Data source query size | Maximum number of bytes that are read/accepted from responses of outgoing HTTP requests. | 80 MB | 80 MB |
+| Render image or PDF report wait time | Maximum duration for an image or report PDF rendering request to complete before Grafana times out. | Not supported | 220 seconds |
+| Instance count | Maximum number of instances in a single subscription per Azure region. | 1 | 20 |
+| Requests per IP | Maximum number of requests per IP per second. | 90 requests per second | 90 requests per second |
+| Requests per HTTP host | Maximum number of requests per HTTP host per second. The HTTP host stands for the Host header in incoming HTTP requests, which can describe each unique host client. | 45 requests per second | 45 requests per second |
+
+Each data source also has its own limits that can be reflected in Azure Managed Grafana dashboards, alerts and reports. We recommend that you research these limits in the documentation of each data source provider. For instance:
+
+* Refer to [Azure Monitor](/azure/azure-monitor/service-limits) to learn about Azure Monitor service limits including alerts, Prometheus metrics, data collection, logs and more.
+* Refer to [Azure Data Explorer](/azure/data-explorer/kusto/concepts/querylimits) to learn about Azure Data Explorer service limits.
## Next steps
managed-grafana Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/overview.md
The [Azure Managed Grafana pricing page](https://azure.microsoft.com/pricing/det
## Quotas
-Different quotas apply to Azure Managed Grafana service instances depending on their service tiers. For a list of the quotas that apply to the Essential (preview) and Standard pricing plans, see [quotas](known-limitations.md#quotas).
+Different quotas apply to Azure Managed Grafana service instances depending on their service tiers. For a list of the quotas that apply to the Essential (preview) and Standard pricing plans, see [quotas](known-limitations.md#throttling-limits-and-quotas).
## Next steps
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 01/22/2024 Last updated : 01/30/2024 # Azure Policy built-in definitions for Azure Database for MariaDB
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
For example, if you have provisioned 1000 GB of storage, and the actual utilizat
Remember that once storage is auto-scaled up, it can't be scaled down.
-## IOPS
+>[!NOTE]
+> Storage autogrow is enabled by default for a High-Availability configured server and can't be disabled.
-Azure Database for MySQL flexible server supports the provisioning of additional IOPS. This feature enables you to provision additional IOPS above the complimentary IOPS limit. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time.
+## IOPS
-The minimum IOPS are 360 across all compute sizes and the maximum IOPS is determined by the selected compute size. To learn more about the maximum IOPS per compute size refer to the [table](#service-tiers-size-and-server-types).
+Azure Database for MySQL flexible server supports pre-provisioned IOPS and autoscale IOPS. [Learn more.](./concepts-storage-iops.md) The minimum IOPS are 360 across all compute sizes and the maximum IOPS is determined by the selected compute size. To learn more about the maximum IOPS per compute size refer to the [table](#service-tiers-size-and-server-types).
> [!Important]
-> **Minimum IOPS are 360 across all compute sizes<br>
-> **Maximum IOPS are determined by the selected compute size.
+> **Minimum IOPS are 360 across all compute sizes <br>
+> **Maximum IOPS are determined by the selected compute size. <br>
You can monitor your I/O consumption in the Azure portal (with Azure Monitor) using [IO percent](./concepts-monitoring.md) metric. If you need more IOPS than the max IOPS based on compute, then you need to scale your server's compute.
+## Pre-provisioned IOPS
+Azure Database for MySQL flexible server offers pre-provisioned IOPS, allowing you to allocate a specific number of IOPS to your Azure Database for MySQL flexible server instance. This setting ensures consistent and predictable performance for your workloads. With pre-provisioned IOPS, you can define a specific IOPS limit for your storage volume, guaranteeing the ability to handle a certain number of requests per second. This results in a reliable and assured level of performance. Pre-provisioned IOPS enables you to provision **additional IOPS** above the IOPS limit. Using this feature, you can increase or decrease the number of IOPS provisioned based on your workload requirements at any time.
+ ## Autoscale IOPS
-The cornerstone of Azure Database for MySQL flexible server is its ability to achieve the best performance for tier 1 workloads, which can be improved by enabling server automatically scale performance (IO) of its database servers seamlessly depending on the workload needs. This is an opt-in feature that enables users to scale IOPS on demand without having to pre-provision a certain amount of IO per second. With the Autoscale IOPS featured enable, you can now enjoy worry free IO management in Azure Database for MySQL flexible server because the server scales IOPs up or down automatically depending on workload needs.ΓÇ»
+The cornerstone of Azure Database for MySQL flexible server is its ability to achieve the best performance for tier 1 workloads, which can be improved by letting the server automatically and seamlessly scale the IO performance of its database servers depending on workload needs. This is an opt-in feature that enables users to scale IOPS on demand without having to pre-provision a certain amount of IO per second. With the Autoscale IOPS feature enabled, you can enjoy worry-free IO management in Azure Database for MySQL flexible server because the server scales IOPS up or down automatically depending on workload needs.
With Autoscale IOPS, you pay only for the IO the server uses and no longer need to provision and pay for resources that aren't fully used, saving both time and money. In addition, mission-critical Tier-1 applications can achieve consistent performance by making additional IO available to the workload at any time. Autoscale IOPS eliminates the administration required to provide the best performance at the least cost for Azure Database for MySQL flexible server customers.
+**Dynamic Scaling**: Autoscale IOPS dynamically adjusts the IOPS limit of your database server based on the actual demand of your workload. This ensures optimal performance without manual intervention or configuration.
+
+**Handling Workload Spikes**: Autoscale IOPS enables your database to seamlessly handle workload spikes or fluctuations without compromising the performance of your applications. This feature ensures consistent responsiveness even during peak usage periods.
+
+**Cost Savings**: Unlike pre-provisioned IOPS, where a fixed IOPS limit is specified and paid for regardless of usage, Autoscale IOPS lets you pay only for the number of I/O operations that you consume.
+++ ## Backup The service automatically takes backups of your server. You can select a retention period from a range of 1 to 35 days. Learn more about backups in the [backup and restore concepts article](concepts-backup-restore.md).
If you would like to optimize server cost, you can consider following tips:
## Next steps -- Learn how to [create a Azure Database for MySQL flexible server instance in the portal](quickstart-create-server-portal.md).
+- Learn how to [create an Azure Database for MySQL flexible server instance in the portal](quickstart-create-server-portal.md).
- Learn about [service limitations](concepts-limitations.md).
mysql Concepts Storage Iops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-storage-iops.md
Moreover, Additional IOPS with pre-provisioned refers to the flexibility of incr
Autoscale IOPS offers the flexibility to scale IOPS on demand, eliminating the need to pre-provision a specific amount of IO per second. By enabling Autoscale IOPS, your server automatically adjusts IOPS based on workload requirements. With the Autoscale IOPS feature enabled, you can enjoy worry-free IO management in Azure Database for MySQL flexible server because the server scales IOPS up or down automatically depending on workload needs.
-**Dynamic Scaling**: Autoscale IOPS dynamically adjust the IOPS limit of your database server based on the actual demand of your workload. This ensures optimal performance without manual intervention or configuration
+**Dynamic Scaling**: Autoscale IOPS dynamically adjusts the IOPS limit of your database server based on the actual demand of your workload. This ensures optimal performance without manual intervention or configuration.
+ **Handling Workload Spikes**: Autoscale IOPS enables your database to seamlessly handle workload spikes or fluctuations without compromising the performance of your applications. This feature ensures consistent responsiveness even during peak usage periods.+ **Cost Savings**: Unlike pre-provisioned IOPS, where a fixed IOPS limit is specified and paid for regardless of usage, Autoscale IOPS lets you pay only for the number of I/O operations that you consume. With this feature, you're only charged for the IO your server actually utilizes, avoiding unnecessary provisioning and expenses for underutilized resources. This ensures both cost savings and optimal performance, making it a smart choice for managing your database workload efficiently.
mysql Tutorial Deploy Springboot On Aks Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-springboot-on-aks-vnet.md
In this tutorial, you'll learn how to deploy a [Spring Boot](https://spring.io/p
> [!NOTE] > This tutorial assumes a basic understanding of Kubernetes concepts, Java Spring Boot and MySQL.
+> For Spring Boot applications, we recommend using Azure Spring Apps. However, you can still use Azure Kubernetes Service as a destination.
## Prerequisites
mysql Migrate Single Flexible In Place Auto Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md
The in-place migration provides a highly resilient and self-healing offline migr
> [!NOTE] > In-place migration is only for Single Server database workloads with Basic or GP SKU, data storage used < 10 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate.
-## What's new?
-* If you own a Single Server workload with Basic or GP SKU, data storage used <= 20 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for auto-migration by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u). (Sept 2023)
+## Eligibility
+* If you own a Single Server workload with Basic or GP SKU, data storage used <= 20 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for auto-migration by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u).
## Configure migration alerts and review migration schedule
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 01/22/2024 Last updated : 01/30/2024 # Azure Policy built-in definitions for Azure Database for MySQL
nat-gateway Nat Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-metrics.md
Title: Metrics and alerts for Azure NAT Gateway-
-description: Understand Azure Monitor metrics and alerts available for NAT gateway.
+
+description: Get started learning about Azure Monitor metrics and alerts available for monitoring Azure NAT Gateway.
-# Customer intent: As an IT administrator, I want to understand available Azure Monitor metrics and alerts for Virtual Network NAT.
- Previously updated : 04/12/2022+ Last updated : 01/30/2024
+# Customer intent: As an IT administrator, I want to understand available Azure Monitor metrics and alerts for Virtual Network NAT.
-# Azure NAT Gateway metrics and alerts
+# What are Azure NAT Gateway metrics and alerts?
This article provides an overview of all NAT gateway metrics and diagnostic capabilities, along with general guidance on how to use metrics and alerts to monitor, manage, and [troubleshoot](troubleshoot-nat.md) your NAT gateway resource.
Azure NAT Gateway provides the following diagnostic capabilities:
- Network Insights: Azure Monitor Insights provides you with visual tools to view, monitor, and assist you in diagnosing issues with your NAT gateway resource. Insights provide you with a topological map of your Azure setup and metrics dashboards. *Figure: Azure NAT Gateway for outbound to Internet*
NAT gateway provides the following multi-dimensional metrics in Azure Monitor:
| Metric | Description | Recommended aggregation | Dimensions | |||||
-| Bytes | Bytes processed inbound and outbound | Sum | Direction (In; Out), Protocol (6 TCP; 17 UDP) |
-| Packets | Packets processed inbound and outbound | Sum | Direction (In; Out), Protocol (6 TCP; 17 UDP) |
+| Bytes | Bytes processed inbound and outbound | Sum | **Direction (In; Out)**, **Protocol (6 TCP; 17 UDP)** |
+| Packets | Packets processed inbound and outbound | Sum | **Direction (In; Out)**, **Protocol (6 TCP; 17 UDP)** |
| Dropped Packets | Packets dropped by the NAT gateway | Sum | / |
-| SNAT Connection Count | Number of new SNAT connections over a given interval of time | Sum | Connection State (Attempted, Failed), Protocol (6 TCP; 17 UDP) |
-| Total SNAT Connection Count | Total number of active SNAT connections | Sum | Protocol (6 TCP; 17 UDP) |
-| Datapath Availability | Availability of the data path of the NAT gateway. Used to determine whether the NAT gateway endpoints are available for outbound traffic flow. | Avg | Availability (0, 100) |
+| SNAT Connection Count | Number of new SNAT connections over a given interval of time | Sum | **Connection State (Attempted, Failed)**, **Protocol (6 TCP; 17 UDP)** |
+| Total SNAT Connection Count | Total number of active SNAT connections | Sum | **Protocol (6 TCP; 17 UDP)** |
+| Datapath Availability | Availability of the data path of the NAT gateway. Used to determine whether the NAT gateway endpoints are available for outbound traffic flow. | Avg | **Availability (0, 100)** |
>[!NOTE] > Count aggregation is not recommended for any of the NAT gateway metrics. Count aggregation adds up the number of metric values and not the metric values themselves. Use Sum aggregation instead to get the best representation of data values for connection count, bytes, and packets metrics. > > Use average for best represented health data for the datapath availability metric. >
-> See [aggregation types](/azure/azure-monitor/essentials/metrics-aggregation-explained#aggregation-types) for more information.
+> For information about aggregation types, see [aggregation types](/azure/azure-monitor/essentials/metrics-aggregation-explained#aggregation-types).
## Where to find my NAT gateway metrics
To view any one of your metrics for a given NAT gateway resource:
1. Select the NAT gateway resource you would like to monitor.
-2. In the **Metric** drop-down menu, select one of the provided metrics.
+1. In the **Metric** drop-down menu, select one of the provided metrics.
-3. In the **Aggregation** drop-down menu, select the recommended aggregation listed in the [metrics overview](#metrics-overview) table.
+1. In the **Aggregation** drop-down menu, select the recommended aggregation listed in the [metrics overview](#metrics-overview) table.
:::image type="content" source="./media/nat-metrics/nat-metrics-1.png" alt-text="Screenshot of the metrics set up in NAT gateway resource.":::
-4. To adjust the time frame over which the chosen metric is presented on the metrics graph or to adjust how frequently the chosen metric is measured, select the **Time** window in the top right corner of the metrics page and make your adjustments.
+1. To adjust the time frame over which the chosen metric is presented on the metrics graph or to adjust how frequently the chosen metric is measured, select the **Time** window in the top right corner of the metrics page and make your adjustments.
:::image type="content" source="./media/nat-metrics/nat-metrics-2.png" alt-text="Screenshot of the metrics time setup configuration in NAT gateway resource."::: ## How to use NAT gateway metrics
+The following sections detail how to use each NAT gateway metric to monitor, manage, and troubleshoot your NAT gateway resource.
+ ### Bytes The **Bytes** metric shows you the amount of data going outbound through NAT gateway and returning inbound in response to an outbound connection.
To view the amount of data passing through NAT gateway:
1. Select the NAT gateway resource you would like to monitor.
-2. In the **Metric** drop-down menu, select the **Bytes** metric.
+1. In the **Metric** drop-down menu, select the **Bytes** metric.
-3. In the **Aggregation** drop-down menu, select **Sum**.
+1. In the **Aggregation** drop-down menu, select **Sum**.
-4. Select to **Add filter**.
+1. Select to **Add filter**.
-5. In the **Property** drop-down menu, select **Direction (Out | In)**.
+1. In the **Property** drop-down menu, select **Direction (Out | In)**.
-6. In the **Values** drop-down menu, select **Out**, **In**, or both.
+1. In the **Values** drop-down menu, select **Out**, **In**, or both.
-7. To see data processed inbound or outbound as their own individual lines in the metric graph, select **Apply splitting**.
+1. To see data processed inbound or outbound as their own individual lines in the metric graph, select **Apply splitting**.
-8. In the **Values** drop-down menu, select **Direction (Out | In)**.
+1. In the **Values** drop-down menu, select **Direction (Out | In)**.
### Packets
To view the connection state of your connections:
1. Select the NAT gateway resource you would like to monitor.
-2. In the **Metric** drop-down menu, select the **SNAT Connection Count** metric.
+1. In the **Metric** drop-down menu, select the **SNAT Connection Count** metric.
-3. In the **Aggregation** drop-down menu, select **Sum**.
+1. In the **Aggregation** drop-down menu, select **Sum**.
-4. Select to **Add filter**.
+1. Select to **Add filter**.
-5. In the **Property** drop-down menu, select **Connection State**.
+1. In the **Property** drop-down menu, select **Connection State**.
-6. In the **Values** drop-down menu, select **Attempted**, **Failed**, or both.
+1. In the **Values** drop-down menu, select **Attempted**, **Failed**, or both.
-7. To see attempted and failed connections as their own individual lines in the metric graph, select **Apply splitting**.
+1. To see attempted and failed connections as their own individual lines in the metric graph, select **Apply splitting**.
-8. In the **Values** drop-down menu, select **Connection State**.
+1. In the **Values** drop-down menu, select **Connection State**.
:::image type="content" source="./media/nat-metrics/nat-metrics-3.png" alt-text="Screenshot of the metrics configuration.":::
You can use this metric to:
Possible reasons for a drop in data path availability include: -- An infrastructure outage has occurred.
+- An infrastructure outage.
- There aren't healthy VMs available in your NAT gateway configured subnet. For more information, see the [NAT gateway connectivity troubleshooting guide](/azure/nat-gateway/troubleshoot-nat-connectivity).
To set up a datapath availability alert, follow these steps:
1. From the NAT gateway resource page, select **Alerts**.
-2. Select **Create alert rule**.
+1. Select **Create alert rule**.
-3. From the signal list, select **Datapath Availability**.
+1. From the signal list, select **Datapath Availability**.
-4. From the **Operator** drop-down menu, select **Less than**.
+1. From the **Operator** drop-down menu, select **Less than**.
-5. From the **Aggregation type** drop-down menu, select **Average**.
+1. From the **Aggregation type** drop-down menu, select **Average**.
-6. In the **Threshold value** box, enter **90%**.
+1. In the **Threshold value** box, enter **90%**.
-7. From the **Unit** drop-down menu, select **Count**.
+1. From the **Unit** drop-down menu, select **Count**.
-8. From the **Aggregation granularity (Period)** drop-down menu, select **15 minutes**.
+1. From the **Aggregation granularity (Period)** drop-down menu, select **15 minutes**.
-9. Create an **Action** for your alert by providing a name, notification type, and type of action that is performed when the alert is triggered.
+1. Create an **Action** for your alert by providing a name, notification type, and type of action that is performed when the alert is triggered.
-10. Before deploying your action, **test the action group**.
+1. Before deploying your action, **test the action group**.
-11. Select **Create** to create the alert rule.
+1. Select **Create** to create the alert rule.
>[!NOTE] >Aggregation granularity is the period of time over which the datapath availability is measured to determine if it has dropped below the threshold value.
Setting the aggregation granularity to less than 5 minutes may trigger false pos
### Alerts for SNAT port exhaustion
-Set up an alert on the **SNAT connection count** metric to notify you of connection failures on your NAT gateway. A failed connection volume greater than zero can indicate that either you have reached the connection limit on your NAT gateway or that you have hit SNAT port exhaustion. Investigate further to determine the root cause of these failures.
+Set up an alert on the **SNAT connection count** metric to notify you of connection failures on your NAT gateway. A failed connection volume greater than zero can indicate that you reached the connection limit on your NAT gateway or that you hit SNAT port exhaustion. Investigate further to determine the root cause of these failures.
To create the alert, use the following steps: 1. From the NAT gateway resource page, select **Alerts**.
-2. Select **Create alert rule**.
+1. Select **Create alert rule**.
-3. From the signal list, select **SNAT Connection Count**.
+1. From the signal list, select **SNAT Connection Count**.
-4. From the **Aggregation type** drop-down menu, select **Total**.
+1. From the **Aggregation type** drop-down menu, select **Total**.
-5. From the **Operator** drop-down menu, select **Greater than**.
+1. From the **Operator** drop-down menu, select **Greater than**.
-6. From the **Unit** drop-down menu, select **Count**.
+1. From the **Unit** drop-down menu, select **Count**.
-7. In the **Threshold value** box, enter 0.
+1. In the **Threshold value** box, enter 0.
-8. In the Split by dimensions section, select **Connection State** under Dimension name.
+1. In the Split by dimensions section, select **Connection State** under Dimension name.
-9. Under Dimension values, select **Failed** connections.
+1. Under Dimension values, select **Failed** connections.
-8. From the When to evaluate section, select **1 minute** under the **Check every** drop-down menu.
+1. From the When to evaluate section, select **1 minute** under the **Check every** drop-down menu.
-9. For the lookback period, select **5 minutes** from the drop-down menu options.
+1. For the lookback period, select **5 minutes** from the drop-down menu options.
-9. Create an **Action** for your alert by providing a name, notification type, and type of action that is performed when the alert is triggered.
+1. Create an **Action** for your alert by providing a name, notification type, and type of action that is performed when the alert is triggered.
-10. Before deploying your action, **test the action group**.
+1. Before deploying your action, **test the action group**.
-11. Select **Create** to create the alert rule.
+1. Select **Create** to create the alert rule.
>[!NOTE] >SNAT port exhaustion on your NAT gateway resource is uncommon. If you see SNAT port exhaustion, check whether NAT gateway's idle timeout timer is set higher than the default amount of 4 minutes. A long idle timeout timer setting can cause SNAT ports to be in hold down for longer, which results in exhausting SNAT port inventory sooner. You can also scale your NAT gateway with additional public IPs to increase NAT gateway's overall SNAT port inventory. To troubleshoot these kinds of issues, refer to the [NAT gateway connectivity troubleshooting guide](/azure/nat-gateway/troubleshoot-nat-connectivity#snat-exhaustion-due-to-nat-gateway-configuration).
To create the alert, use the following steps:
### Alerts for NAT gateway resource health [Azure Resource Health](/azure/service-health/overview) provides information on the health state of your NAT gateway resource. The resource health of your NAT gateway is evaluated by measuring the datapath availability of your NAT gateway endpoint. You can set up alerts to notify you when the health state of your NAT gateway resource changes. To learn more about NAT gateway resource health and setting up alerts, see: + * [Azure NAT Gateway Resource Health](/azure/nat-gateway/resource-health)+ * [NAT Gateway Resource Health Alerts](/azure/nat-gateway/resource-health#resource-health-alerts)+ * [How to create Resource Health Alerts in the Azure portal](/azure/service-health/resource-health-alert-monitor-guide) ## Network Insights
To view a topological map of your setup in Azure:
1. From your NAT gateway's resource page, select **Insights** from the **Monitoring** section.
-2. On the landing page for **Insights**, there's a topology map of your NAT gateway setup. This map shows the relationship between the different components of your network (subnets, virtual machines, public IP addresses).
+1. On the landing page for **Insights**, there's a topology map of your NAT gateway setup. This map shows the relationship between the different components of your network (subnets, virtual machines, public IP addresses).
-3. Hover over any component in the topology map to view configuration information.
+1. Hover over any component in the topology map to view configuration information.
:::image type="content" source="./media/nat-metrics/nat-insights.png" alt-text="Screenshot of the Insights section of NAT gateway.":::
The metrics dashboard can be used to better understand the performance and healt
For more information on what each metric is showing you and how to analyze these metrics, see [How to use NAT gateway metrics](#how-to-use-nat-gateway-metrics).
-## More NAT gateway metrics guidance
### What type of metrics are available for NAT gateway?
-NAT gateway has [multi-dimensional metrics](/azure/azure-monitor/essentials/data-platform-metrics#multi-dimensional-metrics). Multi-dimensional metrics can be filtered by different dimensions in order to provide greater insight on the data provided. The [SNAT connection count](#snat-connection-count) metric can be filtered by Attempted and Failed connections in order to distinguish between the different types of connections being made by NAT gateway.
+The NAT gateway supports [multi-dimensional metrics](/azure/azure-monitor/essentials/data-platform-metrics#multi-dimensional-metrics). You can filter the multi-dimensional metrics by different dimensions to gain greater insight into the provided data. The [SNAT connection count](#snat-connection-count) metric allows you to filter the connections by Attempted and Failed connections, enabling you to distinguish between different types of connections made by the NAT gateway.
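As a hedged illustration of that filtering outside the portal, the following Azure CLI sketch queries the metric split by connection state. The resource ID is a placeholder, and the programmatic metric and dimension names (`SNATConnectionCount`, `ConnectionState`) are assumptions to verify against your resource's metric definitions.

```azurecli-interactive
# Query SNAT connection counts per minute, split by the ConnectionState dimension.
az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/natGateways/myNATgateway" \
  --metric SNATConnectionCount \
  --aggregation Total \
  --interval PT1M \
  --filter "ConnectionState eq '*'" \
  --output table
```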
Refer to the dimensions column in the [metrics overview](#metrics-overview) table to see which dimensions are available for each NAT gateway metric.
All [platform metrics are stored](/azure/azure-monitor/essentials/data-platform-
> >To retrieve NAT gateway metrics, use the metrics REST API.
-### How interpret metrics charts
+### How to interpret metrics charts
Refer to [troubleshooting metrics charts](/azure/azure-monitor/essentials/metrics-troubleshoot) if you run into issues with creating, customizing or interpreting charts in Azure metrics explorer.
nat-gateway Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/resource-health.md
Title: Azure NAT Gateway Resource Health-
-description: Understand how to use resource health for NAT gateway.
+
+description: Get started learning how to use resource health to provide information about the health of your Azure NAT gateway.
# Customer intent: As an IT administrator, I want to understand how to use resource health to monitor NAT gateway.- Previously updated : 04/25/2022+ Last updated : 01/30/2024 # Azure NAT Gateway Resource Health
The health of your NAT gateway resource is displayed as one of the following sta
| Resource health status | Description |
| --- | --- |
| Available | Your NAT gateway resource is healthy and available. |
-| Degraded | Your NAT gateway resource has platform or user initiated events impacting the health of your NAT gateway. The metric for the data-path availability reports less than 80% but greater than 25% health for the last 15 minutes. You experience moderate to severe performance impact. |
-| Unavailable | Your NAT gateway resource isn't healthy. The metric for the data-path availability reports less than 25% for the past 15 minutes. You experience significant performance impact or unavailability of your NAT gateway resource for outbound connectivity. There may be user or platform events causing unavailability. |
-| Unknown | Health status for your NAT gateway resource hasn't updated or received information for data-path availability for more than 5 minutes. This state should be transient and will reflect the correct status as soon as data is received. |
+| Degraded | Your NAT gateway resource has platform or user initiated events that affect the health of your NAT gateway. The metric for the data-path availability reports less than 80% but greater than 25% health for the last 15 minutes. You experience a moderate to severe effect on performance. |
+| Unavailable | Your NAT gateway resource isn't healthy. The metric for the data-path availability reports less than 25% for the past 15 minutes. You experience a significant effect on performance or unavailability of your NAT gateway resource for outbound connectivity. There might be user or platform events causing unavailability. |
+| Unknown | The health status for your NAT gateway resource hasn't been updated or hasn't received data-path availability information for more than 5 minutes. This state should be transient and reflects the correct status as soon as data is received. |
For more information about Azure Resource Health, see [Resource Health overview](../service-health/resource-health-overview.md).
To view the health of your NAT gateway resource:
1. From the NAT gateway resource page, under **Support + troubleshooting**, select **Resource health**.
-2. In the health history section, select the drop-down arrows next to dates to get more information on health history events of your NAT gateway resource. You can view up to 30 days of history in the health history section.
+1. In the health history section, select the drop-down arrows next to dates to get more information on health history events of your NAT gateway resource. You can view up to 30 days of history in the health history section.
-3. Select the **+ Add resource health alert** at the top of the page to set up an alert for a specific health status of your NAT gateway resource.
+1. Select the **+ Add resource health alert** at the top of the page to set up an alert for a specific health status of your NAT gateway resource.
## Resource health alerts
-Azure Resource Health alerts can notify you in near real-time when the health state of your NAT gateway resource changes. It's recommended that you set resource health alerts to notify you when your NAT gateway resource changes to a **Degraded** or **Unavailable** health state.
+Azure Resource Health alerts can notify you in near real-time when the health state of your NAT gateway resource changes. Set resource health alerts to notify you when your NAT gateway resource changes to a **Degraded** or **Unavailable** health state.
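As an example, the following hedged Azure CLI sketch creates one of these alerts as an activity log alert scoped to a single NAT gateway and filtered to the Resource Health category. The names are placeholders, and the condition string is an assumption based on the `az monitor activity-log alert` command group; validate it before relying on it.

```azurecli-interactive
# Create an action group that receives the notifications (email receiver as an example).
az monitor action-group create \
  --resource-group myResourceGroup \
  --name natgw-health-ag \
  --short-name natgwhealth \
  --action email admin admin@contoso.com

# Create an activity log alert for Resource Health events on the NAT gateway.
az monitor activity-log alert create \
  --resource-group myResourceGroup \
  --name natgw-resource-health-alert \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/natGateways/myNATgateway" \
  --condition "category=ResourceHealth" \
  --action-group natgw-health-ag \
  --description "Notify on NAT gateway resource health changes"
```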
After you create Azure resource health alerts for NAT gateway, Azure sends resource health notifications to your Azure subscription when the health state of NAT gateway changes. You can create and customize alerts based on:

* The subscription affected
* The resource group affected
* The resource type affected (Microsoft.Network/NATGateways)
* The specific resource (any NAT gateway resource you choose to set up an alert for)
* The event status of the NAT gateway resource affected
* The current status of the NAT gateway resource affected
* The previous status of the NAT gateway resource affected
* The reason type of the NAT gateway resource affected

You can also configure who the alert should be sent to:

* A new action group (that can be used for future alerts)
* An existing action group

For more information on how to set up these resource health alerts, see:

* [Resource health alerts using Azure portal](/azure/service-health/resource-health-alert-monitor-guide#create-a-resource-health-alert-rule-in-the-azure-portal)
* [Resource health alerts using Resource Manager templates](/azure/service-health/resource-health-alert-arm-template-guide)

## Next steps
nat-gateway Tutorial Nat Gateway Load Balancer Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-nat-gateway-load-balancer-public-portal.md
Previously updated : 05/24/2022 Last updated : 01/30/2024
network-watcher Network Watcher Alert Triggered Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-alert-triggered-packet-capture.md
Title: Use packet capture to do proactive network monitoring with alerts - Azure Functions
-description: This article describes how to create an alert triggered packet capture with Azure Network Watcher
-
+description: Learn how to create an alert triggered packet capture with Azure Network Watcher and Azure Functions.
+ - Previously updated : 01/09/2023-- Last updated : 01/31/2024+ # Monitor networks proactively with alerts and Azure Functions using Packet Capture
The client ID is the Application ID of an application in the Azure Active Direct
``` > [!NOTE]
- > The password that you use when creating the application should be the same password that you created earlier when saving the key file.
+ > The password that you used when creating the application should be the same password that you created earlier when saving the key file.
1. In the Azure portal, select **Subscriptions**. Select the subscription to use and select **Access control (IAM)**.
Go to an existing virtual machine and [add an alert rule](../azure-monitor/alert
:::image type="content" source="./media/network-watcher-alert-triggered-packet-capture/action-group.png" alt-text="Screenshot of the Create action group screen."::: 7. Select **No** in **Enable the common alert schema** slider and select **OK**. - ## Review the results After the criteria for the alert triggers, a packet capture is created. Go to Network Watcher and select **Packet capture**. On this page, you can select the packet capture file link to download the packet capture.
For instructions on downloading files from Azure storage accounts, see [Get star
After your capture has been downloaded, you can view it using tools like [Microsoft Message Analyzer](/message-analyzer/microsoft-message-analyzer-operating-guide) and [Wireshark](https://www.wireshark.org/) that can read a **.cap** file.
-## Next steps
+## Next step
Learn how to view your packet captures by visiting [Packet capture analysis with Wireshark](network-watcher-deep-packet-inspection.md).
network-watcher Network Watcher Packet Capture Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-cli.md
Title: Manage packet captures in VMs with Azure Network Watcher - Azure CLI
-description: Learn how to manage packet captures in virtual machines with the packet capture feature of Network Watcher using the Azure CLI.
-
+ Title: Manage packet captures for VMs - Azure CLI
+
+description: Learn how to start, stop, download, and delete Azure virtual machines packet captures with the packet capture feature of Network Watcher using the Azure CLI.
+ - Previously updated : 12/09/2021-- Last updated : 01/31/2024+
+#CustomerIntent: As an administrator, I want to capture IP packets to and from a virtual machine (VM) so I can review and analyze the data to help diagnose and solve network problems.
-# Manage packet captures with Azure Network Watcher using the Azure CLI
+# Manage packet captures for virtual machines with Azure Network Watcher using the Azure CLI
-> [!div class="op_single_selector"]
-> - [Azure portal](network-watcher-packet-capture-manage-portal.md)
-> - [PowerShell](network-watcher-packet-capture-manage-powershell.md)
-> - [Azure CLI](network-watcher-packet-capture-manage-cli.md)
+The Network Watcher packet capture tool allows you to create capture sessions to record network traffic to and from an Azure virtual machine (VM). Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps in diagnosing network anomalies both reactively and proactively. Its applications extend beyond anomaly detection to include gathering network statistics, acquiring insights into network intrusions, debugging client-server communication, and addressing various other networking challenges. Network Watcher packet capture enables you to initiate packet captures remotely, alleviating the need for manual execution on a specific virtual machine.
-Network Watcher packet capture allows you to create capture sessions to track traffic to and from a virtual machine. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, to debug client-server communications and much more. By being able to remotely trigger packet captures, this capability eases the burden of running a packet capture manually and on the desired machine, which saves valuable time.
+In this article, you learn how to remotely configure, start, stop, download, and delete a virtual machine packet capture using the Azure CLI. To learn how to manage packet captures using the Azure portal or Azure PowerShell, see [Manage packet captures for virtual machines using the Azure portal](network-watcher-packet-capture-manage-portal.md) or [Manage packet captures for virtual machines using PowerShell](network-watcher-packet-capture-manage-powershell.md).
-To perform the steps in this article, you need to [install the Azure CLI](/cli/azure/install-azure-cli) for Windows, Linux, or macOS.
-This article takes you through the different management tasks that are currently available for packet capture.
+## Prerequisites
-- [**Start a packet capture**](#start-a-packet-capture)-- [**Stop a packet capture**](#stop-a-packet-capture)-- [**Delete a packet capture**](#delete-a-packet-capture)-- [**Download a packet capture**](#download-a-packet-capture)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-## Before you begin
+- Azure Cloud Shell or Azure CLI.
-This article assumes you have the following resources:
+ The steps in this article run the Azure CLI commands interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
-- An instance of Network Watcher in the region you want to create a packet capture-- A virtual machine with the packet capture extension enabled.
+ You can also [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. If you run Azure CLI locally, sign in to Azure using the [az login](/cli/azure/reference-index#az-login) command.
-> [!IMPORTANT]
-> Packet capture requires an agent to be running on the virtual machine. The Agent is installed as an extension. For instructions on VM extensions, visit [Virtual Machine extensions and features](../virtual-machines/extensions/features-windows.md).
+- A virtual machine with the following outbound TCP connectivity:
+ - to the storage account over port 443
+ - to 169.254.169.254 over port 80
+ - to 168.63.129.16 over port 8037
+
+> [!NOTE]
+> - Azure creates a Network Watcher instance in the virtual machine's region if Network Watcher wasn't enabled for that region. For more information, see [Enable or disable Azure Network Watcher](network-watcher-create.md).
+> - Network Watcher packet capture requires Network Watcher agent VM extension to be installed on the target virtual machine. For more information, see [Install Network Watcher agent](#install-network-watcher-agent).
+> - The last two IP addresses and ports listed in the **Prerequisites** are common across all Network Watcher tools that use the Network Watcher agent and might occasionally change.
-## Install VM extension
+If a network security group is associated with the network interface or the subnet that the network interface is in, ensure that rules exist to allow outbound connectivity over the previous ports. Similarly, ensure outbound connectivity over the previous ports when adding user-defined routes to your network.
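As a hedged sketch, the following Azure CLI commands add outbound rules for these ports to an existing network security group; the NSG name, rule names, and priorities are placeholders to adapt to your environment.

```azurecli-interactive
# Allow outbound HTTPS to Azure Storage (using the Storage service tag).
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNSG \
  --name AllowStorageOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes Storage \
  --destination-port-ranges 443

# Allow outbound access to the platform endpoints used by the Network Watcher agent.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNSG \
  --name AllowNetworkWatcherAgentOutbound \
  --priority 210 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes 169.254.169.254 168.63.129.16 \
  --destination-port-ranges 80 8037
```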
+
+## Install Network Watcher agent
### Step 1
If a storage account is specified, packet capture files are saved to a storage a
https://{storageAccountName}.blob.core.windows.net/network-watcher-logs/subscriptions/{subscriptionId}/resourcegroups/{storageAccountResourceGroup}/providers/microsoft.compute/virtualmachines/{VMName}/{year}/{month}/{day}/packetCapture_{creationTime}.cap ```
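To retrieve a finished capture file from that location with the Azure CLI, a hedged sketch like the following can be used; the storage account name and blob path are placeholders that you replace with the values for your own capture.

```azurecli-interactive
# Download a capture file from the network-watcher-logs container (placeholder path).
az storage blob download \
  --account-name mystorageaccount \
  --container-name network-watcher-logs \
  --name "subscriptions/<subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.compute/virtualmachines/myvm/2024/01/25/packetCapture_22_44_54_342.cap" \
  --file ./myVM_1.cap \
  --auth-mode login
```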
-## Next steps
-
-Learn how to automate packet captures with Virtual machine alerts by viewing [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md)
-
-Find if certain traffic is allowed in or out of your VM by visiting [Check IP flow verify](diagnose-vm-network-traffic-filtering-problem.md)
+## Related content
-<!-- Image references -->
+- To learn how to automate packet captures with virtual machine alerts, see [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md).
+- To determine whether specific traffic is allowed in or out of a virtual machine, see [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md).
network-watcher Network Watcher Packet Capture Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-powershell.md
Title: Manage packet captures in VMs with Azure Network Watcher - Azure PowerShell
-description: Learn how to manage packet captures in virtual machines with the packet capture feature of Network Watcher using PowerShell.
-
+ Title: Manage packet captures for VMs - Azure PowerShell
+
+description: Learn how to start, stop, download, and delete Azure virtual machines packet captures with the packet capture feature of Network Watcher using Azure PowerShell.
+ - Previously updated : 02/01/2021-- Last updated : 01/31/2024+
+#CustomerIntent: As an administrator, I want to capture IP packets to and from a virtual machine (VM) so I can review and analyze the data to help diagnose and solve network problems.
-# Manage packet captures with Azure Network Watcher using PowerShell
+# Manage packet captures for virtual machines with Azure Network Watcher using PowerShell
-> [!div class="op_single_selector"]
-> - [Azure portal](network-watcher-packet-capture-manage-portal.md)
-> - [PowerShell](network-watcher-packet-capture-manage-powershell.md)
-> - [Azure CLI](network-watcher-packet-capture-manage-cli.md)
+The Network Watcher packet capture tool allows you to create capture sessions to record network traffic to and from an Azure virtual machine (VM). Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps in diagnosing network anomalies both reactively and proactively. Its applications extend beyond anomaly detection to include gathering network statistics, acquiring insights into network intrusions, debugging client-server communication, and addressing various other networking challenges. Network Watcher packet capture enables you to initiate packet captures remotely, alleviating the need for manual execution on a specific virtual machine.
-Network Watcher packet capture allows you to create capture sessions to track traffic to and from a virtual machine. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, to debug client-server communications and much more. By being able to remotely trigger packet captures, this capability eases the burden of running a packet capture manually and on the desired machine, which saves valuable time.
+In this article, you learn how to remotely configure, start, stop, download, and delete a virtual machine packet capture using Azure PowerShell. To learn how to manage packet captures using the Azure portal or Azure CLI, see [Manage packet captures for virtual machines using the Azure portal](network-watcher-packet-capture-manage-portal.md) or [Manage packet captures for virtual machines using the Azure CLI](network-watcher-packet-capture-manage-cli.md).
-This article takes you through the different management tasks that are currently available for packet capture.
+## Prerequisites
-- [**Start a packet capture**](#start-a-packet-capture)-- [**Stop a packet capture**](#stop-a-packet-capture)-- [**Delete a packet capture**](#delete-a-packet-capture)-- [**Download a packet capture**](#download-a-packet-capture)
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure Cloud Shell or Azure PowerShell.
+ The steps in this article run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
-## Before you begin
+ You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. This article requires the Azure PowerShell `Az` module. To find the installed version, run `Get-Module -ListAvailable Az`. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
-This article assumes you have the following resources:
+- A virtual machine with the following outbound TCP connectivity:
+ - to the storage account over port 443
+ - to 169.254.169.254 over port 80
+ - to 168.63.129.16 over port 8037
-* An instance of Network Watcher in the region you want to create a packet capture
+> [!NOTE]
+> - Azure creates a Network Watcher instance in the virtual machine's region if Network Watcher wasn't enabled for that region. For more information, see [Enable or disable Azure Network Watcher](network-watcher-create.md).
+> - Network Watcher packet capture requires Network Watcher agent VM extension to be installed on the target virtual machine. For more information, see [Install Network Watcher agent](#install-network-watcher-agent).
+> - The last two IP addresses and ports listed in the **Prerequisites** are common across all Network Watcher tools that use the Network Watcher agent and might occasionally change.
-* A virtual machine with the packet capture extension enabled.
+If a network security group is associated with the network interface or the subnet that the network interface is in, ensure that rules exist to allow outbound connectivity over the previous ports. Similarly, ensure outbound connectivity over the previous ports when adding user-defined routes to your network.
-> [!IMPORTANT]
-> Packet capture requires a virtual machine extension `AzureNetworkWatcherExtension`. For installing the extension on a Windows VM visit [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md) and for Linux VM visit [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md).
+## Install Network Watcher agent
-## Install VM extension
+To use packet capture, the Network Watcher agent virtual machine extension must be installed on the virtual machine.
-### Step 1
+Use [Get-AzVMExtension](/powershell/module/az.compute/get-azvmextension) cmdlet to check if the extension is installed on the virtual machine:
-```powershell
-$VM = Get-AzVM -ResourceGroupName testrg -Name VM1
+```azurepowershell-interactive
+# List the installed extensions on the virtual machine.
+Get-AzVMExtension -VMName 'myVM' -ResourceGroupName 'myResourceGroup' | format-table Name, Publisher, ExtensionType, EnableAutomaticUpgrade
```
-### Step 2
-
-The following example retrieves the extension information needed to run the `Set-AzVMExtension` cmdlet. This cmdlet installs the packet capture agent on the guest virtual machine.
+If the extension is installed on the virtual machine, then you can see it listed in the output of the preceding command:
-> [!NOTE]
-> The `Set-AzVMExtension` cmdlet may take several minutes to complete.
+```output
+Name Publisher ExtensionType EnableAutomaticUpgrade
+---- --------- ------------- ----------------------
+AzureNetworkWatcherExtension Microsoft.Azure.NetworkWatcher NetworkWatcherAgentLinux True
+```
-For Windows virtual machines:
+If the extension isn't installed, then use [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) cmdlet to install it:
-```powershell
-$AzureNetworkWatcherExtension = Get-AzVMExtensionImage -Location WestCentralUS -PublisherName Microsoft.Azure.NetworkWatcher -Type NetworkWatcherAgentWindows -Version 1.4.585.2
-$ExtensionName = "AzureNetworkWatcherExtension"
-Set-AzVMExtension -ResourceGroupName $VM.ResourceGroupName -Location $VM.Location -VMName $VM.Name -Name $ExtensionName -Publisher $AzureNetworkWatcherExtension.PublisherName -ExtensionType $AzureNetworkWatcherExtension.Type -TypeHandlerVersion $AzureNetworkWatcherExtension.Version.Substring(0,3)
+```azurepowershell-interactive
+# Install Network Watcher agent on a Linux virtual machine.
+Set-AzVMExtension -Publisher 'Microsoft.Azure.NetworkWatcher' -ExtensionType 'NetworkWatcherAgentLinux' -Name 'AzureNetworkWatcherExtension' -VMName 'myVM' -ResourceGroupName 'myResourceGroup' -TypeHandlerVersion '1.4' -EnableAutomaticUpgrade 1
```
-For Linux virtual machines:
-
-```powershell
-$AzureNetworkWatcherExtension = Get-AzVMExtensionImage -Location WestCentralUS -PublisherName Microsoft.Azure.NetworkWatcher -Type NetworkWatcherAgentLinux -Version 1.4.13.0
-$ExtensionName = "AzureNetworkWatcherExtension"
-Set-AzVMExtension -ResourceGroupName $VM.ResourceGroupName -Location $VM.Location -VMName $VM.Name -Name $ExtensionName -Publisher $AzureNetworkWatcherExtension.PublisherName -ExtensionType $AzureNetworkWatcherExtension.Type -TypeHandlerVersion $AzureNetworkWatcherExtension.Version.Substring(0,3)
+```azurepowershell-interactive
+# Install Network Watcher agent on a Windows virtual machine.
+Set-AzVMExtension -Publisher 'Microsoft.Azure.NetworkWatcher' -ExtensionType 'NetworkWatcherAgentWindows' -Name 'AzureNetworkWatcherExtension' -VMName 'myVM' -ResourceGroupName 'myResourceGroup' -TypeHandlerVersion '1.4' -EnableAutomaticUpgrade 1
```
-The following example is a successful response after running the `Set-AzVMExtension` cmdlet.
+After a successful installation of the extension, you see the following output:
-```
+```output
RequestId IsSuccessStatusCode StatusCode ReasonPhrase - -
- True OK OK
+ True OK
```
-### Step 3
+## Start a packet capture
-To ensure that the agent is installed, run the `Get-AzVMExtension` cmdlet and pass it the virtual machine name and the extension name.
+To start a capture session, use [New-AzNetworkWatcherPacketCapture](/powershell/module/az.network/new-aznetworkwatcherpacketcapture) cmdlet:
-```powershell
-Get-AzVMExtension -ResourceGroupName $VM.ResourceGroupName -VMName $VM.Name -Name $ExtensionName
-```
+```azurepowershell-interactive
+# Place the virtual machine configuration into a variable.
+$vm = Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM'
-The following sample is an example of the response from running `Get-AzVMExtension`
+# Place the storage account configuration into a variable.
+$storageAccount = Get-AzStorageAccount -ResourceGroupName 'myResourceGroup' -Name 'mystorageaccount'
+# Start the Network Watcher capture session.
+New-AzNetworkWatcherPacketCapture -Location 'eastus' -PacketCaptureName 'myVM_1' -TargetVirtualMachineId $vm.Id -StorageAccountId $storageAccount.Id
```
-ResourceGroupName : testrg
-VMName : testvm1
-Name : AzureNetworkWatcherExtension
-Location : westcentralus
-Etag : null
-Publisher : Microsoft.Azure.NetworkWatcher
-ExtensionType : NetworkWatcherAgentWindows
-TypeHandlerVersion : 1.4
-Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Compute/virtualMachines/testvm1/
- extensions/AzureNetworkWatcherExtension
-PublicSettings :
-ProtectedSettings :
-ProvisioningState : Succeeded
-Statuses :
-SubStatuses :
-AutoUpgradeMinorVersion : True
-ForceUpdateTag :
-```
-
-## Start a packet capture
-
-Once the preceding steps are complete, the packet capture agent is installed on the virtual machine.
-### Step 1
+Once the capture session is started, you see the following output:
-The next step is to retrieve the Network Watcher instance. This variable is passed to the `New-AzNetworkWatcherPacketCapture` cmdlet in step 4.
-
-```powershell
-$networkWatcher = Get-AzNetworkWatcher | Where {$_.Location -eq "westcentralus" }
+```output
+ProvisioningState Name BytesToCapturePerPacket TotalBytesPerSession TimeLimitInSeconds
+----------------- ------ ----------------------- -------------------- ------------------
+Succeeded myVM_1 0 1073741824 18000
```
-### Step 2
-
-Retrieve a storage account. This storage account is used to store the packet capture file.
+The following table describes the optional parameters that you can use with the `New-AzNetworkWatcherPacketCapture` cmdlet:
-```powershell
-$storageAccount = Get-AzStorageAccount -ResourceGroupName testrg -Name testrgsa123
-```
+| Parameter | Description |
+| --- | --- |
+| `-Filter` | Add filter(s) to capture only the traffic you want. For example, you can capture only TCP traffic from a specific IP address to a specific port. |
+| `-TimeLimitInSeconds` | Set the maximum duration of the capture session. The default value is 18000 seconds (5 hours). |
+| `-BytesToCapturePerPacket` | Set the maximum number of bytes to capture per packet. All bytes are captured if this parameter isn't used or is set to 0. |
+| `-TotalBytesPerSession` | Set the total number of bytes to capture. Once the value is reached, the packet capture stops. Up to 1 GB (1,073,741,824 bytes) is captured if this parameter isn't used. |
+| `-LocalFilePath` | Enter a valid local file path if you want the capture to be saved on the target virtual machine (for example, C:\Capture\myVM_1.cap). If you're using a Linux machine, the path must start with /var/captures. |
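For comparison, roughly equivalent options are available when starting a capture from the Azure CLI. The following hedged sketch uses the `az network watcher packet-capture create` command; the flag names and the filter JSON keys are assumptions based on that command group, and all resource names are placeholders.

```azurecli-interactive
# Start a one-hour capture of TCP traffic from 10.0.0.3 to remote ports 80 and 443.
az network watcher packet-capture create \
  --resource-group myResourceGroup \
  --vm myVM \
  --name myVM_1 \
  --storage-account mystorageaccount \
  --time-limit 3600 \
  --capture-size 0 \
  --capture-limit 1073741824 \
  --filters '[{"protocol":"TCP","localIPAddress":"10.0.0.3","remotePort":"80;443"}]'
```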
-### Step 3
+## Stop a packet capture
-Filters can be used to limit the data that is stored by the packet capture. The following example sets up two filters. One filter collects outgoing TCP traffic only from local IP 10.0.0.3 to destination ports 20, 80 and 443. The second filter collects only UDP traffic.
+Use [Stop-AzNetworkWatcherPacketCapture](/powershell/module/az.network/stop-aznetworkwatcherpacketcapture) cmdlet to manually stop a running packet capture session.
-```powershell
-$filter1 = New-AzPacketCaptureFilterConfig -Protocol TCP -RemoteIPAddress "1.1.1.1-255.255.255.255" -LocalIPAddress "10.0.0.3" -LocalPort "1-65535" -RemotePort "20;80;443"
-$filter2 = New-AzPacketCaptureFilterConfig -Protocol UDP
+```azurepowershell-interactive
+# Manually stop a packet capture session.
+Stop-AzNetworkWatcherPacketCapture -Location 'eastus' -PacketCaptureName 'myVM_1'
``` > [!NOTE]
-> Multiple filters can be defined for a packet capture.
+> The cmdlet doesn't return a response, whether it's run on a currently running capture session or on a session that has already stopped.
-### Step 4
+## Get a packet capture
-Run the `New-AzNetworkWatcherPacketCapture` cmdlet to start the packet capture process, passing the required values retrieved in the preceding steps.
-```powershell
+Use [Get-AzNetworkWatcherPacketCapture](/powershell/module/az.network/get-aznetworkwatcherpacketcapture) cmdlet to retrieve the status of a packet capture (running or completed).
-New-AzNetworkWatcherPacketCapture -NetworkWatcher $networkWatcher -TargetVirtualMachineId $vm.Id -PacketCaptureName "PacketCaptureTest" -StorageAccountId $storageAccount.id -TimeLimitInSeconds 60 -Filter $filter1, $filter2
+```azurepowershell-interactive
+# Get information, properties, and status of a packet capture.
+Get-AzNetworkWatcherPacketCapture -Location 'eastus' -PacketCaptureName 'myVM_1'
```
-The following example is the expected output from running the `New-AzNetworkWatcherPacketCapture` cmdlet.
+The following example shows the output from the `Get-AzNetworkWatcherPacketCapture` cmdlet after the capture is complete. When the capture finishes, the **PacketCaptureStatus** value is **Stopped**, with a **StopReason** of **TimeExceeded**, which indicates that the packet capture was successful and ran for its full duration.
+```output
+ProvisioningState Name Target BytesToCapturePerPacket TotalBytesPerSession TimeLimitInSeconds
+----------------- ---- ------ ----------------------- -------------------- ------------------
+Succeeded myVM_1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM 0 1073741824 18000
```
-Name : PacketCaptureTest
-Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatcher
- s/NetworkWatcher_westcentralus/packetCaptures/PacketCaptureTest
-Etag : W/"3bf27278-8251-4651-9546-c7f369855e4e"
-ProvisioningState : Succeeded
-Target : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Compute/virtualMachines/testvm1
-BytesToCapturePerPacket : 0
-TotalBytesPerSession : 1073741824
-TimeLimitInSeconds : 60
-StorageLocation : {
- "StorageId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Storage/storageA
- ccounts/examplestorage",
- "StoragePath": "https://examplestorage.blob.core.windows.net/network-watcher-logs/subscriptions/00000000-0000-0000-0000-00000
- 0000000/resourcegroups/testrg/providers/microsoft.compute/virtualmachines/testvm1/2017/02/01/packetcapture_22_42_48_238.cap"
- }
-Filters : [
- {
- "Protocol": "TCP",
- "RemoteIPAddress": "1.1.1.1-255.255.255",
- "LocalIPAddress": "10.0.0.3",
- "LocalPort": "1-65535",
- "RemotePort": "20;80;443"
- },
- {
- "Protocol": "UDP",
- "RemoteIPAddress": "",
- "LocalIPAddress": "",
- "LocalPort": "",
- "RemotePort": ""
- }
- ]
+> [!NOTE]
+> To get more details in the output, add `| Format-List` to the end of the command.
-```
-
-## Get a packet capture
+## Download a packet capture
-Running the `Get-AzNetworkWatcherPacketCapture` cmdlet, retrieves the status of a currently running, or completed packet capture.
+After your packet capture session concludes, the resulting capture file is saved to Azure storage, to a local file on the target virtual machine, or to both. The storage destination for the packet capture is specified during its creation. For more information, see [Start a packet capture](#start-a-packet-capture).
-```powershell
-Get-AzNetworkWatcherPacketCapture -NetworkWatcher $networkWatcher -PacketCaptureName "PacketCaptureTest"
-```
+If a storage account is specified, capture files are saved to the storage account at the following path:
-The following example is the output from the `Get-AzNetworkWatcherPacketCapture` cmdlet. The following example is after the capture is complete. The PacketCaptureStatus value is Stopped, with a StopReason of TimeExceeded. This value shows that the packet capture was successful and ran its time.
-```
-Name : PacketCaptureTest
-Id : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatcher
- s/NetworkWatcher_westcentralus/packetCaptures/PacketCaptureTest
-Etag : W/"4b9a81ed-dc63-472e-869e-96d7166ccb9b"
-ProvisioningState : Succeeded
-Target : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Compute/virtualMachines/testvm1
-BytesToCapturePerPacket : 0
-TotalBytesPerSession : 1073741824
-TimeLimitInSeconds : 60
-StorageLocation : {
- "StorageId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/Microsoft.Storage/storageA
- ccounts/examplestorage",
- "StoragePath": "https://examplestorage.blob.core.windows.net/network-watcher-logs/subscriptions/00000000-0000-0000-0000-00000
- 0000000/resourcegroups/testrg/providers/microsoft.compute/virtualmachines/testvm1/2017/02/01/packetcapture_22_42_48_238.cap"
- }
-Filters : [
- {
- "Protocol": "TCP",
- "RemoteIPAddress": "1.1.1.1-255.255.255",
- "LocalIPAddress": "10.0.0.3",
- "LocalPort": "1-65535",
- "RemotePort": "20;80;443"
- },
- {
- "Protocol": "UDP",
- "RemoteIPAddress": "",
- "LocalIPAddress": "",
- "LocalPort": "",
- "RemotePort": ""
- }
- ]
-CaptureStartTime : 2/1/2017 10:43:01 PM
-PacketCaptureStatus : Stopped
-StopReason : TimeExceeded
-PacketCaptureError : []
+```url
+https://{storageAccountName}.blob.core.windows.net/network-watcher-logs/subscriptions/{subscriptionId}/resourcegroups/{storageAccountResourceGroup}/providers/microsoft.compute/virtualmachines/{virtualMachineName}/{year}/{month}/{day}/packetcapture_{UTCcreationTime}.cap
```
-## Stop a packet capture
+To download a packet capture file saved to Azure storage, use [Get-AzStorageBlobContent](/powershell/module/az.storage/get-azstorageblobcontent) cmdlet:
-By running the `Stop-AzNetworkWatcherPacketCapture` cmdlet, if a capture session is in progress it is stopped.
-
-```powershell
-Stop-AzNetworkWatcherPacketCapture -NetworkWatcher $networkWatcher -PacketCaptureName "PacketCaptureTest"
+```azurepowershell-interactive
+# Download the packet capture file from Azure storage container.
+Get-AzStorageBlobContent -Container 'network-watcher-logs' -Blob 'subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/myresourcegroup/providers/microsoft.compute/virtualmachines/myvm/2024/01/25/packetcapture_22_44_54_342.cap' -Destination 'C:\Capture\myVM_1.cap'
``` > [!NOTE]
-> The cmdlet returns no response when ran on a currently running capture session or an existing session that has already stopped.
+> You can also download the capture file from the storage account container using the Azure Storage Explorer. Storage Explorer is a standalone app that you can conveniently use to access and work with Azure Storage data. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
## Delete a packet capture
-```powershell
-Remove-AzNetworkWatcherPacketCapture -NetworkWatcher $networkWatcher -PacketCaptureName "PacketCaptureTest"
-```
-
-> [!NOTE]
-> Deleting a packet capture does not delete the file in the storage account.
-
-## Download a packet capture
-
-Once your packet capture session has completed, the capture file can be uploaded to blob storage or to a local file on the VM. The storage location of the packet capture is defined at creation of the session. A convenient tool to access these capture files saved to a storage account is Microsoft Azure Storage Explorer, which can be downloaded here: https://storageexplorer.com/
-
-If a storage account is specified, packet capture files are saved to a storage account at the following location:
-
-```
-https://{storageAccountName}.blob.core.windows.net/network-watcher-logs/subscriptions/{subscriptionId}/resourcegroups/{storageAccountResourceGroup}/providers/microsoft.compute/virtualmachines/{VMName}/{year}/{month}/{day}/packetCapture_{creationTime}.cap
+```azurepowershell-interactive
+# Remove a packet capture resource.
+Remove-AzNetworkWatcherPacketCapture -Location 'eastus' -PacketCaptureName 'myVM_1'
```
-## Next steps
-
-Learn how to automate packet captures with Virtual machine alerts by viewing [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md)
+> [!IMPORTANT]
+> Deleting a packet capture in Network Watcher doesn't delete the capture file from the storage account or the virtual machine. If you don't need the capture file anymore, you must manually delete it from the storage account to avoid incurring storage costs.
-Find if certain traffic is allowed in or out of your VM by visiting [Check IP flow verify](diagnose-vm-network-traffic-filtering-problem.md)
+## Related content
-<!-- Image references -->
+- To learn how to automate packet captures with virtual machine alerts, see [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md).
+- To determine whether specific traffic is allowed in or out of a virtual machine, see [Diagnose a virtual machine network traffic filter problem](diagnose-vm-network-traffic-filtering-problem.md).
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md
description: Shows you how to quickly stand up IBM WebSphere Liberty and Open Li
Previously updated : 06/24/2023 Last updated : 01/31/2024
This article shows you how to quickly stand up IBM WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift (ARO) using the Azure portal.
-This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to ARO. The offer automatically provisions a number of resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the Liberty Operator, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aro). If you prefer manual step-by-step guidance for running Liberty on ARO that doesn't utilize the automation enabled by the offer, see [Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift cluster](/azure/developer/java/ee/liberty-on-aro).
+This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to ARO. The offer automatically provisions several resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the Liberty Operator, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aro). If you prefer manual step-by-step guidance for running Liberty on ARO that doesn't utilize the automation enabled by the offer, see [Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift cluster](/azure/developer/java/ee/liberty-on-aro).
This article is intended to help you quickly get to deployment. Before going to production, you should explore [Tuning Liberty](https://www.ibm.com/docs/was-liberty/base?topic=tuning-liberty).
-## Prerequisites
--- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]--- A Red Hat account with complete profile. If you don't have one, you can sign up for a free developer subscription through the [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register). -- Use [Azure Cloud Shell](/azure/cloud-shell/quickstart) using the Bash environment; make sure the Azure CLI version is 2.43.0 or higher.
- [![Image of button to launch Cloud Shell in a new window.](../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
-
- > [!NOTE]
- > You can also execute this guidance from a local developer command line with the Azure CLI installed. To learn how to install the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
--- Ensure the Azure identity you use to sign in has either the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role and the [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) role or the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)
+## Prerequisites
-- Azure Red Hat OpenShift requires a minimum of 40 cores to create and run an OpenShift cluster. Ensure your subscription has sufficient quota.
+- A local machine with a Unix-like operating system installed (for example, Ubuntu, Azure Linux, or macOS, Windows Subsystem for Linux).
+- A Java SE implementation, version 17 or later (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
+- [Maven](https://maven.apache.org/download.cgi) version 3.5.0 or higher.
+- [Docker](https://docs.docker.com/get-docker/) for your OS.
+- [Azure CLI](/cli/azure/install-azure-cli) version 2.31.0 or higher.
+- The Azure identity you use to sign in has either the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role and the [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) role or the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview)
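Before continuing, you can optionally confirm from a terminal that the local tooling meets these minimums. This is a minimal sketch; output formats vary by platform and tool version.

```bash
# Check local tool versions against the prerequisites.
java -version     # expect 17 or later
mvn -v            # expect 3.5.0 or higher
docker --version
az version        # expect Azure CLI 2.31.0 or higher
```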
## Get a Red Hat pull secret

The Azure Marketplace offer you're going to use in this article requires a Red Hat pull secret. This section shows you how to get a Red Hat pull secret for Azure Red Hat OpenShift. To learn about what a Red Hat pull secret is and why you need it, see the [Get a Red Hat pull secret](/azure/openshift/tutorial-create-cluster?WT.mc_id=Portal-fx#get-a-red-hat-pull-secret-optional) section of [Tutorial: Create an Azure Red Hat OpenShift 4 cluster](/azure/openshift/tutorial-create-cluster?WT.mc_id=Portal-fx). To get the pull secret for use, follow the steps in this section.
-Use your Red Hat account to sign in to the OpenShift cluster manager portal, by visiting the [Red Hat OpenShift Hybrid Cloud Console](https://console.redhat.com/openshift/install/azure/aro-provisioned). You may need to accept more terms and update your account as shown in the following screenshot. Use the same password as when you created the account.
+Use your Red Hat account to sign in to the OpenShift cluster manager portal, by visiting the [Red Hat OpenShift Hybrid Cloud Console](https://console.redhat.com/openshift/install/azure/aro-provisioned). You might need to accept more terms and update your account as shown in the following screenshot. Use the same password as when you created the account.
:::image type="content" source="media/howto-deploy-java-liberty-app/red-hat-account-complete-profile.png" alt-text="Screenshot of Red Hat Update Your Account page." lightbox="media/howto-deploy-java-liberty-app/red-hat-account-complete-profile.png":::
Save the secret to a file so you can use it later.
## Create a Microsoft Entra service principal from the Azure portal
-The Azure Marketplace offer you're going to use in this article requires a Microsoft Entra service principal to deploy your Azure Red Hat OpenShift cluster. The offer assigns the service principal with proper privileges during deployment time, with no role assignment needed. If you have a service principal ready to use, skip this section and move on to the next section, where you'll deploy the offer.
+The Azure Marketplace offer you're going to use in this article requires a Microsoft Entra service principal to deploy your Azure Red Hat OpenShift cluster. The offer assigns the service principal with proper privileges during deployment time, with no role assignment needed. If you have a service principal ready to use, skip this section and move on to the next section, where you deploy the offer.
Use the following steps to deploy a service principal and get its Application (client) ID and secret from the Azure portal. For more information, see [Create and use a service principal to deploy an Azure Red Hat OpenShift cluster](/azure/openshift/howto-create-service-principal?pivots=aro-azureportal).
Use the following steps to deploy a service principal and get its Application (c
:::image type="content" source="media/howto-deploy-java-liberty-app/azure-portal-create-service-principal.png" alt-text="Screenshot of Azure portal showing the Register an application page." lightbox="media/howto-deploy-java-liberty-app/azure-portal-create-service-principal.png":::
-1. Save the Application (client) ID from the overview page, as shown in the following screenshot. Hover the pointer over the value (redacted in the screenshot) and select the copy icon that appears. The tooltip will say **Copy to clipboard**. Be careful to copy the correct value, since the other values in that section also have copy icons. Save the Application ID to a file so you can use it later.
+1. Save the Application (client) ID from the overview page, as shown in the following screenshot. Hover the pointer over the value (redacted in the screenshot) and select the copy icon that appears. The tooltip says **Copy to clipboard**. Be careful to copy the correct value, since the other values in that section also have copy icons. Save the Application ID to a file so you can use it later.
:::image type="content" source="media/howto-deploy-java-liberty-app/azure-portal-obtain-service-principal-client-id.png" alt-text="Screenshot of Azure portal showing service principal client ID." lightbox="media/howto-deploy-java-liberty-app/azure-portal-obtain-service-principal-client-id.png":::
Use the following steps to deploy a service principal and get its Application (c
1. Select **Certificates & secrets**. 1. Select **Client secrets**, then **New client secret**. 1. Provide a description of the secret and a duration. When you're done, select **Add**.
- 1. After the client secret is added, the value of the client secret is displayed. Copy this value because you won't be able to retrieve it later.
+ 1. After the client secret is added, the value of the client secret is displayed. Copy this value because you can't retrieve it later.
-You've now created your Microsoft Entra application, service principal, and client secret.
+You now have a Microsoft Entra application, service principal, and client secret.
## Deploy IBM WebSphere Liberty or Open Liberty on Azure Red Hat OpenShift
The following steps show you how to fill out the **Operator and application** pa
1. Leave the default option of **No** for **Deploy an application?**. > [!NOTE]
- > This quickstart doesn't deploy an application, but you can select **Yes** for **Deploy an application?** if you prefer.
+ > This quickstart manually deploys a sample application later, but you can select **Yes** for **Deploy an application?** if you prefer.
1. Select **Review + create**. Ensure that the green **Validation Passed** message appears at the top. If the message doesn't appear, fix any validation problems and then select **Review + create** again.
The following steps show you how to fill out the **Operator and application** pa
1. Track the progress of the deployment on the **Deployment is in progress** page.
-Depending on network conditions and other activity in your selected region, the deployment may take up to 40 minutes to complete.
+Depending on network conditions and other activity in your selected region, the deployment might take up to 40 minutes to complete.
## Verify the functionality of the deployment
-The steps in this section show you how to verify that the deployment has successfully completed.
+The steps in this section show you how to verify that the deployment completed successfully.
-If you navigated away from the **Deployment is in progress** page, the following steps will show you how to get back to that page. If you're still on the page that shows **Your deployment is complete**, you can skip to step 5.
+If you navigated away from the **Deployment is in progress** page, the following steps show you how to get back to that page. If you're still on the page that shows **Your deployment is complete**, you can skip to step 5.
-1. In the upper left corner of any portal page, select the hamburger menu and then select **Resource groups**.
+1. In the corner of any portal page, select the hamburger menu and then select **Resource groups**.
1. In the box with the text **Filter for any field**, enter the first few characters of the resource group you created previously. If you followed the recommended convention, enter your initials, then select the appropriate resource group.
-1. In the navigation pane, in the **Settings** section, select **Deployments**. You'll see an ordered list of the deployments to this resource group, with the most recent one first.
+1. In the navigation pane, in the **Settings** section, select **Deployments**. You see an ordered list of the deployments to this resource group, with the most recent one first.
1. Scroll to the oldest entry in this list. This entry corresponds to the deployment you started in the preceding section. Select the oldest deployment, as shown in the following screenshot.
If you navigated away from the **Deployment is in progress** page, the following
1. In the navigation pane, select **Outputs**. This list shows the output values from the deployment, which includes some useful information.
-1. Open Azure Cloud Shell and paste the value from the **cmdToGetKubeadminCredentials** field. You'll see the admin account and credential for logging in to the OpenShift cluster console portal. The following content is an example of an admin account.
+1. Open your terminal and paste the value from the **cmdToGetKubeadminCredentials** field. You see the admin account and credential for logging in to the OpenShift cluster console portal. The following content is an example of an admin account.
- ```azurecli-interactive
+ ```bash
az aro list-credentials --resource-group abc1228rg --name clusterf9e8b9 { "kubeadminPassword": "xxxxx-xxxxx-xxxxx-xxxxx",
If you navigated away from the **Deployment is in progress** page, the following
} ```
-1. Paste the value from the **clusterConsoleUrl** field into an Internet-connected web browser, and then press <kbd>Enter</kbd>. Fill in the admin user name and password, which you can find in the list of installed IBM WebSphere Liberty operators, as shown in the following screenshot.
+1. Paste the value from the **clusterConsoleUrl** field into an Internet-connected web browser, and then press <kbd>Enter</kbd>. Fill in the admin user name and password and sign in.
+
+1. Verify that the appropriate Kubernetes operator for Liberty is installed. In the navigation pane, select **Operators**, then **Installed Operators**, as shown in the following screenshot:
:::image type="content" source="media/howto-deploy-java-liberty-app/red-hat-openshift-cluster-console-portal.png" alt-text="Screenshot of Red Hat OpenShift cluster console portal showing Installed Operators page." lightbox="media/howto-deploy-java-liberty-app/red-hat-openshift-cluster-console-portal.png":::
-You can use the output commands to create an application or manage the cluster.
+ Take note of whether you installed the WebSphere Liberty operator or the Open Liberty operator. The operator variant matches what you selected at deployment time. If you selected **IBM Supported**, you have the WebSphere Liberty operator. Otherwise, you have the Open Liberty operator. This information is important to know in later steps.
+
+1. Download and install the OpenShift CLI `oc` by following the steps in [Install the OpenShift CLI](tutorial-connect-cluster.md#install-the-openshift-cli), and then return to this article.
+
+1. Switch to the **Outputs** pane, copy the value from the **cmdToLoginWithKubeadmin** field, and then paste it into your terminal. Run the command to sign in to the OpenShift cluster's API server. You should see output similar to the following example in the console:
+
+ ```output
+ Login successful.
+
+ You have access to 71 projects, the list has been suppressed. You can list all projects with 'oc projects'
+
+ Using project "default".
+ ```
+
+## Create an Azure SQL Database
+
+The following steps guide you through creating an Azure SQL Database single database for use with your app:
+
+1. Create a single database in Azure SQL Database by following the steps in [Quickstart: Create an Azure SQL Database single database](/azure/azure-sql/database/single-database-create-quickstart), carefully noting the differences described in the following note. Return to this article after creating and configuring the database server.
+
+ > [!NOTE]
+ > At the **Basics** step, write down the values for **Resource group**, **Database name**, **_\<server-name>_.database.windows.net**, **Server admin login**, and **Password**. The database **Resource group** is referred to as `<db-resource-group>` later in this article.
+ >
+ > At the **Networking** step, set **Connectivity method** to **Public endpoint**, **Allow Azure services and resources to access this server** to **Yes**, and **Add current client IP address** to **Yes**.
+ >
+ > :::image type="content" source="media/howto-deploy-java-liberty-app/create-sql-database-networking.png" alt-text="Screenshot of the Azure portal that shows the Networking tab of the Create SQL Database page with the Connectivity method and Firewall rules settings highlighted." lightbox="media/howto-deploy-java-liberty-app/create-sql-database-networking.png":::
+
+Now that you created the database and ARO cluster, you can prepare the ARO to host your WebSphere Liberty application.
+
+## Configure and deploy the sample application
+
+Follow the steps in this section to deploy the sample application on the Liberty runtime. These steps use Maven.
+
+### Check out the application
+
+Clone the sample code for this guide by using the following commands. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aro).
+
+```bash
+git clone https://github.com/Azure-Samples/open-liberty-on-aro.git
+cd open-liberty-on-aro/3-integration/connect-db/mssql
+git checkout 20240116
+```
+
+If you see a message about being in "detached HEAD" state, this message is safe to ignore. It just means you checked out a tag.
+
+There are a few samples in the repository. We use *3-integration/connect-db/mssql/*. Here's the file structure of the application:
+
+```
+mssql
+├─ src/main/
+│ ├─ aro/
+│ │ ├─ db-secret.yaml
+│ │ ├─ openlibertyapplication.yaml
+│ │ ├─ webspherelibertyapplication.yaml
+│ ├─ docker/
+│ │ ├─ Dockerfile
+│ │ ├─ Dockerfile-ol
+│ ├─ liberty/config/
+│ │ ├─ server.xml
+│ ├─ java/
+│ ├─ resources/
+│ ├─ webapp/
+├─ pom.xml
+```
+
+The directories *java*, *resources*, and *webapp* contain the source code of the sample application. The code declares and uses a data source named `jdbc/JavaEECafeDB`.
+
+In the *aro* directory, there are three deployment files. *db-secret.yaml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *webspherelibertyapplication.yaml* is used in this quickstart to deploy the WebSphere Liberty Application. Use the file *openlibertyapplication.yaml* to deploy the Open Liberty Application if you deployed the Open Liberty Operator in section [Deploy IBM WebSphere Liberty or Open Liberty on Azure Red Hat OpenShift](#deploy-ibm-websphere-liberty-or-open-liberty-on-azure-red-hat-openshift).
+
+In the *docker* directory, there are two files to create the application image with either WebSphere Liberty or Open Liberty. These files are *Dockerfile* and *Dockerfile-ol*, respectively. You use the file *Dockerfile* to build the application image with WebSphere Liberty in this quickstart. Similarly, use the file *Dockerfile-ol* to build the application image with Open Liberty if you deployed the Open Liberty Operator in section [Deploy IBM WebSphere Liberty or Open Liberty on Azure Red Hat OpenShift](#deploy-ibm-websphere-liberty-or-open-liberty-on-azure-red-hat-openshift).
+
+In the *liberty/config* directory, the *server.xml* file is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
+
+### Build the project
+
+Now that you gathered the necessary properties, you can build the application by using the following commands. The POM file for the project reads many variables from the environment. As part of the Maven build, these variables are used to populate values in the YAML files located in *src/main/aro*. You can do something similar for your application outside Maven if you prefer.
+
+```bash
+cd <path-to-your-repo>/3-integration/connect-db/mssql
+
+# The following variables are used for deployment file generation into target.
+export DB_SERVER_NAME=<server-name>.database.windows.net
+export DB_NAME=<database-name>
+export DB_USER=<server-admin-login>@<server-name>
+export DB_PASSWORD=<server-admin-password>
+
+mvn clean install
+```
+
+### (Optional) Test your project locally
+
+You can now run and test the project locally before deploying to Azure by using the following steps. For convenience, we use the `liberty-maven-plugin`. To learn more about the `liberty-maven-plugin`, see [Building a web application with Maven](https://openliberty.io/guides/maven-intro.html). For your application, you can do something similar using any other mechanism, such as your local IDE. You can also consider using the `liberty:devc` option intended for development with containers. You can read more about `liberty:devc` in the [Liberty docs](https://openliberty.io/docs/latest/development-mode.html#_container_support_for_dev_mode).
+
+1. Start the application by using `liberty:run`, as shown in the following example. `liberty:run` also uses the environment variables defined in the previous section.
+
+ ```bash
+ cd <path-to-your-repo>/3-integration/connect-db/mssql
+ mvn liberty:run
+ ```
+
+1. Verify that the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` or `https://localhost:9443/` in your browser and verify the application is accessible and all functions are working.
+
+1. Press <kbd>Ctrl</kbd>+<kbd>C</kbd> to stop.
+
+Next, use the following steps to containerize your project using Docker and run it as a container locally before deploying to Azure:
+
+1. Run the `docker build` command to build the image.
+
+ ```bash
+ cd <path-to-your-repo>/3-integration/connect-db/mssql/target
+ docker build -t javaee-cafe:v1 --pull --file=Dockerfile .
+ ```
+
+1. Run the image using the following command. Note we're using the environment variables defined previously.
+
+ ```bash
+ docker run -it --rm -p 9080:9080 -p 9443:9443 \
+ -e DB_SERVER_NAME=${DB_SERVER_NAME} \
+ -e DB_NAME=${DB_NAME} \
+ -e DB_USER=${DB_USER} \
+ -e DB_PASSWORD=${DB_PASSWORD} \
+ javaee-cafe:v1
+ ```
+
+1. Once the container starts, go to `http://localhost:9080/` or `https://localhost:9443/` in your browser to access the application.
+
+1. Press <kbd>Ctrl</kbd>+<kbd>C</kbd> to stop.
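If you prefer a scripted check over opening a browser in the preceding steps, a minimal sketch that uses the same port mapping as the `docker run` command is:

```bash
# Print the HTTP status code returned by the locally running container. Expect 200 once the app is ready.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9080/
```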
+
+### Build image and push to the image stream
+
+When you're satisfied with the state of the application, you build the image remotely on the cluster by using the following steps.
+
+1. Use the following commands to identify the source directory and the Dockerfile:
+
+ ```bash
+ cd <path-to-your-repo>/3-integration/connect-db/mssql/target
+
+ # If you are deploying the application with WebSphere Liberty Operator, the existing Dockerfile is ready for you
+
+ # If you are deploying the application with Open Liberty Operator, uncomment and execute the following two commands to rename Dockerfile-ol to Dockerfile
+ # mv Dockerfile Dockerfile.backup
+ # mv Dockerfile-ol Dockerfile
+ ```
+
+1. Use the following command to create an image stream:
+
+ ```bash
+ oc create imagestream javaee-cafe
+ ```
+
+1. Use the following command to create a build configuration that specifies the image stream tag of the build output:
+
+ ```bash
+ oc new-build --name javaee-cafe-config --binary --strategy docker --to javaee-cafe:v1
+ ```
+
+1. Use the following command to start the build to upload local contents, containerize, and output to the image stream tag specified before:
+
+ ```bash
+ oc start-build javaee-cafe-config --from-dir . --follow
+ ```
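Optionally, you can confirm that the build pushed the expected tag to the image stream. This is only a verification sketch; the `v1` tag matches the image stream tag specified in the build configuration:

```bash
# Show the image stream details; the v1 tag should be listed after a successful build.
oc describe imagestream javaee-cafe
```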
+
+### Deploy and test the application
+
+Use the following steps to deploy and test the application:
+
+1. Use the following command to apply the DB secret:
+
+ ```bash
+ cd <path-to-your-repo>/3-integration/connect-db/mssql/target
+ oc apply -f db-secret.yaml
+ ```
+
+ You should see the output `secret/db-secret-mssql created`.
+
+1. Use the following command to apply the deployment file:
+
+ ```bash
+ oc apply -f webspherelibertyapplication.yaml
+ ```
+
+1. Wait until all pods are started and running successfully by using the following command:
+
+ ```bash
+ oc get pods -l app.kubernetes.io/name=javaee-cafe --watch
+ ```
+
+ You should see output similar to the following example to indicate that all the pods are running:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ javaee-cafe-67cdc95bc-2j2gr 1/1 Running 0 29s
+ javaee-cafe-67cdc95bc-fgtt8 1/1 Running 0 29s
+ javaee-cafe-67cdc95bc-h47qm 1/1 Running 0 29s
+ ```
+
+1. Use the following steps to verify the results:
+
+ 1. Use the following command to get the *host* of the Route resource deployed with the application:
+
+ ```bash
+ echo "route host: https://$(oc get route javaee-cafe --template='{{ .spec.host }}')"
+ ```
+
+    1. Copy the value of `route host` from the output, and then open it in your browser to test the application. If the web page doesn't render correctly, the app might still be starting in the background. Wait a few minutes and then try again.
+
+ 1. Add and delete a few coffees to verify the functionality of the app and the database connection.
+
+ :::image type="content" source="media/howto-deploy-java-liberty-app/cafe-app-running.png" alt-text="Screenshot of the running app." lightbox="media/howto-deploy-java-liberty-app/cafe-app-running.png":::
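If you want to script the same check instead of using a browser, a minimal sketch that reuses the route host from the earlier step is:

```bash
# Print the HTTP status code returned by the app over its route. Expect 200 once startup finishes.
ROUTE_HOST=$(oc get route javaee-cafe --template='{{ .spec.host }}')
curl -k -s -o /dev/null -w "%{http_code}\n" "https://${ROUTE_HOST}/"
```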
## Clean up resources
-If you're not going to continue to use the OpenShift cluster, navigate back to your working resource group. At the top of the page, under the text **Resource group**, select the resource group. Then, select **Delete resource group**.
+To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, ARO cluster, Azure SQL Database, and all related resources.
+
+```bash
+az group delete --name abc1228rg --yes --no-wait
+az group delete --name <db-resource-group> --yes --no-wait
+```
## Next steps
operator-nexus Concepts Rack Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-rack-resiliency.md
The Nexus service is engineered to uphold control plane resiliency across variou
## Instances with three or more compute racks
-Operator Nexus ensures the availability of three active control plane nodes in instances with three or more compute racks. For configurations exceeding two compute racks, an extra spare node is also maintained. These nodes are strategically distributed across different racks to guarantee control plane resiliency, when possible.
+Operator Nexus ensures the availability of three active Kubernetes control plane (KCP) nodes in instances with three or more compute racks. For configurations exceeding two compute racks, an extra spare node is also maintained. These nodes are strategically distributed across different racks to guarantee control plane resiliency, when possible.
+
+> [!TIP]
+> The Kubernetes control plane is a set of components that manage the state of a Kubernetes cluster, schedule workloads, and respond to cluster events. It includes the API server, etcd storage, scheduler, and controller managers.
+>
+> The remaining management nodes contain various operators that run the platform software, along with other components that provide supporting capabilities for monitoring, storage, and networking.
During runtime upgrades, Operator Nexus implements a sequential upgrade of the control plane nodes, thereby preserving resiliency throughout the upgrade process.
Three compute racks:
| Rack 1 | Rack 2 | Rack 3 |
|:--|:--|:--|
-| KCP | KCP | KCP |
-| KCP-spare | MGMT | MGMT |
+| KCP | KCP | KCP |
+| KCP-spare | MGMT | MGMT |
Four or more compute racks:
Two compute racks:
| KCP | KCP-spare |
| MGMT | MGMT |
-> [!NOTE]
-> Operator Nexus supports control plane resiliency in single rack configurations by having three management nodes within the rack. For example, a single rack configuration with three management servers will provide an equivalent number of active control planes to ensure resiliency within a rack.
+Single compute rack:
+
+Operator Nexus supports control plane resiliency in single rack configurations by having three management nodes within the rack. For example, a single rack configuration with three management servers provides an equivalent number of active control plane nodes to ensure resiliency within the rack.
+
+| Rack 1 |
+||
+| KCP |
+| KCP |
+| KCP |
-## Impacts to on-premises instance
+## Resiliency implications of lost quorum
-In disaster situations when the control plane loses quorum, there are impacts to the Kubernetes API across the instance. This scenario can impact a workload's ability to read and write Customer Resources (CRs) and talk across racks.
+In disaster situations when the control plane loses quorum, there are impacts to the Kubernetes API across the instance. This scenario can affect a workload's ability to read and write Custom Resources (CRs) and talk across racks.
## Related Links
operator-nexus Reference Nexus Platform Runtime Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-nexus-platform-runtime-upgrades.md
+
+ Title: Operator Nexus Platform Cluster runtime upgrades
+description: Detail the cadence and process Nexus uses to release new runtime versions to customers
+ Last updated : 12/29/2023+++++
+# Nexus Platform Cluster runtime upgrade governance
+
+This document details how Nexus releases, manages, and supports various platform runtime upgrades for near edge customers.
+
+Operator Nexus releases three minor platform cluster runtime versions per year, with monthly patch versions in between.
+
+Operator Nexus supports n-2 platform cluster runtime releases for customers, providing approximately one year of support upon release.
+
+## Understanding Nexus Cluster versioning
+
+Nexus Platform Cluster versions follow semantic versioning principles (https://semver.org/), which ensure that users can make informed decisions about version selection. The following rules govern the changes allowed in a version:
+
+- Major version to introduce fundamentally incompatible functionality or interface changes.
+- Minor version to introduce functionality while retaining a backwards compatible interface.
+- Patch version to make backwards compatible modifications, such as bug or security vulnerability fixes.
+
+Nexus Cluster versions utilize the same Major.Minor.Patch scheme. Using semantic versioning includes a critical principle of immutability. **Once a versioned package has been released, the contents of that version WILL NOT be modified. Any modifications MUST be released as a new version.**
+
+The Platform Cluster version is represented in the Nexus Cluster resource in the `clusterVersion` property. At the time of cluster creation, the version is specified in the cluster resource and must contain a supported version. To update, the cluster updateVersion action is called with a payload specifying the desired version, which must be one of the supported update versions for that cluster. The cluster property `availableUpgradeVersions` contains the list of eligible versions specific to that cluster's hardware and current version.
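As an illustration only, and assuming the `networkcloud` Azure CLI extension is installed (parameter names can vary by extension version, and the resource names here are placeholders), you can inspect those properties on the Cluster resource:

```bash
# Inspect the Nexus Cluster resource; look for clusterVersion and availableUpgradeVersions in the output.
az networkcloud cluster show \
  --name <cluster-name> \
  --resource-group <resource-group> \
  --output json
```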
+
+## Nexus Platform Cluster release cadence
+
+Operator Nexus targets a new minor version platform cluster release in February, June, and October every year. A customer can decide when to apply the minor version to a Nexus instance. However, these minor releases aren't optional and need to be taken to stay in support.
+
+These platform cluster releases consist of new minor Kubernetes releases for the infrastructure, new versions of Azure Linux, and other critical components to the underlying platform.
+
+In addition to minor releases, Operator Nexus releases patch platform cluster releases in between minor releases. In general, these releases are optional to apply.
+
+## Patch Platform Cluster runtime releases
+
+Platform Cluster patch releases will be scheduled monthly to provide customers with an updated, secure version of Azure Linux. These releases will be applied to the latest minor release.
+
+Operator Nexus will also release patch platform cluster runtime releases addressing critical functional or high severity security issues to the latest minor release.
+
+## Platform Cluster runtime releases out of support
+
+When a customer is on a release that has moved out of support, Microsoft attempts to mitigate customer tickets, but it might not be possible to address every issue. When a runtime minor release drops out of support, it's no longer available for deploying a new instance.
+
+
+
+When an instance is running an n-3 version:
+
+- The cluster continues to run; however, normal operations might start to degrade because newer versions of software aren't given the same testing and integration.
+- Support tickets that are raised continue to receive support, but it might not be possible to mitigate the issues.
+- The n-3 release is no longer available to customers to deploy a new instance.
+- There's no supported upgrade path (more details follow), so customers must repave instances.
+- Platform Cluster runtime versions past support might continue to run, but Microsoft doesn't guarantee that all functionality is compatible with the newest version of software in the Cluster Manager. An upgrade path is supported for customers on supported releases. Upgrading from an n-3 version or greater isn't supported and requires a repave of the site. Customers need to execute a platform cluster runtime upgrade before a site reaches n-3, which is usually within four months of the end-of-support (EOS) date.
+- From a certificate perspective, customers currently must update their platform cluster runtime within a year of the most recent platform cluster runtime upgrade or first deployment to keep certificates valid and able to connect to Azure. Instances with invalid certificates require a new deployment.
+
+## Skipping minor releases
+
+Platform Cluster runtime minor releases can't be skipped due to the upgrade requirements of Kubernetes. A customer wanting to go from an n-2 version to an n version needs to perform multiple platform cluster runtime upgrades.
+
+## Related links
+
+[How to perform a Platform Cluster runtime upgrade](./howto-cluster-runtime-upgrade.md)
operator-service-manager Publisher Resource Preview Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/publisher-resource-preview-management.md
Last updated 09/11/2023 -+ # Publisher Tenants, subscriptions, regions and preview management
partner-solutions Qumulo Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-virtual-desktop.md
The solution architecture comprises the following components:
- [FSLogix Profile](/fslogix/overview-what-is-fslogix) [Containers](/fslogix/concepts-container-types#profile-container) to connect each AVD user to their assigned profile on the ANQ storage as part of the sign-in process. - [Microsoft Entra Domain Services](/azure/active-directory-domain-services/overview) to provide user authentication and manage access to Azure-based resources. - [Azure Virtual Networking](/azure/virtual-network/virtual-networks-overview)-- [VNet Injection](/azure/spring-apps/how-to-deploy-in-azure-virtual-network?tabs=azure-portal) to connect each region's ANQ instance to the customer's own Azure subscription resources.
+- [VNet Injection](../../spring-apps/enterprise/how-to-deploy-in-azure-virtual-network.md?tabs=azure-portal) to connect each region's ANQ instance to the customer's own Azure subscription resources.
## Considerations
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
Utilizing an application side pool together with PgBouncer on the database serve
* Whenever the server is restarted during scale operations, HA failover, or a restart, the PgBouncer is also restarted along with the server virtual machine. Hence the existing connections have to be re-established. * Due to a known issue, the portal doesn't show all PgBouncer parameters. Once you enable PgBouncer and save the parameter, you have to exit Parameter screen (for example, click Overview) and then get back to Parameters page. * Transaction and statement pool modes can't be used along with prepared statements. Refer to the [PgBouncer documentation](https://www.pgbouncer.org/features.html) to check other limitations of chosen pool mode.
+* If PgBouncer is deployed as a feature, it becomes a potential single point of failure. If the PgBouncer feature is down, it can disrupt the entire database connection pool and cause downtime for the application. To mitigate this single point of failure, you can set up multiple PgBouncer instances behind a load balancer for high availability on Azure VMs.
+* PgBouncer is a lightweight application that uses a single-threaded architecture. While this design is great for the majority of application workloads, in applications that create a very large number of short-lived connections it can affect PgBouncer performance and limit the ability to scale your application. You might need to distribute the connection load across multiple PgBouncer instances on Azure VMs, or consider alternative multithreaded solutions, such as [PgCat](https://github.com/postgresml/pgcat) on an Azure VM.
> [!IMPORTANT] > Parameter pgbouncer.client_tls_sslmode for built-in PgBouncer feature has been deprecated in Azure Database for PostgreSQL flexible server with built-in PgBouncer feature enabled. When TLS/SSL for connections to Azure Database for PostgreSQL flexible server is enforced via setting the **require_secure_transport** server parameter to ON, TLS/SSL is automatically enforced for connections to built-in PgBouncer. This setting to enforce SSL/TLS is on by default on creation of a new Azure Database for PostgreSQL flexible server instance and enabling the built-in PgBouncer feature. For more on SSL/TLS in Azure Database for PostgreSQL flexible server see this [doc.](./concepts-networking.md#tls-and-ssl)
-For those customers that are looking for simplified management, built-in high availability, easy connectivity with containerized applications and are interested in utilizing most popular configuration parameters with PGBouncer built-in PGBouncer feature is good choice. For customers looking for full control of all parameters and debugging experience another choice could be setting up PGBouncer on Azure VM as an alternative.
+For customers who are looking for simplified management, built-in high availability, easy connectivity with containerized applications, and who want to use the most popular configuration parameters, the built-in PgBouncer feature is a good choice. For customers who are looking for multithreaded scalability, full control of all parameters, and a debugging experience, setting up PgBouncer on an Azure VM is an alternative.
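For reference, connecting through the built-in PgBouncer only changes the port: the feature listens on port 6432 on the same host name. The connection values in this sketch are placeholders:

```bash
# Connect through the built-in PgBouncer (port 6432) instead of directly to PostgreSQL (port 5432).
psql "host=<server-name>.postgres.database.azure.com port=6432 dbname=<database-name> user=<user-name> sslmode=require"
```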
## Next steps
postgresql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-server-parameters.md
To find the minimum `work_mem` value for a specific query, especially one genera
| Attribute | Value |
|:--|--:|
-| Default value | 211MB |
+| Default value | Dependent on server memory |
| Allowed value | 1MB-2GB |
| Type | Dynamic |
| Level | Global and granular |
| Azure-Specific Notes | |

#### Description
-`maintenance_work_mem` is a configuration parameter in PostgreSQL that governs the amount of memory allocated for maintenance operations, such as `VACUUM`, `CREATE INDEX`, and `ALTER TABLE`. Unlike `work_mem`, which affects memory allocation for query operations, `maintenance_work_mem` is reserved for tasks that maintain and optimize the database structure. Adjusting this parameter appropriately can help enhance the efficiency and speed of database maintenance operations.
+`maintenance_work_mem` is a configuration parameter in PostgreSQL that governs the amount of memory allocated for maintenance operations, such as `VACUUM`, `CREATE INDEX`, and `ALTER TABLE`. Unlike `work_mem`, which affects memory allocation for query operations, `maintenance_work_mem` is reserved for tasks that maintain and optimize the database structure.
+
+#### Key points
+
+* **Vacuum memory cap**: If you intend to speed up the cleanup of dead tuples by increasing `maintenance_work_mem`, be aware that VACUUM has a built-in limitation for collecting dead tuple identifiers, with the ability to use only up to 1GB of memory for this process.
+* **Separation of memory for autovacuum**: The `autovacuum_work_mem` setting allows you to control the memory used by autovacuum operations independently. It acts as a subset of the `maintenance_work_mem`, meaning that you can decide how much memory autovacuum uses without affecting the memory allocation for other maintenance tasks and data definition operations.
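As a small illustration of the session-level behavior (the connection values and the 512MB figure are placeholders, not recommendations), you can raise the setting for a single session before running a manual maintenance task:

```bash
# Both -c commands run in the same psql session, so the SET applies to the SHOW (and to any maintenance command you add).
psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=<database-name> user=<admin-user> sslmode=require" \
  -c "SET maintenance_work_mem = '512MB';" \
  -c "SHOW maintenance_work_mem;"
```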
## Next steps
postgresql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-version-policy.md
Previously updated : 12/21/2023 Last updated : 1/30/2024
The table below provides the retirement details for PostgreSQL major versions. T
| Version | What's New | Azure support start date | Retirement date (Azure) |
| --- | --- | --- | --- |
| [PostgreSQL 9.5 (retired)](https://www.postgresql.org/about/news/postgresql-132-126-1111-1016-9621-and-9525-released-2165/)| [Features](https://www.postgresql.org/docs/9.5/release-9-5.html) | April 18, 2018 | February 11, 2021 |
-| [PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021
-| [PostgreSQL 10 (retired)](https://www.postgresql.org/about/news/postgresql-10-released-1786/) | [Features](https://wiki.postgresql.org/wiki/New_in_postgres_10) | June 4, 2018 | November 10, 2022
+| [PostgreSQL 9.6 (retired)](https://www.postgresql.org/about/news/postgresql-96-released-1703/) | [Features](https://wiki.postgresql.org/wiki/NewIn96) | April 18, 2018 | November 11, 2021 |
+| [PostgreSQL 10 (retired)](https://www.postgresql.org/about/news/postgresql-10-released-1786/) | [Features](https://wiki.postgresql.org/wiki/New_in_postgres_10) | June 4, 2018 | November 10, 2022 |
| [PostgreSQL 11](https://www.postgresql.org/about/news/postgresql-11-released-1894/) | [Features](https://www.postgresql.org/docs/11/release-11.html) | July 24, 2019 | November 9, 2024 |
-| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Sept 22, 2020 | November 14, 2024
-| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | May 25, 2021 | November 13, 2025
-| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | June 29, 2022 | November 12, 2026
-| [PostgreSQL 15](https://www.postgresql.org/about/news/postgresql-15-released-2526/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | May 15, 2023 | November 11, 2027
+| [PostgreSQL 12](https://www.postgresql.org/about/news/postgresql-12-released-1976/) | [Features](https://www.postgresql.org/docs/12/release-12.html) | Sept 22, 2020 | November 14, 2024 |
+| [PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/) | [Features](https://www.postgresql.org/docs/13/release-13.html) | May 25, 2021 | November 13, 2025 |
+| [PostgreSQL 14](https://www.postgresql.org/about/news/postgresql-14-released-2318/) | [Features](https://www.postgresql.org/docs/14/release-14.html) | June 29, 2022 | November 12, 2026 |
+| [PostgreSQL 15](https://www.postgresql.org/about/news/postgresql-15-released-2526/) | [Features](https://www.postgresql.org/docs/15/release-15.html) | May 15, 2023 | November 11, 2027 |
## PostgreSQL 11 support in Azure Database for PostgreSQL single server and Azure Database for PostgreSQL flexible server
You might continue to run the retired version in Azure Database for PostgreSQL f
- New service capabilities developed by Azure Database for PostgreSQL flexible server might only be available to supported database server versions. - Uptime SLAs will apply solely to Azure Database for PostgreSQL flexible server service-related issues and not to any downtime caused by database engine-related bugs. - In the extreme event of a serious threat to the service caused by the PostgreSQL database engine vulnerability identified in the retired database version, Azure might choose to stop your database server to secure the service. In such case, you'll be notified to upgrade the server before bringing the server online.
+- New extensions introduced for Azure Database for PostgreSQL flexible server won't be supported on PostgreSQL versions that the community has retired.
## PostgreSQL version syntax
postgresql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-users.md
Your server admin user is a member of the azure_pg_admin role. However, the serv
The PostgreSQL engine uses privileges to control access to database objects, as discussed in the [PostgreSQL product documentation](https://www.postgresql.org/docs/current/static/sql-createrole.html). In Azure Database for PostgreSQL flexible server, the server admin user is granted these privileges: -- Sign in, NOSUPERUSER, INHERIT, CREATEDB, CREATEROLE, REPLICATION
+- Sign in, NOSUPERUSER, INHERIT, CREATEDB, CREATEROLE
The server admin user account can be used to create more users and grant those users into the azure_pg_admin role. Also, the server admin account can be used to create less privileged users and roles that have access to individual databases and schemas.
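A minimal sketch of that pattern, with placeholder names and password, looks like the following. The role attributes illustrate a less privileged role than the server admin account described above:

```bash
# Create a database-scoped role with the server admin account (all values are placeholders).
psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=<database-name> user=<admin-user> sslmode=require" \
  -c "CREATE ROLE db_user WITH LOGIN NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION PASSWORD '<strong-password>';" \
  -c "GRANT CONNECT ON DATABASE <database-name> TO db_user;"
```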
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-portal.md
Last updated 01/17/2024 -
- - ignite-2023
+
postgresql How To Server Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-cli.md
Title: Download server logs for Azure Database for PostgreSQL flexible server wi
description: This article describes how to download server logs by using the Azure CLI. +
az postgres flexible-server server-logs download --resource-group <myresourcegro
## Next steps - To enable and disable server logs from the portal, see [Enable, list, and download server logs for Azure Database for PostgreSQL flexible server](./how-to-server-logs-portal.md).-- Learn more about [logging](./concepts-logging.md).
+- Learn more about [logging](./concepts-logging.md).
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 01/22/2024 Last updated : 01/30/2024 # Azure Policy built-in definitions for Azure Database for PostgreSQL
reliability Migrate Search Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-search-service.md
Title: Migrate Azure AI Search to availability zone support description: Learn how to migrate Azure AI Search to availability zone support.-+ Last updated 08/01/2022
reliability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview.md
Last updated 08/21/2023
-+ # Azure reliability documentation + Reliability consists of two principles: resiliency and availability. The goal of resiliency is to avoid failures and, if they still occur, to return your application to a fully functioning state. The goal of availability is to provide consistent access to your application or workload. It's important to plan for proactive reliability based on your application requirements. Azure includes built-in reliability services that you can use and manage based on your business needs. Whether it's a single hardware node failure, a rack level failure, a datacenter outage, or a large-scale regional outage, Azure provides solutions that improve reliability. For example, availability sets ensure that the virtual machines deployed on Azure are distributed across multiple isolated hardware nodes in a cluster. Availability zones protect customers' applications and data from datacenter failures across multiple physical locations within a region. **Regions** and **availability zones** are central to your application design and resiliency strategy and are discussed in greater detail later in this article.
For detailed service-specific reliability guidance, including availability zones
For information on reliability and reliability principles and architecture in Microsoft Azure services, see [Microsoft Azure Well-Architected Framework: Reliability](/azure/architecture/framework/#reliability). ++ ## Reliability requirements The required level of reliability for any Azure solution depends on several considerations. Availability and latency SLA and other business requirements drive the architectural choices and resiliency level and should be considered first. Availability requirements range from how much downtime is acceptable – and how much it costs your business – to the amount of money and time that you can realistically invest in making an application highly available.
Two important metrics to consider are the recovery time objective and recovery p
- **Recovery point objective (RPO)** is the maximum duration of data loss that is acceptable during a disaster. RTO and RPO are non-functional requirements of a system and should be dictated by business requirements. To derive these values, it's a good idea to conduct a risk assessment and clearly understand the cost of downtime or data loss.   ++ ## Regions and availability zones
+>[!VIDEO https://learn-video.azurefd.net/vod/player?id=d36b5b2d-8bd2-43df-a796-b0c77b2f82fc]
+ Regions and availability zones are a big part of the reliability equation. Regions feature multiple, physically separate availability zones. These availability zones are connected by a high-performance network featuring less than 2ms latency between physical zones. Low latency helps your data stay synchronized and accessible when things go wrong. You can use this infrastructure strategically as you architect applications and data infrastructure that automatically replicate and deliver uninterrupted services between zones and across regions. Microsoft Azure services support availability zones and are enabled to drive your cloud operations at optimum high availability while supporting your cross-region recovery and business continuity strategy needs.
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
To explore how Azure App Service can bolster the reliability and resiliency of y
### High availability #### :::image type="icon" source="media/icon-recommendation-high.svg"::: **ASP-1 - Deploy zone-redundant App Service plans**
-To enhance the resiliency and reliability of your business-critical workloads, it's recommended that you deploy your new App Service Plans with zone-redundancy. Follow the steps to [redeploy to availability zone support](#create-a-resource-with-availability-zone-enabled), configure your pipelines to redeploy your WebApp on the new App Services Plan, and then use a [Blue-Green deployment](/azure/spring-apps/concepts-blue-green-deployment-strategies) approach to failover to the new site.
+To enhance the resiliency and reliability of your business-critical workloads, it's recommended that you deploy your new App Service Plans with zone-redundancy. Follow the steps to [redeploy to availability zone support](#create-a-resource-with-availability-zone-enabled), configure your pipelines to redeploy your WebApp on the new App Services Plan, and then use a [Blue-Green deployment](../spring-apps/enterprise/concepts-blue-green-deployment-strategies.md) approach to failover to the new site.
By distributing your applications across multiple availability zones, you can ensure their continued operation even in the event of a datacenter-level failure. For more information on availability zone support in Azure App Service, see [Availability zone support](#availability-zone-support).
reliability Reliability Spring Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-spring-apps.md
To create a service in Azure Spring Apps with zone-redundancy enabled using the
### Enable your own resource with availability zones enabled
-You can enable your own resource in Azure Spring Apps, such as your own persistent storage. However, you must make sure to enable zone-redundancy for your resource. For more information, see [How to enable your own persistent storage in Azure Spring Apps](../spring-apps/how-to-custom-persistent-storage.md).
+You can enable your own resource in Azure Spring Apps, such as your own persistent storage. However, you must make sure to enable zone-redundancy for your resource. For more information, see [How to enable your own persistent storage in Azure Spring Apps](../spring-apps/enterprise/how-to-custom-persistent-storage.md).
### Zone down experience
Use the following steps to create an Azure Traffic Manager instance for Azure Sp
| service-sample-a | East US | gateway / auth-service / account-service | | service-sample-b | West Europe | gateway / auth-service / account-service |
-1. Set up a custom domain for the service instances. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Apps](../spring-apps/how-to-custom-domain.md). After successful setup, both service instances will bind to the same custom domain, such as `bcdr-test.contoso.com`.
+1. Set up a custom domain for the service instances. For more information, see [Tutorial: Map an existing custom domain to Azure Spring Apps](../spring-apps/enterprise/how-to-custom-domain.md). After successful setup, both service instances will bind to the same custom domain, such as `bcdr-test.contoso.com`.
1. Create a traffic manager and two endpoints. For instructions, see [Quickstart: Create a Traffic Manager profile using the Azure portal](../traffic-manager/quickstart-create-traffic-manager-profile.md), which produces the following Traffic Manager profile:
The environment is now set up. If you used the example values in the linked arti
Azure Front Door is a global, scalable entry point that uses the Microsoft global edge network to create fast, secure, and widely scalable web applications. Azure Front Door provides the same multi-geo redundancy and routing to the closest region as Azure Traffic Manager. Azure Front Door also provides advanced features such as TLS protocol termination, application layer processing, and Web Application Firewall (WAF). For more information, see [What is Azure Front Door?](../frontdoor/front-door-overview.md)
-The following diagram shows the architecture of a multi-region redundancy, virtual-network-integrated Azure Spring Apps service instance. The diagram shows the correct reverse proxy configuration for Application Gateway and Front Door with a custom domain. This architecture is based on the scenario described in [Expose applications with end-to-end TLS in a virtual network](../spring-apps/expose-apps-gateway-end-to-end-tls.md). This approach combines two Application-Gateway-integrated Azure Spring Apps virtual-network-injection instances into a geo-redundant instance.
+The following diagram shows the architecture of a multi-region redundancy, virtual-network-integrated Azure Spring Apps service instance. The diagram shows the correct reverse proxy configuration for Application Gateway and Front Door with a custom domain. This architecture is based on the scenario described in [Expose applications with end-to-end TLS in a virtual network](../spring-apps/enterprise/expose-apps-gateway-end-to-end-tls.md). This approach combines two Application-Gateway-integrated Azure Spring Apps virtual-network-injection instances into a geo-redundant instance.
:::image type="content" source="media/reliability-spring-apps/multi-region-spring-apps-reference-architecture.png" alt-text="Diagram showing the architecture of a multi-region Azure Spring Apps service instance." lightbox="media/reliability-spring-apps/multi-region-spring-apps-reference-architecture.png"::: ## Next steps -- [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](../spring-apps/quickstart.md)
+- [Quickstart: Deploy your first Spring Boot app in Azure Spring Apps](../spring-apps/enterprise/quickstart.md)
- [Reliability in Azure](./overview.md)
role-based-access-control Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/best-practices.md
Previously updated : 11/29/2023 Last updated : 01/30/2024 #Customer intent: As a dev, devops, or it admin, I want to learn how to best use Azure RBAC.
Some roles are identified as [privileged administrator roles](./role-assignments
- Remove unnecessary privileged role assignments. - Avoid assigning a privileged administrator role when a [job function role](./role-assignments-steps.md#job-function-roles) can be used instead. - If you must assign a privileged administrator role, use a narrow scope, such as resource group or resource, instead of a broader scope, such as management group or subscription.-- If you are assigning a role with permission to create role assignments, consider adding a condition to constrain the role assignment. For more information, see [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md).
+- If you are assigning a role with permission to create role assignments, consider adding a condition to constrain the role assignment. For more information, see [Delegate Azure role assignment management to others with conditions](delegate-role-assignments-portal.md).
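As a hedged sketch of a conditional assignment (the role, principal, scope, and the Reader role GUID inside the condition are placeholders you would replace, and the portal's condition editor is the easiest way to author the condition text), the Azure CLI call can look like the following:

```bash
# Delegate role assignment management, but only allow the delegate to assign the single role named in the condition.
az role assignment create \
  --assignee <delegate-object-id> \
  --role "Role Based Access Control Administrator" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group> \
  --condition "((!(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})) OR (@Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals {acdd72a7-3385-48ef-bd42-f606fba81ae7}))" \
  --condition-version "2.0"
```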
For more information, see [List or manage privileged administrator role assignments](./role-assignments-list-portal.md#list-or-manage-privileged-administrator-role-assignments).
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 01/18/2024 Last updated : 01/29/2024
The following table provides a brief description of each built-in role. Click th
> | [Integration Service Environment Contributor](#integration-service-environment-contributor) | Lets you manage integration service environments, but not access to them. | a41e2c5b-bd99-4a07-88f4-9bf657a760b8 | > | [Integration Service Environment Developer](#integration-service-environment-developer) | Allows developers to create and update workflows, integration accounts and API connections in integration service environments. | c7aa55d3-1abb-444a-a5ca-5e51e485d6ec | > | [Intelligent Systems Account Contributor](#intelligent-systems-account-contributor) | Lets you manage Intelligent Systems accounts, but not access to them. | 03a6d094-3444-4b3d-88af-7477090a9e5e |
-> | [Logic App Contributor](#logic-app-contributor) | Lets you manage logic apps, but not change access to them. | 87a39d53-fc1b-424a-814c-f7e04687dc9e |
-> | [Logic App Operator](#logic-app-operator) | Lets you read, enable, and disable logic apps, but not edit or update them. | 515c2055-d9d4-4321-b1b9-bd0c9a0f79fe |
+> | [Logic App Contributor](#logic-app-contributor) | Lets you manage Consumption logic apps, but not change access to them. | 87a39d53-fc1b-424a-814c-f7e04687dc9e |
+> | [Logic App Operator](#logic-app-operator) | Lets you read, enable, and disable Consumption logic apps, but not edit or update them. | 515c2055-d9d4-4321-b1b9-bd0c9a0f79fe |
+> | [Logic Apps Standard Contributor (Preview)](#logic-apps-standard-contributor) | You can manage all aspects of a Standard logic app and workflows. You can't change access or ownership. | ad710c24-b039-4e85-a019-deb4a06e8570 |
+> | [Logic Apps Standard Developer (Preview)](#logic-apps-standard-developer) | You can create and edit workflows, connections, and settings for a Standard logic app. You can't make changes outside the workflow scope. | 523776ba-4eb2-4600-a3c8-f2dc93da4bdb |
+> | [Logic Apps Standard Operator (Preview)](#logic-apps-standard-operator) | You can enable, resubmit, and disable workflows as well as create connections. You can't edit workflows or settings. | b70c96e9-66fe-4c09-b6e7-c98e69c98555 |
+> | [Logic Apps Standard Reader (Preview)](#logic-apps-standard-reader) | You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history. | 4accf36b-2c05-432f-91c8-5c532dff4c73 |
> | **Identity** | | | > | [Domain Services Contributor](#domain-services-contributor) | Can manage Azure AD Domain Services and related network configurations | eeaeda52-9324-47f6-8069-5d5bade478b2 | > | [Domain Services Reader](#domain-services-reader) | Can view Azure AD Domain Services and related network configurations | 361898ef-9ed1-48c2-849c-a832951106bb |
Grants access to read map related data from an Azure maps account.
Allow read, write and delete access to Azure Spring Cloud Config Server
-[Learn more](/azure/spring-apps/how-to-access-data-plane-azure-ad-rbac)
+[Learn more](../spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Allow read, write and delete access to Azure Spring Cloud Config Server
Allow read access to Azure Spring Cloud Config Server
-[Learn more](/azure/spring-apps/how-to-access-data-plane-azure-ad-rbac)
+[Learn more](../spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Allow read access to Azure Spring Cloud Data
Allow read, write and delete access to Azure Spring Cloud Service Registry
-[Learn more](/azure/spring-apps/how-to-access-data-plane-azure-ad-rbac)
+[Learn more](../spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Allow read, write and delete access to Azure Spring Cloud Service Registry
Allow read access to Azure Spring Cloud Service Registry
-[Learn more](/azure/spring-apps/how-to-access-data-plane-azure-ad-rbac)
+[Learn more](../spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
Lets you manage logic apps, but not change access to them.
### Logic App Operator
-Lets you read, enable, and disable logic apps, but not edit or update them.
-
-[Learn more](/azure/logic-apps/logic-apps-securing-a-logic-app)
+Lets you read, enable, and disable Consumption logic apps, but not edit or update them. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md#access-to-logic-app-operations)
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/*/read | Read Insights alert rules |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/*/read | Read Insights alert rules. |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/metricAlerts/*/read | |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/diagnosticSettings/*/read | Gets diagnostic settings for Logic Apps |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/metricDefinitions/*/read | Gets the available metrics for Logic Apps. |
-> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/*/read | Reads Logic Apps resources. |
-> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/disable/action | Disables the workflow. |
-> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/enable/action | Enables the workflow. |
-> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/validate/action | Validates the workflow. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/diagnosticSettings/*/read | Get diagnostic settings for Consumption logic apps. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/metricDefinitions/*/read | Get the available metrics for Consumption logic apps. |
+> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/*/read | Read Consumption logic app resources. |
+> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/disable/action | Disable the workflow. |
+> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/enable/action | Enable the workflow. |
+> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/validate/action | Validate the workflow. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Get or list deployment operations. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Read Connection Gateways. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/*/read | Read Connections. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/*/read | Read Custom API. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties on an App Service Plan |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Get or list resource groups. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Read connection gateways. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/*/read | Read connections. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/*/read | Read custom APIs. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties for an App Service Plan. |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Lets you read, enable, and disable logic apps, but not edit or update them.
} ```
+<a name="logic-apps-standard-contributor"></a>
+
+### Logic Apps Standard Contributor (Preview)
+
+You can manage all aspects of a Standard logic app and workflows. You can't change access or ownership. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md#access-to-logic-app-operations)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/certificates/* | Create and manage a certificate. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/* | Create and manage a connection gateway. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/* | Create and manage a connection. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/* | Create and manage a custom API. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/listSitesAssignedToHostName/read | Get names of sites assigned to hostname. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/* | Create and manage an App Service Plan. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/* | Create and manage a web app. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "Description": "You can manage all aspects of a Standard logic app and workflows. You can't change access or ownership.",
+ "Metadata": {
+ "CreatedBy": null,
+ "CreatedOn": "2023-08-02T22:35:40.6977003Z",
+ "UpdatedBy": null,
+ "UpdatedOn": "2023-08-23T18:55:27.6632763Z"
+ },
+ "IsBuiltIn": true,
+ "AdminSecurityClaim": "Microsoft.Web",
+ "Id": "ad710c24b0394e85a019deb4a06e8570",
+ "Name": "Logic Apps Standard Contributor (Preview)",
+ "IsServiceRole": false,
+ "Permissions": [
+ {
+ "Actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Resources/deployments/operations/read",
+ "Microsoft.Resources/subscriptions/operationresults/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Support/*",
+ "Microsoft.Web/certificates/*",
+ "Microsoft.Web/connectionGateways/*",
+ "Microsoft.Web/connections/*",
+ "Microsoft.Web/customApis/*",
+ "Microsoft.Web/listSitesAssignedToHostName/read",
+ "Microsoft.Web/serverFarms/*",
+ "Microsoft.Web/sites/*"
+ ],
+ "NotActions": [],
+ "DataActions": [],
+ "NotDataActions": [],
+ "Condition": null,
+ "ConditionVersion": null
+ }
+ ],
+ "Scopes": [
+ "/"
+ ]
+}
+```
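For example, you can assign this role by its ID or display name with the Azure CLI; the principal and scope values in this sketch are placeholders:

```bash
# Assign the Logic Apps Standard Contributor (Preview) role at resource group scope.
az role assignment create \
  --assignee <principal-object-id> \
  --role "ad710c24-b039-4e85-a019-deb4a06e8570" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
```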
+
+<a name="logic-apps-standard-developer"></a>
+
+### Logic Apps Standard Developer (Preview)
+
+You can create and edit workflows, connections, and settings for a Standard logic app. You can't make changes outside the workflow scope. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md#access-to-logic-app-operations)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Get a list of connection gateways. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/* | Create and manage a connection. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/* | Create and manage a custom API. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties for an App Service Plan. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/appsettings/read | Get the web app settings. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/list/Action | List the web app's security sensitive settings, such as publishing credentials, app settings, and connection strings. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/Read | Get the web app configuration settings. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/Write | Update the web app's configuration settings. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/web/appsettings/delete | Delete the web app's app setting. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/web/appsettings/read | Get a single app setting for the web app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/web/appsettings/write | Create or update a single app setting for the web app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/deployWorkflowArtifacts/action | Create the artifacts in a Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/* | Get or list hostruntime artifacts for the web app or function app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/listworkflowsconnections/action | No information available. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/publish/Action | Publish the web app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/Read | Get the web app properties. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/appsettings/read | Get the web app slot's settings. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/appsettings/write | Create or update a single app setting for the web app slot. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/list/Action | List the web app slot's security sensitive settings, such as publishing credentials, app settings, and connection strings. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/Read | Get the web app slot's configuration settings. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/web/appsettings/delete | Delete the web app slot's app setting. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/deployWorkflowArtifacts/action | Create the artifacts in a deployment slot for the Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/listworkflowsconnections/action | No information available. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/publish/Action | Publish a web app slot. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflows/read | List the workflows in a deployment slot for the Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflowsconfiguration/read | Get the workflow's app configuration information based on its ID in a deployment slot for the Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflows/* | Manage the workflows in the Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflowsconfiguration/* | Get the workflow's app configuration information based on its ID for the Standard logic app. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "Description": "You can create and edit workflows, connections, and settings for a Standard logic app. You can't make changes outside the workflow scope.",
+ "Metadata": {
+ "CreatedBy": null,
+ "CreatedOn": "2023-08-02T22:37:24.4551086Z",
+ "UpdatedBy": null,
+ "UpdatedOn": "2023-08-23T18:56:32.6015183Z"
+ },
+ "IsBuiltIn": true,
+ "AdminSecurityClaim": "Microsoft.Web",
+ "Id": "523776ba4eb24600a3c8f2dc93da4bdb",
+ "Name": "Logic Apps Standard Developer (Preview)",
+ "IsServiceRole": false,
+ "Permissions": [
+ {
+ "Actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Resources/deployments/operations/read",
+ "Microsoft.Resources/subscriptions/operationresults/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Support/*",
+ "Microsoft.Web/connectionGateways/*/read",
+ "Microsoft.Web/connections/*",
+ "Microsoft.Web/customApis/*",
+ "Microsoft.Web/serverFarms/read",
+ "microsoft.web/sites/config/appsettings/read",
+ "Microsoft.Web/sites/config/list/Action",
+ "Microsoft.Web/sites/config/Read",
+ "microsoft.web/sites/config/Write",
+ "microsoft.web/sites/config/web/appsettings/delete",
+ "microsoft.web/sites/config/web/appsettings/read",
+ "microsoft.web/sites/config/web/appsettings/write",
+ "microsoft.web/sites/deployWorkflowArtifacts/action",
+ "microsoft.web/sites/hostruntime/*",
+ "microsoft.web/sites/listworkflowsconnections/action",
+ "Microsoft.Web/sites/publish/Action",
+ "Microsoft.Web/sites/Read",
+ "microsoft.web/sites/slots/config/appsettings/read",
+ "microsoft.web/sites/slots/config/appsettings/write",
+ "Microsoft.Web/sites/slots/config/list/Action",
+ "Microsoft.Web/sites/slots/config/Read",
+ "microsoft.web/sites/slots/config/web/appsettings/delete",
+ "microsoft.web/sites/slots/deployWorkflowArtifacts/action",
+ "microsoft.web/sites/slots/listworkflowsconnections/action",
+ "Microsoft.Web/sites/slots/publish/Action",
+ "microsoft.web/sites/slots/workflows/read",
+ "microsoft.web/sites/slots/workflowsconfiguration/read",
+ "microsoft.web/sites/workflows/*",
+ "microsoft.web/sites/workflowsconfiguration/*"
+ ],
+ "NotActions": [],
+ "DataActions": [],
+ "NotDataActions": [],
+ "Condition": null,
+ "ConditionVersion": null
+ }
+ ],
+ "Scopes": [
+ "/"
+ ]
+}
+```
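To confirm the exact actions this role grants in your environment, you can pull the role definition with Azure PowerShell. A minimal sketch; the role name comes from the JSON above.

```azurepowershell
# Sketch: list the actions granted by the Logic Apps Standard Developer (Preview) role.
$role = Get-AzRoleDefinition -Name "Logic Apps Standard Developer (Preview)"
$role.Actions | Sort-Object
```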
+
+<a name="logic-apps-standard-operator"></a>
+
+### Logic Apps Standard Operator (Preview)
+
+You can enable, resubmit, and disable workflows as well as create connections. You can't edit workflows or settings. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md#access-to-logic-app-operations)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Get a list of connection gateways. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/*/read | No information available. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/*/read | No information available. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties for an App Service Plan. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/applySlotConfig/Action | No information available. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/Read | Get the web app configuration settings. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/* | Get or list hostruntime artifacts for the web app or function app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/Read | Get the web app properties. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/restart/Action | Restart the web app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/Read | Get the web app slot's configuration settings. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/restart/Action | Restart the web app slot. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/slotsswap/Action | Swap the web app deployment slots. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/start/Action | Start the web app slot. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/stop/Action | Stop the web app slot. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflows/read | List the workflows in a deployment slot for the Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflowsconfiguration/read | Get the workflow's app configuration information based on its ID in a deployment slot for the Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slotsdiffs/Action | Get the differences in the configuration between the web app and slots. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slotsswap/Action | Swap the web app deployment slots. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/start/Action | Start the web app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/stop/Action | Stop the web app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflows/read | List the workflows in the Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflowsconfiguration/read | Get the workflow's app configuration based on its ID for the Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/write | Create or update a web app. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "Description": "You can enable, resubmit, and disable workflows as well as create connections. You can't edit workflows or settings.",
+ "Metadata": {
+ "CreatedBy": null,
+ "CreatedOn": "2023-08-02T22:38:47.4360166Z",
+ "UpdatedBy": null,
+ "UpdatedOn": "2023-08-23T19:03:50.1098085Z"
+ },
+ "IsBuiltIn": true,
+ "AdminSecurityClaim": "Microsoft.Web",
+ "Id": "b70c96e966fe4c09b6e7c98e69c98555",
+ "Name": "Logic Apps Standard Operator (Preview)",
+ "IsServiceRole": false,
+ "Permissions": [
+ {
+ "Actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Resources/deployments/operations/read",
+ "Microsoft.Resources/subscriptions/operationresults/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Support/*",
+ "Microsoft.Web/connectionGateways/*/read",
+ "Microsoft.Web/connections/*/read",
+ "Microsoft.Web/customApis/*/read",
+ "Microsoft.Web/serverFarms/read",
+ "Microsoft.Web/sites/applySlotConfig/Action",
+ "Microsoft.Web/sites/config/Read",
+ "microsoft.web/sites/hostruntime/*",
+ "Microsoft.Web/sites/Read",
+ "Microsoft.Web/sites/restart/Action",
+ "Microsoft.Web/sites/slots/config/Read",
+ "Microsoft.Web/sites/slots/restart/Action",
+ "Microsoft.Web/sites/slots/slotsswap/Action",
+ "Microsoft.Web/sites/slots/start/Action",
+ "Microsoft.Web/sites/slots/stop/Action",
+ "microsoft.web/sites/slots/workflows/read",
+ "microsoft.web/sites/slots/workflowsconfiguration/read",
+ "Microsoft.Web/sites/slotsdiffs/Action",
+ "Microsoft.Web/sites/slotsswap/Action",
+ "Microsoft.Web/sites/start/Action",
+ "Microsoft.Web/sites/stop/Action",
+ "microsoft.web/sites/workflows/read",
+ "microsoft.web/sites/workflowsconfiguration/read",
+ "Microsoft.Web/sites/write"
+ ],
+ "NotActions": [],
+ "DataActions": [],
+ "NotDataActions": [],
+ "Condition": null,
+ "ConditionVersion": null
+ }
+ ],
+ "Scopes": [
+ "/"
+ ]
+}
+```
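Because operators usually work against a specific Standard logic app, you might scope the assignment to that one resource (a Microsoft.Web/sites resource). A hedged sketch; the subscription ID, resource group, site name, and object ID below are placeholders.

```azurepowershell
# Sketch: assign the Logic Apps Standard Operator (Preview) role on a single Standard logic app.
# All IDs and names below are placeholders.
$scope = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-logicapps/providers/Microsoft.Web/sites/my-standard-logic-app"
New-AzRoleAssignment `
    -ObjectId "11111111-1111-1111-1111-111111111111" `
    -RoleDefinitionName "Logic Apps Standard Operator (Preview)" `
    -Scope $scope
```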
+
+<a name="logic-apps-standard-reader"></a>
+
+### Logic Apps Standard Reader (Preview)
+
+You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md#access-to-logic-app-operations)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Get a list of connection gateways. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/*/read | No information available. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/*/read | No information available. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties for an App Service Plan. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/webhooks/api/workflows/triggers/read | List the web app's hostruntime workflow triggers. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/webhooks/api/workflows/runs/read | List the web app's hostruntime workflow runs. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflows/read | List the workflows in the Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflowsconfiguration/read | Get the workflow's app configuration based on its ID for the Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflows/read | List the workflows in a deployment slot for the Standard logic app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflowsconfiguration/read | Get the workflow's app configuration information based on its ID in a deployment slot for the Standard logic app. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "Description": "You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history.",
+ "Metadata": {
+ "CreatedBy": null,
+ "CreatedOn": "2023-08-02T22:33:56.2374571Z",
+ "UpdatedBy": null,
+ "UpdatedOn": "2023-08-23T19:05:11.7148533Z"
+ },
+ "IsBuiltIn": true,
+ "AdminSecurityClaim": "Microsoft.Web",
+ "Id": "4accf36b2c05432f91c85c532dff4c73",
+ "Name": "Logic Apps Standard Reader (Preview)",
+ "IsServiceRole": false,
+ "Permissions": [
+ {
+ "Actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Resources/deployments/operations/read",
+ "Microsoft.Resources/subscriptions/operationresults/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Support/*",
+ "Microsoft.Web/connectionGateways/*/read",
+ "Microsoft.Web/connections/*/read",
+ "Microsoft.Web/customApis/*/read",
+ "Microsoft.Web/serverFarms/read",
+ "microsoft.web/sites/hostruntime/webhooks/api/workflows/triggers/read",
+ "microsoft.web/sites/hostruntime/webhooks/api/workflows/runs/read",
+ "microsoft.web/sites/workflows/read",
+ "microsoft.web/sites/workflowsconfiguration/read",
+ "microsoft.web/sites/slots/workflows/read",
+ "microsoft.web/sites/slots/workflowsconfiguration/read"
+ ],
+ "NotActions": [],
+ "DataActions": [],
+ "NotDataActions": [],
+ "Condition": null,
+ "ConditionVersion": null
+ }
+ ],
+ "Scopes": [
+ "/"
+ ]
+}
+```
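To audit who holds this read-only access, you can filter existing role assignments by the role name. A minimal sketch, assuming a placeholder resource group.

```azurepowershell
# Sketch: list principals assigned the Logic Apps Standard Reader (Preview) role in a resource group.
# "rg-logicapps" is a placeholder.
Get-AzRoleAssignment -ResourceGroupName "rg-logicapps" |
    Where-Object RoleDefinitionName -eq "Logic Apps Standard Reader (Preview)" |
    Select-Object DisplayName, SignInName, Scope
```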
+ ## Identity
role-based-access-control Conditions Authorization Actions Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-authorization-actions-attributes.md
Title: Authorization actions and attributes (preview)
+ Title: Authorization actions and attributes
description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) in authorization
Previously updated : 11/29/2023 Last updated : 01/30/2024 #Customer intent: As a dev, devops, or it admin, I want to
-# Authorization actions and attributes (preview)
-
-> [!IMPORTANT]
-> Delegating Azure role assignment management with conditions is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Authorization actions and attributes
## Authorization actions
This section lists the authorization attributes you can use in your condition ex
## Next steps

-- [Examples to delegate Azure role assignment management with conditions (preview)](delegate-role-assignments-examples.md)
-- [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md)
+- [Examples to delegate Azure role assignment management with conditions](delegate-role-assignments-examples.md)
+- [Delegate Azure role assignment management to others with conditions](delegate-role-assignments-portal.md)
role-based-access-control Delegate Role Assignments Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-examples.md
Title: Examples to delegate Azure role assignment management with conditions (preview) - Azure ABAC
+ Title: Examples to delegate Azure role assignment management with conditions - Azure ABAC
description: Examples to delegate Azure role assignment management to other users by using Azure attribute-based access control (Azure ABAC).
Previously updated : 12/01/2023 Last updated : 01/30/2024 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
-# Examples to delegate Azure role assignment management with conditions (preview)
-
-> [!IMPORTANT]
-> Delegating Azure role assignment management with conditions is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Examples to delegate Azure role assignment management with conditions
This article lists examples of how to delegate Azure role assignment management to other users with conditions.
New-AzRoleAssignment -ObjectId $principalId -Scope $scope -RoleDefinitionId $rol
## Next steps

-- [Authorization actions and attributes (preview)](conditions-authorization-actions-attributes.md)
-- [Azure role assignment condition format and syntax (preview)](conditions-format.md)
-- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
+- [Authorization actions and attributes](conditions-authorization-actions-attributes.md)
+- [Azure role assignment condition format and syntax](conditions-format.md)
+- [Troubleshoot Azure role assignment conditions](conditions-troubleshoot.md)
role-based-access-control Delegate Role Assignments Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-overview.md
Previously updated : 12/01/2023 Last updated : 01/30/2024 #Customer intent: As a dev, devops, or it admin, I want to delegate Azure role assignment management to other users who are closer to the decision, but want to limit the scope of the role assignments.
Here are the primary issues with the current method of delegating role assignmen
Instead of assigning the Owner or User Access Administrator roles, a more secure method is to constrain a delegate's ability to create role assignments.
-## A more secure method: Delegate role assignment management with conditions (preview)
-
-> [!IMPORTANT]
-> Delegating Azure role assignment management with conditions is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+## A more secure method: Delegate role assignment management with conditions
Delegating role assignment management with conditions is a way to restrict the role assignments a user can create. In the preceding example, Alice can allow Dara to create some role assignments on her behalf, but not all role assignments. For example, Alice can constrain the roles that Dara can assign and constrain the principals that Dara can assign roles to. This delegation with conditions is sometimes referred to as *constrained delegation* and is implemented using [Azure attribute-based access control (Azure ABAC) conditions](conditions-overview.md).
To delegate role assignment management with conditions, you assign roles as you
Choose from a list of condition templates. Select **Configure** to specify the roles, principal types, or principals.
- For more information, see [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md).
+ For more information, see [Delegate Azure role assignment management to others with conditions](delegate-role-assignments-portal.md).
:::image type="content" source="./media/shared/condition-templates.png" alt-text="Screenshot of Add role assignment condition with a list of condition templates." lightbox="./media/shared/condition-templates.png":::
To delegate role assignment management with conditions, you assign roles as you
If the condition templates don't work for your scenario or if you want more control, you can use the condition editor.
- For examples, see [Examples to delegate Azure role assignment management with conditions (preview)](delegate-role-assignments-examples.md).
+ For examples, see [Examples to delegate Azure role assignment management with conditions](delegate-role-assignments-examples.md).
:::image type="content" source="./media/shared/delegate-role-assignments-expression.png" alt-text="Screenshot of condition editor in Azure portal showing a role assignment condition to delegate role assignment management." lightbox="./media/shared/delegate-role-assignments-expression.png":::
To delegate role assignment management with conditions, you assign roles as you
## Built-in roles with conditions
-The [Key Vault Data Access Administrator](built-in-roles.md#key-vault-data-access-administrator) role already has a built-in condition to constrain role assignments. This role enables you to manage access to Key Vault secrets, certificates, and keys. It's exclusively focused on access control without the ability to assign privileged roles such as Owner or User Access Administrator roles. It allows better separation of duties for scenarios like managing encryption at rest across data services to further comply with least privilege principle. The condition constrains role assignments to the following Azure Key Vault roles:
+The [Key Vault Data Access Administrator](built-in-roles.md#key-vault-data-access-administrator) and [Virtual Machine Data Access Administrator (preview)](built-in-roles.md#virtual-machine-data-access-administrator-preview) roles already have a built-in condition to constrain role assignments.
+
+The Key Vault Data Access Administrator role enables you to manage access to Key Vault secrets, certificates, and keys. It's exclusively focused on access control without the ability to assign privileged roles such as Owner or User Access Administrator roles. It allows better separation of duties for scenarios like managing encryption at rest across data services to further comply with least privilege principle. The condition constrains role assignments to the following Azure Key Vault roles:
- [Key Vault Administrator](built-in-roles.md#key-vault-administrator) - [Key Vault Certificates Officer](built-in-roles.md#key-vault-certificates-officer)
If you want to further constrain the Key Vault Data Access Administrator role as
## Known issues
-Here are the known issues related to delegating role assignment management with conditions (preview):
+Here are the known issues related to delegating role assignment management with conditions:
- You can't delegate role assignment management with conditions using [Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
- You can't have a role assignment with a Microsoft.Storage data action and an ABAC condition that uses a GUID comparison operator. For more information, see [Troubleshoot Azure RBAC](troubleshooting.md#symptomauthorization-failed).
-- This preview isn't available in Azure Government or Microsoft Azure operated by 21Vianet.

## License requirements
Here are the known issues related to delegating role assignment management with
## Next steps

-- [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md)
+- [Delegate Azure role assignment management to others with conditions](delegate-role-assignments-portal.md)
- [What is Azure attribute-based access control (Azure ABAC)?](conditions-overview.md)
-- [Examples to delegate Azure role assignment management with conditions (preview)](delegate-role-assignments-examples.md)
+- [Examples to delegate Azure role assignment management with conditions](delegate-role-assignments-examples.md)
role-based-access-control Delegate Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-portal.md
Title: Delegate Azure role assignment management to others with conditions (preview) - Azure ABAC
+ Title: Delegate Azure role assignment management to others with conditions - Azure ABAC
description: How to delegate Azure role assignment management to other users by using Azure attribute-based access control (Azure ABAC).
Previously updated : 12/01/2023 Last updated : 01/30/2024 #Customer intent: As a dev, devops, or it admin, I want to delegate Azure role assignment management to other users who are closer to the decision, but want to limit the scope of the role assignments.
-# Delegate Azure role assignment management to others with conditions (preview)
-
-> [!IMPORTANT]
-> Delegating Azure role assignment management with conditions is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+# Delegate Azure role assignment management to others with conditions
As an administrator, you might get several requests to grant access to Azure resources that you want to delegate to someone else. You could assign a user the [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator) roles, but these are highly privileged roles. This article describes a more secure way to [delegate role assignment management](delegate-role-assignments-overview.md) to other users in your organization, but add restrictions for those role assignments. For example, you can constrain the roles that can be assigned or constrain the principals the roles can be assigned to.
To help determine the permissions the delegate needs, answer the following quest
- Which principals can the delegate assign roles to?
- Can the delegate remove any role assignments?
-Once you know the permissions that delegate needs, you use the following steps to add a condition to the delegate's role assignment. For example conditions, see [Examples to delegate Azure role assignment management with conditions (preview)](delegate-role-assignments-examples.md).
+Once you know the permissions that the delegate needs, you use the following steps to add a condition to the delegate's role assignment. For example conditions, see [Examples to delegate Azure role assignment management with conditions](delegate-role-assignments-examples.md).
## Step 2: Start a new role assignment
There are two ways that you can add a condition. You can use a condition templat
# [Template](#tab/template)
-1. On the **Conditions** tab under **Delegation type**, select the **Constrained (recommended)** option.
-
- | Option | Select this option to |
- | | |
- | **Constrained (recommended)** | Pick the roles or principals the user can use in role assignments |
- | **Not constrained** | Allow the user to assign any role to any principal |
+1. On the **Conditions** tab under **What user can do**, select the **Allow user to only assign selected roles to selected principals (fewer privileges)** option.
- :::image type="content" source="./media/shared/condition-constrained.png" alt-text="Screenshot of Add role assignment with the Constrained option selected." lightbox="./media/shared/condition-constrained.png":::
+ :::image type="content" source="./media/shared/condition-constrained.png" alt-text="Screenshot of Add role assignment with the constrained option selected." lightbox="./media/shared/condition-constrained.png":::
-1. Select **Add condition**.
+1. Select **Select roles and principals**.
The Add role assignment condition page appears with a list of condition templates.
There are two ways that you can add a condition. You can use a condition templat
| Condition template | Select this template to |
| | |
- | Constrain roles | Constrain the roles a user can assign |
- | Constrain roles and principal types | Constrain the roles a user can assign and the types of principals the user can assign roles to |
- | Constrain roles and principals | Constrain the roles a user can assign and the principals the user can assign roles to |
+ | Constrain roles | Allow user to only assign roles you select |
+ | Constrain roles and principal types | Allow user to only assign roles you select<br/>Allow user to only assign these roles to principal types you select (users, groups, or service principals) |
+ | Constrain roles and principals | Allow user to only assign roles you select<br/>Allow user to only assign these roles to principals you select |
1. In the configure pane, add the required configurations.
- :::image type="content" source="./media/delegate-role-assignments-portal/condition-template-configure-pane.png" alt-text="Screenshot of configure pane for a condition with selection added." lightbox="./media/delegate-role-assignments-portal/condition-template-configure-pane.png":::
+ :::image type="content" source="./media/shared/condition-template-configure-pane.png" alt-text="Screenshot of configure pane for a condition with selection added." lightbox="./media/shared/condition-template-configure-pane.png":::
1. Select **Save** to add the condition to the role assignment.
If the condition templates don't work for your scenario or if you want more cont
### Open condition editor
-1. On the **Conditions** tab under **Delegation type**, select the **Constrained (recommended)** option.
-
- | Option | Select this option to |
- | | |
- | **Constrained (recommended)** | Pick the roles or principals the user can use in role assignments |
- | **Not constrained** | Allow the user to assign any role to any principal |
+1. On the **Conditions** tab under **What user can do**, select the **Allow user to only assign selected roles to selected principals (fewer privileges)** option.
:::image type="content" source="./media/shared/condition-constrained.png" alt-text="Screenshot of Add role assignment with the Constrained option selected." lightbox="./media/shared/condition-constrained.png":::
-1. Select **Add condition**.
+1. Select **Select roles and principals**.
The Add role assignment condition page appears with a list of condition templates.
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
role-based-access-control Role Assignments List Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-portal.md
Previously updated : 11/29/2023 Last updated : 01/30/2024
On the **Role assignments** tab, you can list and see the count of privileged ad
1. To manage privileged administrator role assignments, see the **Privileged** card and click **View assignments**.
- On the **Manage privileged role assignments** page, you can add a condition to constrain the privileged role assignment or remove the role assignment. For more information, see [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md).
+ On the **Manage privileged role assignments** page, you can add a condition to constrain the privileged role assignment or remove the role assignment. For more information, see [Delegate Azure role assignment management to others with conditions](delegate-role-assignments-portal.md).
:::image type="content" source="./media/role-assignments-list-portal/access-control-role-assignments-privileged-manage.png" alt-text="Screenshot of Manage privileged role assignments page showing how to add conditions or remove role assignments." lightbox="./media/role-assignments-list-portal/access-control-role-assignments-privileged-manage.png":::
role-based-access-control Role Assignments Portal Subscription Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-portal-subscription-admin.md
Title: Assign a user as an administrator of an Azure subscription - Azure RBAC
-description: Learn how to make a user an administrator of an Azure subscription using the Azure portal and Azure role-based access control (Azure RBAC).
+ Title: Assign a user as an administrator of an Azure subscription with conditions - Azure RBAC
+description: Learn how to make a user an administrator of an Azure subscription with conditions using the Azure portal and Azure role-based access control (Azure RBAC).
Previously updated : 05/10/2023 Last updated : 01/30/2024
-# Assign a user as an administrator of an Azure subscription
+# Assign a user as an administrator of an Azure subscription with conditions
-To make a user an administrator of an Azure subscription, assign them the [Owner](built-in-roles.md#owner) role at the subscription scope. The Owner role gives the user full access to all resources in the subscription, including the permission to grant access to others. These steps are the same as any other role assignment.
+To make a user an administrator of an Azure subscription, you assign them the [Owner](built-in-roles.md#owner) role at the subscription scope. The Owner role gives the user full access to all resources in the subscription, including the permission to grant access to others. Since the Owner role is a highly privileged role, Microsoft recommends you add a condition to constrain the role assignment. For example, you can allow a user to only assign the Virtual Machine Contributor role to service principals.
+
+This article describes how to assign a user as an administrator of an Azure subscription with conditions. These steps are the same as any other role assignment.
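As a rough sketch of the end result (not steps from this article), a constrained Owner assignment created outside the portal might look like the following Azure PowerShell; the object ID, subscription ID, role definition GUID, and condition text are placeholder assumptions, and the portal builds the real condition for you.

```azurepowershell
# Sketch: assign Owner with a condition that limits which roles the delegate can assign.
# The object ID, subscription ID, and role definition GUID are placeholders.
$condition = @"
(
 (!(ActionMatches{'Microsoft.Authorization/roleAssignments/write'}))
 OR
 (@Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals {<allowed-role-definition-guid>})
)
"@
New-AzRoleAssignment `
    -ObjectId "<delegate-object-id>" `
    -RoleDefinitionName "Owner" `
    -Scope "/subscriptions/<subscription-id>" `
    -Condition $condition `
    -ConditionVersion "2.0"
```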
## Prerequisites
The [Owner](built-in-roles.md#owner) role grants full access to manage all resour
1. Click **Next**.
-## Step 5: Assign role
+## Step 5: Add a condition
+
+Since the Owner role is a highly privileged role, Microsoft recommends you add a condition to constrain the role assignment.
+
+1. On the **Conditions** tab under **What user can do**, select the **Allow user to only assign selected roles to selected principals (fewer privileges)** option.
+
+ :::image type="content" source="./media/role-assignments-portal-subscription-admin/condition-constrained-owner.png" alt-text="Screenshot of Add role assignment with the constrained option selected." lightbox="./media/role-assignments-portal-subscription-admin/condition-constrained-owner.png":::
+
+1. Select **Select roles and principals**.
+
+ The Add role assignment condition page appears with a list of condition templates.
+
+ :::image type="content" source="./media/shared/condition-templates.png" alt-text="Screenshot of Add role assignment condition with a list of condition templates." lightbox="./media/shared/condition-templates.png":::
+
+1. Select a condition template and then select **Configure**.
+
+ | Condition template | Select this template to |
+ | | |
+ | Constrain roles | Allow user to only assign roles you select |
+ | Constrain roles and principal types | Allow user to only assign roles you select<br/>Allow user to only assign these roles to principal types you select (users, groups, or service principals) |
+ | Constrain roles and principals | Allow user to only assign roles you select<br/>Allow user to only assign these roles to principals you select |
+
+ > [!TIP]
+ > If you want to allow most role assignments, but don't allow specific role assignments, you can use the advanced condition editor and manually add a condition. For an example, see [Example: Allow most roles, but don't allow others to assign roles](delegate-role-assignments-examples.md#example-allow-most-roles-but-dont-allow-others-to-assign-roles).
+
+1. In the configure pane, add the required configurations.
+
+ :::image type="content" source="./media/shared/condition-template-configure-pane.png" alt-text="Screenshot of configure pane for a condition with selection added." lightbox="./media/shared/condition-template-configure-pane.png":::
+
+1. Select **Save** to add the condition to the role assignment.
+
+## Step 6: Assign role
1. On the **Review + assign** tab, review the role assignment settings.
role-based-access-control Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-portal.md
Previously updated : 12/01/2023 Last updated : 01/30/2024
If you selected one of the following privileged roles, follow the steps in this
- [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator) - [User Access Administrator](built-in-roles.md#user-access-administrator)
-1. On the **Conditions** tab under **Delegation type**, select the **Constrained (recommended)** option.
+1. On the **Conditions** tab under **What user can do**, select the **Allow user to only assign selected roles to selected principals (fewer privileges)** option.
:::image type="content" source="./media/shared/condition-constrained.png" alt-text="Screenshot of Add role assignment with the Constrained option selected." lightbox="./media/shared/condition-constrained.png":::
-1. Click **Add condition** to add a condition that constrains the roles and principals this user can assign roles to.
+1. Click **Select roles and principals** to add a condition that constrains the roles and principals this user can assign roles to.
-1. Follow the steps in [Delegate Azure role assignment management to others with conditions (preview)](delegate-role-assignments-portal.md#step-3-add-a-condition).
+1. Follow the steps in [Delegate Azure role assignment management to others with conditions](delegate-role-assignments-portal.md#step-3-add-a-condition).
# [Storage condition](#tab/storage-condition)
sap Run Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/run-ansible.md
Last updated 11/17/2021
-+ # Get started with Ansible configuration
sap Dbms Guide Ha Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-ha-ibm.md
description: Establish high availability of IBM Db2 LUW on Azure virtual machine
+ Last updated 01/18/2024
sap High Availability Guide Rhel Ibm Db2 Luw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-ibm-db2-luw.md
description: Establish high availability of IBM Db2 LUW on Azure virtual machine
tags: azure-resource-manager+ keywords: 'SAP'
sap High Availability Guide Rhel Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-netapp-files.md
description: Establish high availability (HA) for SAP NetWeaver on Azure Virtual
tags: azure-resource-manager+
sap High Availability Guide Rhel Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md
description: Establish high availability for SAP NetWeaver on Azure Virtual Mach
tags: azure-resource-manager+
sap High Availability Guide Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel.md
description: This article describes Azure Virtual Machines high availability for
tags: azure-resource-manager+
sap High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-netapp-files.md
+ Last updated 01/17/2024
sap High Availability Guide Suse Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-azure-files.md
+ Last updated 01/17/2024
sap High Availability Guide Suse Nfs Simple Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-simple-mount.md
+ Last updated 01/17/2024
sap High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse.md
+ Last updated 01/17/2024
sap Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md
vm-linux+ Last updated 01/17/2024
sap Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md
documentationcenter: saponazure
tags: azure-resource-manager-+
sap Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md
-+ Last updated 01/22/2024
sap Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
description: SAP HANA scale-out with HANA system replication (HSR) and Pacemaker
tags: azure-resource-manager-+ ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
sap Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse.md
-+ Last updated 01/16/2024
sap Sap Hana High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md
+ Last updated 01/16/2024 - # High availability for SAP HANA on Azure VMs on SUSE Linux Enterprise Server
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-intro.md
- ignite-2023 Previously updated : 10/27/2023 Last updated : 01/30/2024 # AI enrichment in Azure AI Search
-In Azure AI Search, *AI enrichment* calls the APIs of [Azure AI services](/azure/ai-services/what-are-ai-services) to process content that isn't full text searchable in its raw form. Through enrichment, analysis and inference are used to create searchable content and structure where none previously existed.
+In Azure AI Search, *AI enrichment* refers to integration with [Azure AI services](/azure/ai-services/what-are-ai-services) to process content that isn't searchable in its raw form. Through enrichment, analysis and inference are used to create searchable content and structure where none previously existed.
-Because Azure AI Search is a full text search solution, the purpose of AI enrichment is to improve the utility of your content in search-related scenarios:
+Because Azure AI Search is a text and vector search solution, the purpose of AI enrichment is to improve the utility of your content in search-related scenarios. Source content must be textual (you can't enrich vectors), but the content created by an enrichment pipeline can be vectorized and indexed in a vector store using skills like [Text Split skill](cognitive-search-skill-textsplit.md) for chunking and [AzureOpenAiEmbedding skill](cognitive-search-skill-azure-openai-embedding.md) for encoding.
-+ Apply translation and language detection for multi-lingual search
-+ Apply entity recognition to extract people names, places, and other entities from large chunks of text
-+ Apply key phrase extraction to identify and output important terms
-+ Apply Optical Character Recognition (OCR) to recognize printed and handwritten text in binary files
-+ Apply image analysis to describe image content, and output the descriptions as searchable text fields
+Built-in skills apply the following transformation and processing to raw content:
+
++ Translation and language detection for multi-lingual search
++ Entity recognition to extract people names, places, and other entities from large chunks of text
++ Key phrase extraction to identify and output important terms
++ Optical Character Recognition (OCR) to recognize printed and handwritten text in binary files
++ Image analysis to describe image content, and output the descriptions as searchable text fields

AI enrichment is an extension of an [**indexer pipeline**](search-indexer-overview.md) that connects to Azure data sources. An enrichment pipeline has all of the components of an indexer pipeline (indexer, data source, index), plus a [**skillset**](cognitive-search-working-with-skillsets.md) that specifies atomic enrichment steps.
search Hybrid Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/hybrid-search-overview.md
- ignite-2023 Previously updated : 11/01/2023 Last updated : 01/29/2024 # Hybrid search using vectors and full text in Azure AI Search
This article explains the concepts, benefits, and limitations of hybrid search.
## How does hybrid search work?
-In Azure AI Search, vector indexes containing embeddings can live alongside textual and numerical fields allowing you to issue hybrid full text and vector queries. Hybrid queries can take advantage of existing functionality like filtering, faceting, sorting, scoring profiles, and [semantic ranking](semantic-search-overview.md) in a single search request.
+In Azure AI Search, vector fields containing embeddings can live alongside textual and numerical fields, allowing you to formulate hybrid queries that execute in parallel. Hybrid queries can take advantage of existing functionality like filtering, faceting, sorting, scoring profiles, and [semantic ranking](semantic-search-overview.md) in a single search request.
-Hybrid search combines results from both full text and vector queries, which use different ranking functions such as BM25 and HNSW. A [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md) algorithm is used to merge results. The query response provides just one result set, using RRF to determine which matches are included.
+Hybrid search combines results from both full text and vector queries, which use different ranking functions such as BM25 and HNSW. A [Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md) algorithm merges the results. The query response provides just one result set, using RRF to pick the most relevant matches from each query.
## Structure of a hybrid query
search Knowledge Store Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-concept-intro.md
Last updated 01/10/2024
# Knowledge store in Azure AI Search
-Knowledge store is secondary storage for [AI-enriched content created by a skillset](cognitive-search-concept-intro.md) in Azure AI Search. In Azure AI Search, an indexing job always sends output to a search index, but if you attach a skillset to an indexer, you can optionally also send AI-enriched output to a container or table in Azure Storage. A knowledge store can be used for independent analysis or downstream processing in non-search scenarios like knowledge mining.
+Knowledge store is secondary storage for [AI-enriched content created by a skillset](cognitive-search-concept-intro.md) in Azure AI Search. In Azure AI Search, an indexing job always sends output to a search index, but if you attach a skillset to an indexer, you can optionally also send AI-enriched output to a container or table in Azure Storage. A knowledge store can be used for independent analysis or downstream processing in non-search scenarios like knowledge mining.
The two outputs of indexing, a search index and knowledge store, are mutually exclusive products of the same pipeline. They're derived from the same inputs and contain the same data, but their content is structured, stored, and used in different applications. :::image type="content" source="media/knowledge-store-concept-intro/knowledge-store-concept-intro.svg" alt-text="Pipeline with skillset" border="false":::
-Physically, a knowledge store is [Azure Storage](../storage/common/storage-account-overview.md), either Azure Table Storage, Azure Blob Storage, or both. Any tool or process that can connect to Azure Storage can consume the contents of a knowledge store.
+Physically, a knowledge store is [Azure Storage](../storage/common/storage-account-overview.md), either Azure Table Storage, Azure Blob Storage, or both. Any tool or process that can connect to Azure Storage can consume the contents of a knowledge store. There's no query support in Azure AI Search for retrieving content from a knowledge store.
When viewed through Azure portal, a knowledge store looks like any other collection of tables, objects, or files. The following screenshot shows a knowledge store composed of three tables. You can adopt a naming convention, such as a `kstore` prefix, to keep your content together.
The type of projection you specify in this structure determines the type of stor
+ `tables` project enriched content into Table Storage. Define a table projection when you need tabular reporting structures for inputs to analytical tools or export as data frames to other data stores. You can specify multiple `tables` within the same projection group to get a subset or cross section of enriched documents. Within the same projection group, table relationships are preserved so that you can work with all of them.
- Projected content is not aggregated or normalized. The following screenshot shows a table, sorted by key phrase, with the parent document indicated in the adjacent column. In contrast with data ingestion during indexing, there is no linguistic analysis or aggregation of content. Plural forms and differences in casing are considered unique instances.
+ Projected content isn't aggregated or normalized. The following screenshot shows a table, sorted by key phrase, with the parent document indicated in the adjacent column. In contrast with data ingestion during indexing, there's no linguistic analysis or aggregation of content. Plural forms and differences in casing are considered unique instances.
:::image type="content" source="media/knowledge-store-concept-intro/kstore-keyphrases-per-document.png" alt-text="Screenshot of key phrases and documents in a table" border="true"::: + `objects` project JSON document into Blob storage. The physical representation of an `object` is a hierarchical JSON structure that represents an enriched document.
-+ `files` project image files into Blob storage. A `file` is an image extracted from a document, transferred intact to Blob storage. Although it is named "files", it shows up in Blob Storage, not file storage.
++ `files` project image files into Blob storage. A `file` is an image extracted from a document, transferred intact to Blob storage. Although it's named "files", it shows up in Blob Storage, not file storage. ## Create a knowledge store
For data sources that support change tracking, an indexer will process new and c
### Changes to a skillset
-If you are making changes to a skillset, you should [enable caching of enriched documents](cognitive-search-incremental-indexing-conceptual.md) to reuse existing enrichments where possible.
+If you're making changes to a skillset, you should [enable caching of enriched documents](cognitive-search-incremental-indexing-conceptual.md) to reuse existing enrichments where possible.
Without incremental caching, the indexer will always process documents in order of the high water mark, without going backwards. For blobs, the indexer would process blobs sorted by `lastModified`, regardless of any changes to indexer settings or the skillset. If you change a skillset, previously processed documents aren't updated to reflect the new skillset. Documents processed after the skillset change will use the new skillset, resulting in index documents being a mix of old and new skillsets.
With incremental caching, and after a skillset update, the indexer will reuse an
### Deletions
-Although an indexer creates and updates structures and content in Azure Storage, it does not delete them. Projections continue to exist even when the indexer or skillset is deleted. As the owner of the storage account, you should delete a projection if it is no longer needed.
+Although an indexer creates and updates structures and content in Azure Storage, it doesn't delete them. Projections continue to exist even when the indexer or skillset is deleted. As the owner of the storage account, you should delete a projection if it's no longer needed.
## Next steps
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
search Resource Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-tools.md
Productivity tools are built by engineers at Microsoft, but aren't part of the A
| Tool name | Description | Source code |
|--|--|--|
| [Back up and Restore](https://github.com/liamc) | Download the retrievable fields of an index to your local device and then upload the index and its content to a new search service. | [https://github.com/liamca/azure-search-backup-restore](https://github.com/liamca/azure-search-backup-restore) |
-| [Chat with your data solution accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator/README.md) | Code and docs to create interactive search solution in production environments. | [https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) |
+| [Chat with your data solution accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator/blob/main/README.md) | Code and docs to create interactive search solution in production environments. | [https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) |
| [Knowledge Mining Accelerator](https://github.com/Azure-Samples/azure-search-knowledge-mining/blob/main/README.md) | Code and docs to jump start a knowledge store using your data. | [https://github.com/Azure-Samples/azure-search-knowledge-mining](https://github.com/Azure-Samples/azure-search-knowledge-mining) | | [Performance testing solution](https://github.com/Azure-Samples/azure-search-performance-testing/blob/main/README.md) | This solution helps you load test Azure AI Search. It uses Apache JMeter as an open source load and performance testing tool and Terraform to dynamically provision and destroy the required infrastructure on Azure. | [https://github.com/Azure-Samples/azure-search-performance-testing](https://github.com/Azure-Samples/azure-search-performance-testing) | | [Visual Studio Code extension](https://github.com/microsoft/vscode-azurecognitivesearch) | Although the extension is no longer available in the Visual Studio Code Marketplace, the code is open sourced at `https://github.com/microsoft/vscode-azurecognitivesearch`. You can clone and modify the tool for your own use. |
search Retrieval Augmented Generation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/retrieval-augmented-generation-overview.md
The decision about which information retrieval system to use is critical because
+ Security, global reach, and reliability for both data and operations.
-+ Integration with LLMs.
++ Integration with embedding models for indexing, and chat models or language understanding models for retrieval. Azure AI Search is a [proven solution for information retrieval](/azure/developer/python/get-started-app-chat-template?tabs=github-codespaces) in a RAG architecture. It provides indexing and query capabilities, with the infrastructure and security of the Azure cloud. Through code and other components, you can design a comprehensive RAG solution that includes all of the elements for generative AI over your proprietary content.
The web app provides the user experience, providing the presentation, context, a
The app server or orchestrator is the integration code that coordinates the handoffs between information retrieval and the LLM. One option is to use [LangChain](https://python.langchain.com/docs/get_started/introduction) to coordinate the workflow. LangChain [integrates with Azure AI Search](https://python.langchain.com/docs/integrations/retrievers/azure_cognitive_search), making it easier to include Azure AI Search as a [retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/) in your workflow.
-The information retrieval system provides the searchable index, query logic, and the payload (query response). The search index can contain vectors or non-vector content. Although most samples and demos include vector fields, it's not a requirement. The query is executed using the existing search engine in Azure AI Search, which can handle keyword (or term) and vector queries. The index is created in advance, based on a schema you define, and loaded with your content that's sourced from files, databases, or storage.
+The information retrieval system provides the searchable index, query logic, and the payload (query response). The search index can contain vectors or nonvector content. Although most samples and demos include vector fields, it's not a requirement. The query is executed using the existing search engine in Azure AI Search, which can handle keyword (or term) and vector queries. The index is created in advance, based on a schema you define, and loaded with your content that's sourced from files, databases, or storage.
The LLM receives the original prompt, plus the results from Azure AI Search. The LLM analyzes the results and formulates a response. If the LLM is ChatGPT, the user interaction might be a back and forth conversation. If you're using Davinci, the prompt might be a fully composed answer. An Azure solution most likely uses Azure OpenAI, but there's no hard dependency on this specific service.
There's no query type in Azure AI Search - not even semantic or vector search -
| Query feature | Purpose | Why use it | ||||
-| [Simple or full Lucene syntax](search-query-create.md) | Query execution over text and non-vector numeric content | Full text search is best for exact matches, rather than similar matches. Full text search queries are ranked using the [BM25 algorithm](index-similarity-and-scoring.md) and support relevance tuning through scoring profiles. It also supports filters and facets. |
-| [Filters](search-filters.md) and [facets](search-faceted-navigation.md) | Applies to text or numeric (non-vector) fields only. Reduces the search surface area based on inclusion or exclusion criteria. | Adds precision to your queries. |
+| [Simple or full Lucene syntax](search-query-create.md) | Query execution over text and nonvector numeric content | Full text search is best for exact matches, rather than similar matches. Full text search queries are ranked using the [BM25 algorithm](index-similarity-and-scoring.md) and support relevance tuning through scoring profiles. It also supports filters and facets. |
+| [Filters](search-filters.md) and [facets](search-faceted-navigation.md) | Applies to text or numeric (nonvector) fields only. Reduces the search surface area based on inclusion or exclusion criteria. | Adds precision to your queries. |
| [Semantic ranking](semantic-how-to-query-request.md) | Re-ranks a BM25 result set using semantic models. Produces short-form captions and answers that are useful as LLM inputs. | Easier than scoring profiles, and depending on your content, a more reliable technique for relevance tuning. | [Vector search](vector-search-how-to-query.md) | Query execution over vector fields for similarity search, where the query string is one or more vectors. | Vectors can represent all types of content, in any language. |
-| [Hybrid search](hybrid-search-how-to-query.md) | Combines any or all of the above query techniques. Vector and non-vector queries execute in parallel and are returned in a unified result set. | The most significant gains in precision and recall are through hybrid queries. |
+| [Hybrid search](hybrid-search-how-to-query.md) | Combines any or all of the above query techniques. Vector and nonvector queries execute in parallel and are returned in a unified result set. | The most significant gains in precision and recall are through hybrid queries. |
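To make the hybrid pattern concrete, here's a minimal sketch of a hybrid request against the 2023-11-01 REST API. The index name, field names (`contentVector`, `HotelName`, `Description`), and the truncated vector values are placeholders; a real request carries the full embedding (for example, 1,536 floating point values for text-embedding-ada-002).

```http
POST https://{{service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-11-01
Content-Type: application/json
api-key: {{query-api-key}}

{
  "search": "walkable hotels near live music",
  "vectorQueries": [
    {
      "kind": "vector",
      "vector": [0.012, -0.004, . . ., 0.031],
      "fields": "contentVector",
      "k": 10
    }
  ],
  "select": "HotelName, Description",
  "top": 10
}
```

The keyword portion (`search`) and the vector portion (`vectorQueries`) execute in parallel, and the service merges the two ranked lists into a single response.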
### Structure the query response
Rows are matches to the query, ranked by relevance, similarity, or both. By defa
When you're working with complex processes, a large amount of data, and expectations for millisecond responses, it's critical that each step adds value and improves the quality of the end result. On the information retrieval side, *relevance tuning* is an activity that improves the quality of the results sent to the LLM. Only the most relevant or the most similar matching documents should be included in results.
-Relevance applies to keyword (non-vector) search and to hybrid queries (over the non-vector fields). In Azure AI Search, there's no relevance tuning for similarity search and vector queries. [BM25 ranking](index-similarity-and-scoring.md) is the ranking algorithm for full text search.
+Relevance applies to keyword (nonvector) search and to hybrid queries (over the nonvector fields). In Azure AI Search, there's no relevance tuning for similarity search and vector queries. [BM25 ranking](index-similarity-and-scoring.md) is the ranking algorithm for full text search.
Relevance tuning is supported through features that enhance BM25 ranking. These approaches include:
search Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-python.md
Code samples from the Azure AI Search team demonstrate features and workflows. M
| Samples | Article | ||| | [quickstart](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart/v11) | Source code for the Python portion of [Quickstart: Full text search using the Azure SDKs](search-get-started-text.md). This article covers the basic workflow for creating, loading, and querying a search index using sample data. |
-| [quickstart-semantic-search](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/quickstart-semantic-search/) | Source code for the Python portion of [Quickstart: Semantic ranking using the Azure SDKs](search-get-started-semantic.md). It shows the index schema and query request for invoking semantic ranking. |
+| [quickstart-semantic-search](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-Semantic-Search) | Source code for the Python portion of [Quickstart: Semantic ranking using the Azure SDKs](search-get-started-semantic.md). It shows the index schema and query request for invoking semantic ranking. |
| [search-website-functions-v4](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/search-website-functions-v4) | Source code for [Tutorial: Add search to web apps](tutorial-python-overview.md). Demonstrates an end-to-end search app that includes a rich client plus components for hosting the app and handling search requests.| | [tutorial-ai-enrichment](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Tutorial-AI-Enrichment) | Source code for [Tutorial: Use Python and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob-python.md). This article shows how to create a blob indexer with a cognitive skillset, where the skillset creates and transforms raw content to make it searchable or consumable. |
search Samples Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-rest.md
Code samples from the Azure AI Search team demonstrate features and workflows. M
| Samples | Description | ||| | [Quickstart](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart) | Source code for [Quickstart: Create a search index using REST APIs](search-get-started-rest.md). This sample covers the basic workflow for creating, loading, and querying a search index using sample data. |
-| [Quickstart-vectors](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart-vector) | Source code for [Quickstart: Vector search using REST APIs](search-get-started-vector.md). This sample covers the basic workflow for indexing and querying vector data. |
+| [Quickstart-vectors](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart-vectors) | Source code for [Quickstart: Vector search using REST APIs](search-get-started-vector.md). This sample covers the basic workflow for indexing and querying vector data. |
| [Tutorial](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Tutorial) | Source code for [Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md). This sample shows you how to create a skillset that iterates over Azure blobs to extract information and infer structure.| | [Debug-sessions](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Debug-sessions) | Source code for [Tutorial: Diagnose, repair, and commit changes to your skillset](cognitive-search-tutorial-debug-sessions.md). This sample shows you how to use a skillset debug session in the Azure portal. REST is used to create the objects used during debug.| | [custom-analyzers](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/custom-analyzers) | Source code for [Tutorial: Create a custom analyzer for phone numbers](tutorial-create-custom-analyzer.md). This sample explains how to use analyzers to preserve patterns and special characters in searchable content.|
search Search File Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-file-storage-integration.md
Title: Azure Files indexer (preview)
description: Set up an Azure Files indexer to automate indexing of file shares in Azure AI Search. -+
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
Last updated 01/19/2024
# Quickstart: Vector search using REST APIs
-Get started with vector search in Azure AI Search using the **2023-11-01** REST APIs that create, load, and query a search index.
+Get started with vector stores in Azure AI Search using the **2023-11-01** REST APIs that load and query vectors.
-Search indexes can have vector and nonvector fields. You can execute pure vector queries, or hybrid queries targeting both vector *and* textual fields configured for filters, sorts, facets, and semantic reranking.
+In Azure AI Search, a *vector store* has an index schema that defines vector and nonvector fields, a vector configuration for algorithms that create the embedding space, and settings on vector field definitions that are used in query requests. The [Create Index](/rest/api/searchservice/indexes/create-or-update) API creates the vector store.
+
+You can execute pure vector queries, or hybrid queries targeting both vector *and* textual fields configured for filters, sorts, facets, and semantic reranking.
> [!NOTE]
-> The stable REST API version depends on external modules for data chunking and embedding. If you want test-drive the [built-in data chunking and vectorization (public preview)](vector-search-integrated-vectorization.md) features, try the [**Import and vectorize data** wizard](search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
+> The stable REST API version depends on external solutions for data chunking and embedding. If you want to evaluate the [built-in data chunking and vectorization (public preview)](vector-search-integrated-vectorization.md) features, try the [**Import and vectorize data** wizard](search-get-started-portal-import-vectors.md) for an end-to-end walkthrough.
## Prerequisites
Search indexes can have vector and nonvector fields. You can execute pure vector
+ An Azure subscription. [Create one for free](https://azure.microsoft.com/free/).
-+ Azure AI Search, in any region and on any tier. Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created.
++ Azure AI Search, in any region and on any tier. Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created. You can use the Free tier for this quickstart, but Basic or higher is recommended for larger data files. + Optionally, for [semantic reranking](semantic-search-overview.md) shown in the last example, your search service must be Basic tier or higher, with [semantic ranking enabled](semantic-how-to-enable-disable.md).
The vector query string is semantically similar to the search string, but has te
### Single vector search
-In this vector query, which is shortened for brevity, the `"value"` contains the vectorized text of the query input, `"fields"` determines which vector fields are searched, and `"k"` specifies the number of nearest neighbors to return.
+In this vector query, which is shortened for brevity, the `"vector"` contains the vectorized text of the query input, `"fields"` determines which vector fields are searched, and `"k"` specifies the number of nearest neighbors to return.
The vector query string is *"classic lodging near running trails, eateries, retail"* - vectorized into 1536 embeddings for this query.
api-key: {{admin-api-key}}
{ "count": true, "select": "HotelId, HotelName, Description, Category",
- "vectors": [
+ "vectorQueries": [
{
- "value": [0.01944167, 0.0040178085
+ "vector"": [0.01944167, 0.0040178085
. . . 010858015, -0.017496133], "k": 7,
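For reference, a complete body for a single vector query in the **2023-11-01** version might look like the following sketch. The `kind` property identifies the query as a prevectorized query, the vector array (truncated here) must contain the full 1536-dimension embedding, the optional `exhaustive` flag forces exhaustive KNN over the field, and the `DescriptionVector` field name is an assumption based on the hotels sample.

```http
POST https://{{service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-11-01
Content-Type: application/json
api-key: {{admin-api-key}}

{
  "count": true,
  "select": "HotelId, HotelName, Description, Category",
  "vectorQueries": [
    {
      "kind": "vector",
      "vector": [0.01944167, 0.0040178085, . . ., -0.017496133],
      "fields": "DescriptionVector",
      "k": 7,
      "exhaustive": true
    }
  ]
}
```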
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
The following properties can be set for CORS:
## Allowed updates on existing indexes
-[**Create Index**](/rest/api/searchservice/create-index) creates the physical data structures (files and inverted indices) on your search service. Once the index is created, your ability to effect changes using [**Update Index**](/rest/api/searchservice/update-index) is contingent upon whether your modifications invalidate those physical structures. Most field attributes can't be changed once the field is created in your index.
+[**Create Index**](/rest/api/searchservice/create-index) creates the physical data structures (files and inverted indexes) on your search service. Once the index is created, your ability to effect changes using [**Update Index**](/rest/api/searchservice/update-index) is contingent upon whether your modifications invalidate those physical structures. Most field attributes can't be changed once the field is created in your index.
Alternatively, you can [create an index alias](search-how-to-alias.md) that serves as a stable reference in your application code. Instead of updating your code, you can update an index alias to point to newer index versions.
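As a rough sketch (the alias and index names are hypothetical, and alias management is a preview feature at the time of writing), creating an alias is a single request:

```http
POST https://{{service-name}}.search.windows.net/aliases?api-version=2023-10-01-Preview
Content-Type: application/json
api-key: {{admin-api-key}}

{
  "name": "hotels-alias",
  "indexes": ["hotels-index-v2"]
}
```

Queries can then target `hotels-alias` wherever an index name is expected; after a rebuild, you repoint the alias to the new index instead of editing application code.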
search Search Indexer How To Access Private Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-how-to-access-private-sql.md
Title: Connect to SQL Managed Instance
description: Configure an indexer connection to access content in an Azure SQL Managed instance that's protected through a private endpoint. -+
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Document size is actually a limit on the size of the Index API request body. Sin
When estimating document size, remember to consider only those fields that can be consumed by a search service. Any binary or image data in source documents should be omitted from your calculations.
-## Vector storage limits
+## Vector index size limits
When you index documents with vector fields, Azure AI Search constructs internal vector indexes using the algorithm parameters you provide. The size of these vector indexes is restricted by the memory reserved for vector search for your service's tier (or SKU).
-The service enforces a vector storage quota **for every partition** in your search service. Each extra partition increases the available vector storage quota. This quota is a hard limit to ensure your service remains healthy, which means that further indexing attempts once the limit is exceeded results in failure. You can resume indexing once you free up available quota by either deleting some vector documents or by scaling up in partitions.
+The service enforces a vector index size quota **for every partition** in your search service. Each extra partition increases the available vector index size quota. This quota is a hard limit to ensure your service remains healthy, which means that further indexing attempts once the limit is exceeded results in failure. You can resume indexing once you free up available quota by either deleting some vector documents or by scaling up in partitions.
-The table describes the vector storage quota per partition across the service tiers (or SKU). For context, it includes:
+The table describes the vector index size quota per partition across the service tiers (or SKU). For context, it includes:
+ [Partition storage limits](#service-limits) for each tier, repeated here for context. + Amount of each partition (in GB) available for vector indexes (created when you add vector fields to an index). + Approximate number of embeddings (floating point values) per partition.
-Use the [Get Service Statistics API (GET /servicestats)](/rest/api/searchservice/get-service-statistics) to retrieve your vector storage quota. See our [documentation on vector storage](vector-search-index-size.md) for more details.
+Use the [Get Service Statistics API (GET /servicestats)](/rest/api/searchservice/get-service-statistics) to retrieve your vector index size quota. See our [documentation on vector index size](vector-search-index-size.md) for more details.
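As a quick sketch, the statistics call is a simple GET; in the response, look for the vector-related counter (assumed here to be named `vectorIndexSize`), which reports both usage and quota in bytes.

```http
GET https://{{service-name}}.search.windows.net/servicestats?api-version=2023-11-01
Content-Type: application/json
api-key: {{admin-api-key}}
```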
### Services created before July 1, 2023
Use the [Get Service Statistics API (GET /servicestats)](/rest/api/searchservice
### Services created after July 1, 2023 in supported regions
-Azure AI Search is rolling out increased vector storage limits worldwide for **new search services**, but the team is building out infrastructure capacity in certain regions. Unfortunately, existing services can't be migrated to the new limits.
+Azure AI Search is rolling out increased vector index size limits worldwide for **new search services**, but the team is building out infrastructure capacity in certain regions. Unfortunately, existing services can't be migrated to the new limits.
The following regions **do not** support increased limits:
search Search Lucene Query Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-lucene-query-architecture.md
For the **description** field, the index is as follows:
**Matching query terms against indexed terms**
-Given the inverted indices above, letΓÇÖs return to the sample query and see how matching documents are found for our example query. Recall that the final query tree looks like this:
+Given the inverted indexes above, letΓÇÖs return to the sample query and see how matching documents are found for our example query. Recall that the final query tree looks like this:
![Conceptual diagram of a boolean query with analyzed terms.][4]
search Search Manage Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-azure-cli.md
Title: Azure CLI scripts using the az search module
description: Create and configure an Azure AI Search service with the Azure CLI. You can scale a service up or down, manage admin and query api-keys, and query for system information. -+ ms.devlang: azurecli
search Search Modeling Multitenant Saas Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-modeling-multitenant-saas-applications.md
Title: Multitenancy and content isolation description: Learn about common design patterns for multitenant SaaS applications while using Azure AI Search.-+
search Search Performance Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-analysis.md
Title: Analyze performance description: Learn about the tools, behaviors, and approaches for analyzing query and indexing performance in Azure AI Search.-+
search Search Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-tips.md
Title: Performance tips description: Learn about tips and best practices for maximizing performance on a search service.-+
search Search Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-reliability.md
Title: Reliability in Azure AI Search description: Find out about reliability in Azure AI Search.-+
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Previously updated : 05/16/2023 Last updated : 01/05/2024 - subject-rbac-steps - references_regions
For more information on how to acquire a token for a specific environment, see [
If you're already a Contributor or Owner of your search service, you can present a bearer token for your user identity for authentication to Azure AI Search. The following instructions explain how to set up a Postman collection to send requests as the current user.
-1. Get a bearer token for the current user:
+1. Get a bearer token for the current user using the Azure CLI:
```azurecli az account get-access-token --scope https://search.azure.com/.default ```
+ Or by using PowerShell:
+
+ ```powershell
+ Get-AzAccessToken -ResourceUrl "https://search.azure.com/"
+ ```
+ 1. Start a new Postman collection and edit its properties. In the **Variables** tab, create the following variable: | Variable | Description |
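Once the collection variables are in place, each request presents the token in an `Authorization` header instead of an `api-key` header. A minimal sketch, assuming placeholder variable names:

```http
GET https://{{service-name}}.search.windows.net/indexes?api-version=2023-11-01
Authorization: Bearer {{bearer-token}}
```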
search Search What Is An Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-an-index.md
Although you can add new fields at any time, existing field definitions are lock
## Physical structure and size
-In Azure AI Search, the physical structure of an index is largely an internal implementation. You can access its schema, query its content, monitor its size, and manage capacity, but the clusters themselves (indices, [shards](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards), and other files and folders) are managed internally by Microsoft.
+In Azure AI Search, the physical structure of an index is largely an internal implementation. You can access its schema, query its content, monitor its size, and manage capacity, but the clusters themselves (indexes, [shards](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards), and other files and folders) are managed internally by Microsoft.
You can monitor index size in the Indexes tab in the Azure portal, or by issuing a [GET INDEX request](/rest/api/searchservice/get-index) against your search service. You can also issue a [Service Statistics request](/rest/api/searchservice/get-service-statistics) and check the value of storage size.
The size of an index is determined by:
Document composition and quantity are determined by what you choose to import. Remember that a search index should only contain searchable content. If source data includes binary fields, omit those fields unless you're using AI enrichment to crack and analyze the content to create text searchable information.
-Field attributes determine behaviors. To support those behaviors, the indexing process creates the necessary data structures. For example, for a field of type `Edm.String`, "searchable" invokes [full text search](search-lucene-query-architecture.md), which scans inverted indices for the tokenized term. In contrast, a "filterable" or "sortable" attribute supports iteration over unmodified strings. The example in the next section shows variations in index size based on the selected attributes.
+Field attributes determine behaviors. To support those behaviors, the indexing process creates the necessary data structures. For example, for a field of type `Edm.String`, "searchable" invokes [full text search](search-lucene-query-architecture.md), which scans inverted indexes for the tokenized term. In contrast, a "filterable" or "sortable" attribute supports iteration over unmodified strings. The example in the next section shows variations in index size based on the selected attributes.
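As a rough illustration (field names are hypothetical), attributes are declared per field in the index schema, and each enabled attribute adds its own supporting structure:

```json
"fields": [
  { "name": "HotelId", "type": "Edm.String", "key": true, "filterable": true },
  { "name": "Description", "type": "Edm.String", "searchable": true, "analyzer": "en.lucene" },
  { "name": "Category", "type": "Edm.String", "filterable": true, "facetable": true, "sortable": true }
]
```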
[**Suggesters**](index-add-suggesters.md) are constructs that support type-ahead or autocomplete queries. As such, when you include a suggester, the indexing process creates the data structures necessary for verbatim character matches. Suggesters are implemented at the field level, so choose only those fields that are reasonable for type-ahead.
But you'll also want to become familiar with methodologies for loading an index
+ [Create a search index](search-how-to-create-search-index.md)
-+ [Create a vector index](vector-search-how-to-create-index.md)
++ [Create a vector store](vector-search-how-to-create-index.md) + [Create an index alias](search-how-to-alias.md)
search Vector Search How To Configure Vectorizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-configure-vectorizer.md
You can use the [**Import and vectorize data wizard**](search-get-started-portal
+ A skillset that performs data chunking and vectorization of those chunks. You can omit a skillset if you only want integrated vectorization at query time, or if you don't need chunking or [index projections](index-projections-concept-intro.md) during indexing. This article assumes you already know how to [create a skillset](cognitive-search-defining-skillset.md).
-+ An index that specifies vector and non-vector fields. This article assumes you already know how to [create a vector index](vector-search-how-to-create-index.md) and covers just the steps for adding vectorizers and field assignments.
++ An index that specifies vector and non-vector fields. This article assumes you already know how to [create a vector store](vector-search-how-to-create-index.md) and covers just the steps for adding vectorizers and field assignments. + An [indexer](search-howto-create-indexers.md) that drives the pipeline.
search Vector Search How To Create Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md
Title: Add vector search
+ Title: Create a vector store
description: Create or update a search index to include vector fields.
- ignite-2023 Previously updated : 11/27/2023 Last updated : 01/29/2024
-# Add vector fields to a search index
+# Create a vector store
-In Azure AI Search, vector data is indexed as *vector fields* in a [search index](search-what-is-an-index.md).
+In Azure AI Search, a *vector store* has an index schema that defines vector and nonvector fields, a vector configuration for algorithms that create the embedding space, and settings on vector field definitions that are used in query requests. The [Create Index](/rest/api/searchservice/indexes/create-or-update) API creates the vector store.
Follow these steps to index vector data: > [!div class="checklist"]
-> + Add one or more vector configurations to an index schema.
-> + Add one or more vector fields.
-> + Load the index with vector data [as a separate step](#load-vector-data-for-indexing), or use [integrated vectorization (preview)](vector-search-integrated-vectorization.md) for data chunking and encoding during indexing.
+> + Define a schema with one or more vector configurations that specify algorithms for indexing and search
+> + Add one or more vector fields
+> + Load prevectorized data [as a separate step](#load-vector-data-for-indexing), or use [integrated vectorization (preview)](vector-search-integrated-vectorization.md) for data chunking and encoding during indexing.
This article applies to the generally available, non-preview version of [vector search](vector-search-overview.md), which assumes your application code calls external resources for chunking and encoding.
This article applies to the generally available, non-preview version of [vector
## Prerequisites
-+ Azure AI Search, in any region and on any tier. Most existing services support vector search. For services created prior to January 2019, there's a small subset that support vector search. If an index containing vector fields fails to be created or updated, this is an indicator. In this situation, a new service must be created.
++ Azure AI Search, in any region and on any tier. Most existing services support vector search. For services created prior to January 2019, there's a small subset that can't support vector search. If an index containing vector fields fails to be created or updated, that failure is an indicator that your service is on the older infrastructure. In this situation, a new service must be created.
-+ Pre-existing vector embeddings in your source documents. Azure AI Search doesn't generate vectors in the generally available version of vector search. We recommend [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) but you can use any model for vectorization. For more information, see [Generate embeddings](vector-search-how-to-generate-embeddings.md).
++ Pre-existing vector embeddings in your source documents. Azure AI Search doesn't generate vectors in the generally available version of the Azure SDKs and REST APIs. We recommend [Azure OpenAI embedding models](/azure/ai-services/openai/concepts/models#embeddings-models) but you can use any model for vectorization. For more information, see [Generate embeddings](vector-search-how-to-generate-embeddings.md). + You should know the dimensions limit of the model used to create the embeddings and how similarity is computed. In Azure OpenAI, for **text-embedding-ada-002**, the length of the numerical vector is 1536. Similarity is computed using `cosine`.
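Putting the prerequisites together, a minimal vector store schema for the **2023-11-01** version might look like the following sketch. The index and field names are placeholders, the `dimensions` value matches text-embedding-ada-002, and the vector field is wired to an HNSW configuration through a profile.

```http
PUT https://{{service-name}}.search.windows.net/indexes/hotels-vector-index?api-version=2023-11-01
Content-Type: application/json
api-key: {{admin-api-key}}

{
  "name": "hotels-vector-index",
  "fields": [
    { "name": "HotelId", "type": "Edm.String", "key": true, "filterable": true },
    { "name": "Description", "type": "Edm.String", "searchable": true },
    {
      "name": "DescriptionVector",
      "type": "Collection(Edm.Single)",
      "searchable": true,
      "dimensions": 1536,
      "vectorSearchProfile": "my-vector-profile"
    }
  ],
  "vectorSearch": {
    "algorithms": [
      { "name": "my-hnsw", "kind": "hnsw" }
    ],
    "profiles": [
      { "name": "my-vector-profile", "algorithm": "my-hnsw" }
    ]
  }
}
```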
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
Be sure to the **JSON view** and formulate the vector query in JSON. The search
+ Use the [**@azure/search-documents 12.0.0-beta.4**](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.4) package for vector scenarios.
-+ See the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript) GitHub repository for JavaScript code samples.
++ See the [azure-search-vector](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript/JavaScriptVectorDemo) GitHub repository for JavaScript code samples.
Multiple sets are created if the query targets multiple vector fields, or if the
## Next steps
-As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet) or [JavaScript](https://github.com/Azure/cazure-search-vector-samplesr/tree/main/demo-javascript).
+As a next step, we recommend reviewing the demo code for [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet) or [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript/JavaScriptVectorDemo).
search Vector Search Index Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-index-size.md
Title: Vector storage limits
+ Title: Vector index limits
description: Explanation of the factors affecting the size of a vector index.
- ignite-2023 Previously updated : 11/16/2023 Last updated : 01/30/2024
-# Vector storage limit
+# Vector index size limits
-When you index documents with vector fields, Azure AI Search constructs internal vector indexes using the algorithm parameters that you specified for the field. Because Azure AI Search imposes limits on vector storage, it's important that you know how to retrieve metrics about the vector index size, and how to estimate the vector storage requirements for your use case.
+When you index documents with vector fields, Azure AI Search constructs internal vector indexes using the algorithm parameters that you specified for the field. Because Azure AI Search imposes limits on vector index size, it's important that you know how to retrieve metrics about the vector index size, and how to estimate the vector index size requirements for your use case.
## Key points about vector size limits The size of vector indexes is measured in bytes. The size constraints are based on memory reserved for vector search, but also have implications for storage at the service level. Size constraints vary by service tier (or SKU).
-The service enforces a vector storage quota **based on the number of partitions** in your search service, where the quota per partition varies by tier and also by service creation date (see [Vector storage](search-limits-quotas-capacity.md#vector-storage-limits) in service limits).
+The service enforces a vector index size quota **based on the number of partitions** in your search service, where the quota per partition varies by tier and also by service creation date (see [Vector index size](search-limits-quotas-capacity.md#vector-index-size-limits) in service limits).
-Each extra partition that you add to your service increases the available vector storage quota. This quota is a hard limit to ensure your service remains healthy. It also means that if vector size exceeds this limit, any further indexing requests result in failure. You can resume indexing once you free up available quota by either deleting some vector documents or by scaling up in partitions.
+Each extra partition that you add to your service increases the available vector index size quota. This quota is a hard limit to ensure your service remains healthy. It also means that if vector size exceeds this limit, any further indexing requests result in failure. You can resume indexing once you free up available quota by either deleting some vector documents or by scaling up in partitions.
The following table shows vector quotas by partition, and by service if all partitions are in use. This table is for newer search services created *after July 1, 2023*. For more information, including limits for older search services and also limits on the approximate number of embeddings per partition, see [Search service limits](search-limits-quotas-capacity.md).
To obtain the **vector index size**, multiply this **raw_size** by the **algorit
Disk storage overhead of vector data is roughly three times the size of vector index size.
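As a rough worked example, assuming one million 1,536-dimension embeddings stored as 4-byte floats and a hypothetical 10 percent algorithm overhead:

```text
raw_size           = 1,000,000 vectors × 1,536 dimensions × 4 bytes  ≈ 6.1 GB
vector index size  ≈ 6.1 GB × 1.10 (algorithm overhead)              ≈ 6.8 GB
disk storage       ≈ 3 × vector index size                           ≈ 20 GB
```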
-### Storage vs. vector storage quotas
+### Storage vs. vector index size quotas
-Service storage and vector storage quotas aren't separate quotas. Vector indexes contribute to the [storage quota for the search service](search-limits-quotas-capacity.md#service-limits) as a whole. For example, if your storage quota is exhausted but there's remaining vector quota, you can't index any more documents, regardless if they're vector documents, until you scale up in partitions to increase storage quota or delete documents (either text or vector) to reduce storage usage. Similarly, if vector quota is exhausted but there's remaining storage quota, further indexing attempts fail until vector quota is freed, either by deleting some vector documents or by scaling up in partitions.
+Service storage and vector index size quotas aren't separate quotas. Vector indexes contribute to the [storage quota for the search service](search-limits-quotas-capacity.md#service-limits) as a whole. For example, if your storage quota is exhausted but there's remaining vector quota, you can't index any more documents, regardless of whether they're vector documents, until you scale up in partitions to increase storage quota or delete documents (either text or vector) to reduce storage usage. Similarly, if vector quota is exhausted but there's remaining storage quota, further indexing attempts fail until vector quota is freed, either by deleting some vector documents or by scaling up in partitions.
search Vector Search Integrated Vectorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-integrated-vectorization.md
Data chunking (Text Split skill) is free and available on all Azure AI services
+ Subdivide large documents into chunks, useful for vector and non-vector scenarios. For vectors, chunks help you meet the input constraints of embedding models. For non-vector scenarios, you might have a chat-style search app where GPT is assembling responses from indexed chunks. You can use vectorized or non-vectorized chunks for chat-style search.
-+ Build a vector store where all of the fields are vector fields, and the document ID (required for a search index) is the only string field. Query the vector index to retrieve document IDs, and then send the document's vector fields to another model.
++ Build a vector store where all of the fields are vector fields, and the document ID (required for a search index) is the only string field. Query the vector store to retrieve document IDs, and then send the document's vector fields to another model. + Combine vector and text fields for hybrid search, with or without semantic ranking. Integrated vectorization simplifies all of the [scenarios supported by vector search](vector-search-overview.md#what-scenarios-can-vector-search-support).
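For reference, chunking in a skillset is typically handled by the Text Split skill. The following is a minimal sketch, with assumed context and source paths; `maximumPageLength` is measured in characters.

```json
{
  "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
  "description": "Chunk documents into pages before vectorization",
  "context": "/document",
  "textSplitMode": "pages",
  "maximumPageLength": 2000,
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "textItems", "targetName": "pages" }
  ]
}
```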
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
- ignite-2023 Previously updated : 12/05/2023 Last updated : 01/29/2024
-# Vector search in Azure AI Search
+# Vector stores and vector search in Azure AI Search
-Vector search is an approach in information retrieval that uses numeric representations of content for search scenarios. Because the content is numeric rather than plain text, the search engine matches on vectors that are the most similar to the query, with no requirement for matching on exact terms.
+Vector search is an approach in information retrieval that stores numeric representations of content for search scenarios. Because the content is numeric rather than plain text, the search engine matches on vectors that are the most similar to the query, with no requirement for matching on exact terms.
This article is a high-level introduction to vector support in Azure AI Search. It also explains integration with other Azure services and covers [terminology and concepts](#vector-search-concepts) related to vector search development. We recommend this article for background, but if you'd rather get started, follow these steps: > [!div class="checklist"]
-> + [Generate vector embeddings](vector-search-how-to-generate-embeddings.md) before you start, or try out [integrated vectorization (preview)](vector-search-integrated-vectorization.md).
-> + [Add vector fields to an index](vector-search-how-to-create-index.md).
-> + [Load vector data](search-what-is-data-import.md) into an index using push or pull methodologies.
-> + [Query vector data](vector-search-how-to-query.md) using the Azure portal, REST APIs, or Azure SDK packages.
+> + [Provide embeddings](vector-search-how-to-generate-embeddings.md) or [generate embeddings (preview)](vector-search-integrated-vectorization.md)
+> + [Create a vector store](vector-search-how-to-create-index.md)
+> + [Run vector queries](vector-search-how-to-query.md)
You could also begin with the [vector quickstart](search-get-started-vector.md) or the [code samples on GitHub](https://github.com/Azure/azure-search-vector-samples).
-Vector search is in the Azure portal and the Azure SDKs for [.NET](https://www.nuget.org/packages/Azure.Search.Documents), [Python](https://pypi.org/project/azure-search-documents), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.2).
+## How vector search works in Azure AI Search
-## What's vector search in Azure AI Search?
-
-Vector search is a new capability for indexing, storing, and retrieving vector embeddings from a search index. You can use it to power similarity search, multi-modal search, recommendations engines, or applications implementing the [Retrieval Augmented Generation (RAG) architecture](https://aka.ms/what-is-rag).
+Vector support includes indexing, storing, and querying of vector embeddings from a search index.
The following diagram shows the indexing and query workflows for vector search. :::image type="content" source="media/vector-search-overview/vector-search-architecture-diagram-3.svg" alt-text="Architecture of vector search workflow." border="false" lightbox="media/vector-search-overview/vector-search-architecture-diagram-3-high-res.png":::
-On the indexing side, Azure AI Search takes vector embeddings and uses a [nearest neighbors algorithm](vector-search-ranking.md) to co-locate similar vectors together in the search index (vectors about popular movies are closer than vectors about popular dog breeds).
+On the indexing side, Azure AI Search takes vector embeddings and uses a [nearest neighbors algorithm](vector-search-ranking.md) to place similar vectors close together in an index. Internally, it creates vector indexes for each vector field.
-How you get embeddings from your source content depends on your approach and whether you can use preview features. You can vectorize or generate embeddings using models from OpenAI, Azure OpenAI, and any number of providers, over a wide range of source content including text, images, and other content types supported by the models. You can then push pre-vectorized content to [vector fields](vector-search-how-to-create-index.md) in a search index. That's the generally available approach. If you can use preview features, Azure AI Search provides [integrated data chunking and vectorization](vector-search-integrated-vectorization.md) in an indexer pipeline. You still provide the resources (endpoints and connection information), but Azure AI Search makes all of the calls and handles the transitions.
+How you get embeddings from your source content into Azure AI Search depends on your approach and whether you can use preview features. You can vectorize or generate embeddings as a preliminary step using models from OpenAI, Azure OpenAI, and any number of providers, over a wide range of source content including text, images, and other content types supported by the models. You can then push prevectorized content to [vector fields](vector-search-how-to-create-index.md) in a vector store. That's the generally available approach. If you can use preview features, Azure AI Search offers [integrated data chunking and vectorization](vector-search-integrated-vectorization.md) in an indexer pipeline. You still provide the resources (endpoints and connection information to Azure OpenAI), but Azure AI Search makes all of the calls and handles the transitions.
-On the query side, in your client application, collect the query input from a user. You can then add an encoding step that converts the input into a vector, and then send the vector query to your index on Azure AI Search for a similarity search. As with indexing, you can deploy the [integrated vectorization (preview)](vector-search-integrated-vectorization.md) to convert text inputs to a vector. For either approach, Azure AI Search returns documents with the requested `k` nearest neighbors (kNN) in the results.
+On the query side, in your client application, you collect the query input from a user, usually through a prompt workflow. You can then add an encoding step that converts the input into a vector, and then send the vector query to your index on Azure AI Search for a similarity search. As with indexing, you can deploy the [integrated vectorization (preview)](vector-search-integrated-vectorization.md) to convert the question into a vector. For either approach, Azure AI Search returns documents with the requested `k` nearest neighbors (kNN) in the results.
-Azure AI Search supports [hybrid scenarios](hybrid-search-overview.md). You can index vector data as fields in documents alongside alphanumeric content. Vector queries can be issued singly or in combination with filters and other query types, including term queries and semantic ranking in the same search request.
+Azure AI Search supports [hybrid scenarios](hybrid-search-overview.md) that run vector and keyword search in parallel, returning a unified result set that often provides better results than just vector or keyword search alone. For hybrid, vector and nonvector content is ingested into the same index, for queries that run side by side.
## Availability and pricing
Vector search is available as part of all Azure AI Search tiers in all regions a
Newer services created after July 1, 2023 support [higher quotas for vector indexes](vector-search-index-size.md).
+Vector search is available in:
+
++ Azure portal using the [Import and vectorize data wizard](search-get-started-portal-import-vectors.md)
++ Azure REST APIs, [version 2023-11-01](/rest/api/searchservice/operation-groups)
++ Azure SDKs for [.NET](https://www.nuget.org/packages/Azure.Search.Documents), [Python](https://pypi.org/project/azure-search-documents), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents/v/12.0.0-beta.2)
++ Other Azure offerings such as Azure AI Studio and Azure OpenAI Studio.
+
> [!NOTE]
> Some older search services created before January 1, 2019 are deployed on infrastructure that doesn't support vector workloads. If you try to add a vector field to a schema and get an error, it's a result of outdated services. In this situation, you must create a new search service to try out the vector feature.
Newer services created after July 1, 2023 support [higher quotas for vector inde
Scenarios for vector search include:
-+ **Vector search for text**. Encode text using embedding models such as OpenAI embeddings or open source models such as SBERT, and retrieve documents with queries that are also encoded as vectors.
++ **Vector database**. Azure AI Search stores the data that you query over. Use it as a pure vector store any time you need long-term memory or a knowledge base, or grounding data for [Retrieval Augmented Generation (RAG) architecture](https://aka.ms/what-is-rag), or any app that uses vectors.
-+ **Vector search across different content types (multimodal)**. Encode images and text using multimodal embeddings (for example, with [OpenAI CLIP](https://github.com/openai/CLIP) or [GPT-4 Turbo with Vision](/azure/ai-services/openai/whats-new#gpt-4-turbo-with-vision-now-available) in Azure OpenAI) and query an embedding space composed of vectors from both content types.
++ **Similarity search**. Encode text using embedding models such as OpenAI embeddings or open source models such as SBERT, and retrieve documents with queries that are also encoded as vectors.
-+ **Multilingual search**. Use a multilingual embeddings model to represent your document in multiple languages in a single vector space to find documents regardless of the language they are in.
++ **Search across different content types (multimodal)**. Encode images and text using multimodal embeddings (for example, with [OpenAI CLIP](https://github.com/openai/CLIP) or [GPT-4 Turbo with Vision](/azure/ai-services/openai/whats-new#gpt-4-turbo-with-vision-now-available) in Azure OpenAI) and query an embedding space composed of vectors from both content types.
-+ [**Hybrid search**](hybrid-search-overview.md). Vector search is implemented at the field level, which means you can build queries that include both vector fields and searchable text fields. The queries execute in parallel and the results are merged into a single response. Optionally, add [semantic ranking](semantic-search-overview.md) for even more accuracy with L2 reranking using the same language models that power Bing.
++ [**Hybrid search**](hybrid-search-overview.md). In Azure AI Search, hybrid search refers to vector and keyword query execution from the same request. Vector support is implemented at the field level, with an index containing both vector fields and searchable text fields. The queries execute in parallel and the results are merged into a single response. Optionally, add [semantic ranking](semantic-search-overview.md) for more accuracy with L2 reranking using the same language models that power Bing.
-+ **Filtered vector search**. A query request can include a vector query and a [filter expression](search-filters.md). Filters apply to text and numeric fields, and are useful for metadata filters, and including or excluding search documents based on filter criteria. Although a vector field isn't filterable itself, you can set up a filterable text or numeric field. The search engine can process the filter before or after the vector query executes.
++ **Multilingual search**. Providing a search experience in the user's own language is possible through embedding models and chat models trained in multiple languages. If you need more control over translation, you can supplement with the [multi-language capabilities](search-language-support.md) that Azure AI Search supports for nonvector content, in hybrid search scenarios.
-+ **Vector database**. Use Azure AI Search as a vector store to serve as long-term memory or an external knowledge base for Large Language Models (LLMs), or other applications. For example, you can use Azure AI Search as a [*vector index* in an Azure Machine Learning prompt flow](/azure/machine-learning/concept-vector-stores) for Retrieval Augmented Generation (RAG) applications.
++ **Filtered vector search**. A query request can include a vector query and a [filter expression](search-filters.md). Filters apply to text and numeric fields, and are useful for metadata filters, and including or excluding search results based on filter criteria. Although a vector field isn't filterable itself, you can set up a filterable text or numeric field. The search engine can process the filter before or after the vector query executes. ## Azure integration and related services
-You can use other Azure services to provide embeddings and data storage.
-
-+ Azure OpenAI provides embedding models. Demos and samples target the [text-embedding-ada-002](/azure/ai-services/openai/concepts/models#embeddings-models) and other models. We recommend Azure OpenAI for generating embeddings for text.
-
-+ [Image Retrieval Vectorize Image API(Preview)](/azure/ai-services/computer-vision/how-to/image-retrieval#call-the-vectorize-image-api) supports vectorization of image content. We recommend this API for generating embeddings for images.
-
-+ Azure AI Search can automatically index vector data from two data sources: [Azure blob indexers](search-howto-indexing-azure-blob-storage.md) and [Azure Cosmos DB for NoSQL indexers](search-howto-index-cosmosdb.md). For more information, see [Add vector fields to a search index.](vector-search-how-to-create-index.md)
+Azure AI Search is deeply integrated across the Azure AI platform. The following table lists several integrations that are useful in vector workloads.
-+ [LangChain](https://docs.langchain.com/docs/) is a framework for developing applications powered by language models. Use the [Azure AI Search vector store integration](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/azuresearch) to simplify the creation of applications using LLMs with Azure AI Search as your vector datastore.
+| Product | Integration |
+||-|
+| Azure OpenAI Studio | In the chat with your data playground, **Add your own data** uses Azure AI Search for grounding data and conversational search. This is the easiest and fastest approach for chatting with your data. |
+| Azure OpenAI | Azure OpenAI provides embedding models and chat models. Demos and samples target the [text-embedding-ada-002](/azure/ai-services/openai/concepts/models#embeddings-models). We recommend Azure OpenAI for generating embeddings for text. |
+| Azure AI Services | [Image Retrieval Vectorize Image API(Preview)](/azure/ai-services/computer-vision/how-to/image-retrieval#call-the-vectorize-image-api) supports vectorization of image content. We recommend this API for generating embeddings for images. |
+| Azure data platforms: Azure Blob Storage, Azure Cosmos DB | You can use [indexers](search-indexer-overview.md) to automate data ingestion, and then use [integrated vectorization (preview)](vector-search-integrated-vectorization.md) to generate embeddings. Azure AI Search can automatically index vector data from two data sources: [Azure blob indexers](search-howto-indexing-azure-blob-storage.md) and [Azure Cosmos DB for NoSQL indexers](search-howto-index-cosmosdb.md). For more information, see [Add vector fields to a search index](vector-search-how-to-create-index.md). |
-+ [Semantic kernel](https://github.com/microsoft/semantic-kernel/blob/main/README.md) is a lightweight SDK enabling integration of AI Large Language Models (LLMs) with conventional programming languages. It's useful for chunking large documents in a larger workflow that sends inputs to embedding models.
+Azure AI Search is also commonly used in open-source frameworks like [LangChain](https://js.langchain.com/docs/integrations/vectorstores/azure_aisearch).
## Vector search concepts
For example, documents that talk about different species of dogs would be cluste
### Nearest neighbors search
-In vector search, the search engine searches through the vectors within the embedding space to identify those that are near to the query vector. This technique is called [*nearest neighbor search*](https://en.wikipedia.org/wiki/Nearest_neighbor_search). Nearest neighbors help quantify the similarity between items. A high degree of vector similarity indicates that the original data was similar too. To facilitate fast nearest neighbor search, the search engine will perform optimizations or employ data structures or data partitioning to reduce the search space. Each vector search algorithm will have different approaches to this problem, trading off different characteristics such as latency, throughput, recall, and memory. To compute similarity, similarity metrics provide the mechanism for computing this distance.
+In vector search, the search engine scans vectors within the embedding space to identify vectors that are closest to the query vector. This technique is called [*nearest neighbor search*](https://en.wikipedia.org/wiki/Nearest_neighbor_search). Nearest neighbors help quantify the similarity between items. A high degree of vector similarity indicates that the original data was similar too. To facilitate fast nearest neighbor search, the search engine performs optimizations, or employs data structures and data partitioning, to reduce the search space. Each vector search algorithm solves the nearest neighbor problem in a different way, trading off characteristics such as latency, throughput, recall, and memory. Similarity metrics provide the mechanism for computing the distance between vectors.
Azure AI Search currently supports the following algorithms:
-+ Hierarchical Navigable Small World (HNSW): HNSW is a leading ANN algorithm optimized for high-recall, low-latency applications where data distribution is unknown or can change frequently. It organizes high-dimensional data points into a hierarchical graph structure that enables fast and scalable similarity search while allowing a tunable a trade-off between search accuracy and computational cost. Because the algorithm requires all data points to reside in memory for fast random access, this algorithm consumes [vector storage](vector-search-index-size.md) quota.
++ Hierarchical Navigable Small World (HNSW): HNSW is a leading ANN algorithm optimized for high-recall, low-latency applications where data distribution is unknown or can change frequently. It organizes high-dimensional data points into a hierarchical graph structure that enables fast and scalable similarity search while allowing a tunable trade-off between search accuracy and computational cost. Because the algorithm requires all data points to reside in memory for fast random access, this algorithm consumes [vector index size](vector-search-index-size.md) quota.
-+ Exhaustive K-nearest neighbors (KNN): Calculates the distances between the query vector and all data points. It's computationally intensive, so it works best for smaller datasets. Because the algorithm doesn't require fast random access of data points, this algorithm doesn't consume vector storage quota. However, this algorithm will provide the global set of nearest neighbors.
++ Exhaustive K-nearest neighbors (KNN): Calculates the distances between the query vector and all data points. It's computationally intensive, so it works best for smaller datasets. Because the algorithm doesn't require fast random access of data points, this algorithm doesn't consume vector index size quota. However, this algorithm provides the global set of nearest neighbors. Within an index definition, you can specify one or more algorithms, and then for each vector field specify which algorithm to use:
-+ [Create a vector index](vector-search-how-to-create-index.md) to specify an algorithm in the index and on fields.
++ [Create a vector store](vector-search-how-to-create-index.md) to specify an algorithm in the index and on fields. + For exhaustive KNN, use [2023-11-01](/rest/api/searchservice/indexes/create-or-update), [2023-10-01-Preview](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true), or Azure SDK beta libraries that target either REST API version. Algorithm parameters that are used to initialize the index during index creation are immutable and can't be changed after the index is built. However, parameters that affect the query-time characteristics (`efSearch`) can be modified.
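For illustration only, here's a hedged sketch of what that index-level configuration can look like when you call the 2023-11-01 REST API directly. The service name, index name, field names, and parameter values are placeholders, and the request body is a sketch rather than a reference schema; see [Create a vector store](vector-search-how-to-create-index.md) for the authoritative shape.

```bash
# Sketch only: registers an HNSW and an exhaustive KNN configuration, then assigns
# HNSW to a vector field through a profile. Names and values are placeholders.
curl -X PUT "https://<your-search-service>.search.windows.net/indexes/demo-vector-index?api-version=2023-11-01" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-admin-api-key>" \
  -d '{
    "name": "demo-vector-index",
    "fields": [
      { "name": "id", "type": "Edm.String", "key": true },
      { "name": "contentVector", "type": "Collection(Edm.Single)", "searchable": true,
        "dimensions": 1536, "vectorSearchProfile": "hnsw-profile" }
    ],
    "vectorSearch": {
      "algorithms": [
        { "name": "hnsw-config", "kind": "hnsw", "hnswParameters": { "metric": "cosine", "efSearch": 500 } },
        { "name": "eknn-config", "kind": "exhaustiveKnn", "exhaustiveKnnParameters": { "metric": "cosine" } }
      ],
      "profiles": [
        { "name": "hnsw-profile", "algorithm": "hnsw-config" }
      ]
    }
  }'
```

In this sketch, `efSearch` is the query-time parameter mentioned above that can still be modified after the index is built; the other HNSW parameters are left at their defaults.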
-In addition, fields that specify HNSW algorithm also support exhaustive KNN search using the [query request](vector-search-how-to-query.md) parameter `"exhaustive": true`. The opposite isn't true however. If a field is indexed for `exhaustiveKnn`, you can't use HNSW in the query because the additional data structures that enable efficient search donΓÇÖt exist.
+In addition, fields that specify the HNSW algorithm also support exhaustive KNN search using the [query request](vector-search-how-to-query.md) parameter `"exhaustive": true`. The opposite isn't true however. If a field is indexed for `exhaustiveKnn`, you can't use HNSW in the query because the extra data structures that enable efficient search don't exist.
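As a hedged sketch of that query-time override (service, index, and field names are placeholders, and the vector is truncated; it must contain as many values as the field's `dimensions` setting):

```bash
# Sketch only: forces exhaustive KNN at query time on a field that was indexed with HNSW.
curl -X POST "https://<your-search-service>.search.windows.net/indexes/demo-vector-index/docs/search?api-version=2023-11-01" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-query-api-key>" \
  -d '{
    "select": "id",
    "vectorQueries": [
      {
        "kind": "vector",
        "vector": [0.012, -0.987, 0.105],
        "fields": "contentVector",
        "k": 5,
        "exhaustive": true
      }
    ]
  }'
```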
### Approximate Nearest Neighbors
Azure AI Search uses HNSW for its ANN algorithm.
## Next steps + [Try the quickstart](search-get-started-vector.md)
-+ [Learn more about vector indexing](vector-search-how-to-create-index.md)
++ [Learn more about vector stores](vector-search-how-to-create-index.md) + [Learn more about vector queries](vector-search-how-to-query.md) + [Azure Cognitive Search and LangChain: A Seamless Integration for Enhanced Vector Search Capabilities](https://techcommunity.microsoft.com/t5/azure-ai-services-blog/azure-cognitive-search-and-langchain-a-seamless-integration-for/ba-p/3901448)
search Vector Search Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-ranking.md
HNSW is recommended for most scenarios due to its efficiency when searching over
## How nearest neighbor search works
-Vector queries execute against an embedding space consisting of vectors generated from the same embedding model. Generally, the input value within a query request is fed into the same machine learning model that generated embeddings in the vector index. The output is a vector in the same embedding space. Since similar vectors are clustered close together, finding matches is equivalent to finding the vectors that are closest to the query vector, and returning the associated documents as the search result.
+Vector queries execute against an embedding space consisting of vectors generated from the same embedding model. Generally, the input value within a query request is fed into the same machine learning model that generated embeddings in the vector store. The output is a vector in the same embedding space. Since similar vectors are clustered close together, finding matches is equivalent to finding the vectors that are closest to the query vector, and returning the associated documents as the search result.
For example, if a query request is about hotels, the model maps the query into a vector that exists somewhere in the cluster of vectors representing documents about hotels. Identifying which vectors are the most similar to the query, based on a similarity metric, determines which documents are the most relevant.
-When vector fields are indexed for exhaustive KNN, the query executes against "all neighbors". For fields indexed for HNSW, the search engine uses an HNSW graph to search over a subset of nodes within the vector index.
+When vector fields are indexed for exhaustive KNN, the query executes against "all neighbors". For fields indexed for HNSW, the search engine uses an HNSW graph to search over a subset of nodes within the vector store.
### Creating the HNSW graph
security Threat Modeling Tool Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authentication.md
The `<netMsmqBinding/>` element of the WCF configuration file below instructs WC
| **SDL Phase** | Build | | **Applicable Technologies** | .NET Framework 3 | | **Attributes** | Client Credential Type - None |
-| **References** | [MSDN](/previous-versions/msp-n-p/ff648500(v=pandp.10)), [Fortify](https://community.microfocus.com/t5/UFT-Discussions/UFT-API-Test-with-WCF-wsHttpBinding/m-p/600927) |
+| **References** | [MSDN](/previous-versions/msp-n-p/ff648500(v=pandp.10)), [Fortify](https://community.microfocus.com/devops-cloud/uft-one/f/discussions/326834/uft-api-test-with-wcf-wshttpbinding) |
| **Steps** | The absence of authentication means everyone is able to access this service. A service that does not authenticate its clients allows access to all users. Configure the application to authenticate against client credentials. This can be done by setting the message clientCredentialType to Windows or Certificate. | ### Example
The `<netMsmqBinding/>` element of the WCF configuration file below instructs WC
| **SDL Phase** | Build | | **Applicable Technologies** | Generic, .NET Framework 3 | | **Attributes** | Client Credential Type - None |
-| **References** | [MSDN](/previous-versions/msp-n-p/ff648500(v=pandp.10)), [Fortify](https://community.microfocus.com/t5/UFT-Discussions/UFT-API-Test-with-WCF-wsHttpBinding/m-p/600927) |
+| **References** | [MSDN](/previous-versions/msp-n-p/ff648500(v=pandp.10)), [Fortify](https://community.microfocus.com/devops-cloud/uft-one/f/discussions/326834/uft-api-test-with-wcf-wshttpbinding) |
| **Steps** | The absence of authentication means everyone is able to access this service. A service that does not authenticate its clients allows all users to access its functionality. Configure the application to authenticate against client credentials. This can be done by setting the transport clientCredentialType to Windows or Certificate. | ### Example
security Identity Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/identity-management-overview.md
ms.assetid: 5aa0a7ac-8f18-4ede-92a1-ae0dfe585e28
Previously updated : 12/05/2022 Last updated : 01/25/2024 # Customer intent: As an IT Pro or decision maker, I am trying to learn about identity management capabilities in Azure
By taking advantage of the security benefits of Microsoft Entra ID, you can:
-* Create and manage a single identity for each user across your hybrid enterprise, keeping users, groups, and devices in sync.
+* Create and manage a single identity for each user across your hybrid enterprise, keeping users, groups, and devices in sync.
* Provide SSO access to your applications, including thousands of pre-integrated SaaS apps. * Enable application access security by enforcing rules-based multifactor authentication for both on-premises and cloud applications. * Provision secure remote access to on-premises web applications through Microsoft Entra application proxy.
The article focuses on the following core Azure Identity management capabilities
## Single sign-on
-SSO means being able to access all the applications and resources that you need to do business, by signing in only once using a single user account. Once signed in, you can access all of the applications you need without being required to authenticate (for example, type a password) a second time.
+Single sign-on (SSO) means being able to access all the applications and resources that you need to do business, by signing in only once using a single user account. Once signed in, you can access all of the applications you need without being required to authenticate (for example, type a password) a second time.
Many organizations rely upon SaaS applications such as Microsoft 365, Box, and Salesforce for user productivity. Historically, IT staff needed to individually create and update user accounts in each SaaS application, and users had to remember a password for each SaaS application.
Learn more:
## Reverse proxy
-Microsoft Entra application proxy lets you publish on-premises applications, such as [SharePoint](https://support.office.com/article/What-is-SharePoint-97b915e6-651b-43b2-827d-fb25777f446f?ui=en-US&rs=en-US&ad=US) sites, [Outlook Web App](/Exchange/clients/outlook-on-the-web/outlook-on-the-web), and [IIS](https://www.iis.net/)-based apps inside your private network and provides secure access to users outside your network. Application Proxy provides remote access and SSO for many types of on-premises web applications with the thousands of SaaS applications that Microsoft Entra ID supports. Employees can sign in to your apps from home on their own devices and authenticate through this cloud-based proxy.
+Microsoft Entra application proxy lets you publish applications on a private network, such as [SharePoint](https://support.office.com/article/What-is-SharePoint-97b915e6-651b-43b2-827d-fb25777f446f?ui=en-US&rs=en-US&ad=US) sites, [Outlook Web App](/Exchange/clients/outlook-on-the-web/outlook-on-the-web), and [IIS](https://www.iis.net/)-based apps, and provides secure access to users outside your network. Application Proxy provides remote access and SSO for many types of on-premises web applications alongside the thousands of SaaS applications that Microsoft Entra ID supports. Employees can sign in to your apps from home on their own devices and authenticate through this cloud-based proxy.
Learn more:
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
We recommend the following actions to ensure your general security posture:
- **Ensure that your organization has extended detection and response (XDR) and security information and event management (SIEM) solutions in place**, such as [Microsoft Defender XDR for Endpoint](/microsoft-365/security/defender/microsoft-365-defender), [Microsoft Sentinel](../../sentinel/overview.md), and [Microsoft Defender for IoT](../../defender-for-iot/organizations/index.yml). -- **Review [MicrosoftΓÇÖs Enterprise access model](/security/privileged-access-workstations/privileged-access-access-model)**.
+- **Review Microsoft's Enterprise access model**
### Improve identity security posture
This section provides possible methods and steps to consider when building your
> [!IMPORTANT] > The exact steps required in your organization will depend on what persistence you've discovered in your investigation, and how confident you are that your investigation was complete and has discovered all possible entry and persistence methods. >
-> Ensure that any actions taken are performed from a trusted device, built from a [clean source](/security/privileged-access-workstations/privileged-access-access-model). For example, use a fresh, [privileged access workstation](/security/privileged-access-workstations/privileged-access-deployment).
+> Ensure that any actions taken are performed from a trusted device, built from a clean source. For example, use a fresh, privileged access workstation.
> The following sections include the following types of recommendations for remediating and retaining administrative control:
In addition to the recommendations listed earlier in this article, we also recom
|Activity |Description | ||| |**Rebuild affected systems** | Rebuild systems that were identified as compromised by the attacker during your investigation. |
-|**Remove unnecessary admin users** | Remove unnecessary members from Domain Admins, Backup Operators, and Enterprise Admin groups. For more information, see [Securing Privileged Access](/security/privileged-access-workstations/overview). |
+|**Remove unnecessary admin users** | Remove unnecessary members from Domain Admins, Backup Operators, and Enterprise Admin groups. For more information, see Securing Privileged Access. |
|**Reset passwords to privileged accounts** | Reset passwords of all privileged accounts in the environment. <br><br>**Note**: Privileged accounts are not limited to built-in groups, but can also be groups that are delegated access to server administration, workstation administration, or other areas of your environment. | |**Reset the krbtgt account** | Reset the **krbtgt** account twice using the [New-KrbtgtKeys](https://github.com/microsoft/New-KrbtgtKeys.ps1/blob/master/New-KrbtgtKeys.ps1) script. <br><br>**Note**: If you are using Read-Only Domain Controllers, you will need to run the script separately for Read-Write Domain Controllers and for Read-Only Domain Controllers. | |**Schedule a system restart** | After you validate that no persistence mechanisms created by the attacker exist or remain on your system, schedule a system restart to assist with removing memory-resident malware. |
sentinel Connect Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-aws.md
Title: Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data description: Use the AWS connector to delegate Microsoft Sentinel access to AWS resource logs, creating a trust relationship between Amazon Web Services and Microsoft Sentinel. - Previously updated : 12/12/2022 + Last updated : 01/31/2024 # Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data
This graphic and the following text show how the parts of this connector solutio
- To connect to the SQS queue and the S3 bucket, Microsoft Sentinel uses AWS credentials and connection information embedded in the AWS S3 connector's configuration. The AWS credentials are configured with a role and a permissions policy giving them access to those resources. Similarly, the Microsoft Sentinel workspace ID is embedded in the AWS configuration, so there is in effect two-way authentication.
+ For customers in **Azure Government clouds**, Microsoft Sentinel uses a federated web identity provider (Microsoft Entra ID) for authenticating with AWS through OpenID Connect (OIDC), and assuming an AWS IAM role.
+ ## Connect the S3 connector - **In your AWS environment:**
This graphic and the following text show how the parts of this connector solutio
- **In Microsoft Sentinel:**
- - Enable and configure the **AWS S3 Connector** in the Microsoft Sentinel portal. See the instructions below.
+ - Enable and configure the **AWS S3 Connector** in the Microsoft Sentinel portal. [See the instructions below](#add-the-aws-role-and-queue-information-to-the-s3-data-connector).
## Automatic setup
The script takes the following actions:
- Configures any necessary IAM permissions policies and applies them to the IAM role created above.
+For Azure Government clouds, a specialized script first creates an OIDC identity provider, to which it assigns the IAM assumed role. It then performs all the other steps above.
+ ### Prerequisites for automatic setup - You must have PowerShell and the AWS CLI on your machine.
To run the script to set up the connector, use the following steps:
If you don't see the connector, install the Amazon Web Services solution from the **Content Hub** in Microsoft Sentinel. 1. In the details pane for the connector, select **Open connector page**.+ 1. In the **Configuration** section, under **1. Set up your AWS environment**, expand **Setup with PowerShell script (recommended)**. 1. Follow the on-screen instructions to download and extract the [AWS S3 Setup Script](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-S3/ConfigAwsS3DataConnectorScripts.zip?raw=true) (link downloads a zip file containing the main setup script and helper scripts) from the connector page.
+ > [!NOTE]
+ > For ingesting AWS logs into an **Azure Government cloud**, download and extract [this specialized AWS S3 Gov Setup Script](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-S3/ConfigAwsS3DataConnectorScriptsGov.zip?raw=true) instead.
+ 1. Before running the script, run the `aws configure` command from your PowerShell command line, and enter the relevant information as prompted. See [AWS Command Line Interface | Configuration basics](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) (from AWS documentation) for details. 1. Now run the script. Copy the command from the connector page (under "Run script to set up the environment") and paste it in your command line.
Microsoft recommends using the automatic setup script to deploy this connector.
### Prepare your AWS resources -- Create an **S3 bucket** to which you will ship the logs from your AWS services - VPC, GuardDuty, CloudTrail, or CloudWatch.
+1. Create an **S3 bucket** to which you will ship the logs from your AWS services - VPC, GuardDuty, CloudTrail, or CloudWatch.
- See the [instructions to create an S3 storage bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) in the AWS documentation. -- Create a standard **Simple Queue Service (SQS) message queue** to which the S3 bucket will publish notifications.
+1. Create a standard **Simple Queue Service (SQS) message queue** to which the S3 bucket will publish notifications.
- See the [instructions to create a standard Simple Queue Service (SQS) queue](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/creating-sqs-standard-queues.html) in the AWS documentation. -- Configure your S3 bucket to send notification messages to your SQS queue.
+1. Configure your S3 bucket to send notification messages to your SQS queue.
- See the [instructions to publish notifications to your SQS queue](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html) in the AWS documentation.
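If you'd rather script these three steps than use the AWS console, a minimal AWS CLI sketch might look like the following. The bucket name, queue name, region, and account ID are placeholders, and the queue's access policy must still allow S3 to send messages to it (covered in the linked AWS instructions).

```bash
# Sketch only: names, region, and account ID are placeholders.
aws s3api create-bucket --bucket sentinel-aws-logs --region us-east-1
aws sqs create-queue --queue-name sentinel-aws-logs-queue

# Publish "object created" notifications from the bucket to the SQS queue.
aws s3api put-bucket-notification-configuration \
  --bucket sentinel-aws-logs \
  --notification-configuration '{
    "QueueConfigurations": [
      {
        "Id": "sentinel-log-notifications",
        "QueueArn": "arn:aws:sqs:us-east-1:111111111111:sentinel-aws-logs-queue",
        "Events": ["s3:ObjectCreated:*"]
      }
    ]
  }'
```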
-### Create an AWS assumed role and grant access to the AWS Sentinel account
+### Install AWS data connector and prepare environment
1. In Microsoft Sentinel, select **Data connectors** from the navigation menu.
Microsoft recommends using the automatic setup script to deploy this connector.
1. Under **Configuration**, expand **Setup with PowerShell script (recommended)**, then copy the **External ID (Workspace ID)** to your clipboard.
-1. In a different browser window or tab, open the AWS console. Follow the [instructions in the AWS documentation for creating a role for an AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html).
-
- - For the account type, instead of **This account**, choose **Another AWS account**.
-
- - In the **Account ID** field, enter the number **197857026523** (you can copy and paste it from here). This number is **Microsoft Sentinel's service account ID for AWS**. It tells AWS that the account using this role is a Microsoft Sentinel user.
-
- - In the options, select **Require external ID** (*do not* select *Require MFA*). In the **External ID** field, paste your Microsoft Sentinel **Workspace ID** that you copied in the previous step. This identifies *your specific Microsoft Sentinel account* to AWS.
-
- - Assign the necessary permissions policies. These policies include:
- - `AmazonSQSReadOnlyAccess`
- - `AWSLambdaSQSQueueExecutionRole`
- - `AmazonS3ReadOnlyAccess`
- - `ROSAKMSProviderPolicy`
- - Additional policies for ingesting the different types of AWS service logs.
-
- For information on these policies, see the [AWS S3 connector permissions policies page](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-S3/AwsRequiredPolicies.md) in the Microsoft Sentinel GitHub repository.
+### Create an AWS assumed role and grant access to the AWS Sentinel account
- - Name the role with a meaningful name that includes a reference to Microsoft Sentinel. Example: "*MicrosoftSentinelRole*".
+The following instructions apply for public **Azure Commercial clouds** only. For granting access to AWS from Azure Government clouds, see [For Azure Government: Use identity federation](#for-azure-government-use-identity-federation).
+
+1. In a different browser window or tab, open the AWS console.
+
+1. Create an **IAM assumed role**. Follow these instructions in the AWS documentation:<br>[Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html).
+
+ | Parameter | Selection/Value | Comments |
+ | - | - | - |
+ | **Trusted entity type** | *AWS account* | Instead of default *AWS service*. |
+ | **Which account** | *Another AWS account*,<br>Account ID `197857026523` | Instead of the default *This account*,<br>Microsoft Sentinel's application service account.|
+ | **Options** | *Require external ID* | *Do not* select *Require MFA* |
+ | **External ID** | Your Microsoft Sentinel *Workspace ID*,<br>pasted from your clipboard. | This identifies *your specific Microsoft Sentinel account* to AWS. |
+ | **Permissions to assign** | <ul><li>`AmazonSQSReadOnlyAccess`<li>`AWSLambdaSQSQueueExecutionRole`<li>`AmazonS3ReadOnlyAccess`<li>`ROSAKMSProviderPolicy`<li>Additional policies for ingesting the different types of AWS service logs. | For information on these policies, see the [AWS S3 connector permissions policies page](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-S3/AwsRequiredPolicies.md) in the Microsoft Sentinel GitHub repository. |
+ | **Name** | Example: "*MicrosoftSentinelRole*". | Choose a meaningful name that includes a reference to Microsoft Sentinel. |
+
+1. Continue with [Add the AWS role and queue information to the S3 data connector](#add-the-aws-role-and-queue-information-to-the-s3-data-connector) below.
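If you prefer the AWS CLI to the console, a hedged sketch of the same role creation follows. The role name is a placeholder, `197857026523` is Microsoft Sentinel's service account ID from the table above, and the external ID must be your own workspace ID; repeat the `attach-role-policy` call for each policy listed in the table.

```bash
# Sketch only: role name and external ID (workspace ID) are placeholders.
aws iam create-role \
  --role-name MicrosoftSentinelRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::197857026523:root" },
        "Action": "sts:AssumeRole",
        "Condition": {
          "StringEquals": { "sts:ExternalId": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" }
        }
      }
    ]
  }'

# Attach the managed permissions policies listed in the table (repeat per policy).
aws iam attach-role-policy --role-name MicrosoftSentinelRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonSQSReadOnlyAccess
aws iam attach-role-policy --role-name MicrosoftSentinelRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```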
+
+#### For Azure Government: Use identity federation
+
+1. In a different browser window or tab, open the AWS console.
+
+1. Create a **web identity provider**. Follow these instructions in the AWS documentation:<br>[Creating OpenID Connect (OIDC) identity providers](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html).
+
+ | Parameter | Selection/Value | Comments |
+ | - | - | - |
+ | **Client ID** | Ignore this, you already have it. See **Audience** line below. | |
+ | **Provider type** | *OpenID Connect* | Instead of default *SAML*.|
+ | **Provider URL** | `https://sts.windows.net/cab8a31a-1906-4287-a0d8-4eef66b95f6e/` | |
+ | **Thumbprint** | `626d44e704d1ceabe3bf0d53397464ac8080142c` | If created in the IAM console, selecting **Get thumbprint** should give you this result. |
+ | **Audience** | `api://d4230588-5f84-4281-a9c7-2c15194b28f7` | |
+
+1. Create an **IAM assumed role**. Follow these instructions in the AWS documentation:<br>[Creating a role for web identity or OpenID Connect Federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_oidc.html#idp_oidc_Create).
+
+ | Parameter | Selection/Value | Comments |
+ | - | - | - |
+ | **Trusted entity type** | *Web identity* | Instead of default *AWS service*. |
+ | **Identity provider** | `sts.windows.net/cab8a31a-1906-4287-a0d8-4eef66b95f6e/` | The provider you created in the previous step. |
+ | **Audience** | `api://d4230588-5f84-4281-a9c7-2c15194b28f7` | The audience you defined for the identity provider in the previous step. |
+ | **Permissions to assign** | <ul><li>`AmazonSQSReadOnlyAccess`<li>`AWSLambdaSQSQueueExecutionRole`<li>`AmazonS3ReadOnlyAccess`<li>`ROSAKMSProviderPolicy`<li>Additional policies for ingesting the different types of AWS service logs. | For information on these policies, see the [AWS S3 connector permissions policies page](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-S3/AwsRequiredPoliciesForGov.md) for Government, in the Microsoft Sentinel GitHub repository. |
+ | **Name** | Example: "*MicrosoftSentinelRole*". | Choose a meaningful name that includes a reference to Microsoft Sentinel. |
++
+1. Edit the new role's trust policy and add another condition:<br>`"sts:RoleSessionName": "MicrosoftSentinel_{WORKSPACE_ID}"`
+
+ The finished trust policy should look like this:
+
+ ```json
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "Federated": "arn:aws:iam::XXXXXXXXXXXX:oidc-provider/sts.windows.net/cab8a31a-1906-4287-a0d8-4eef66b95f6e/"
+ },
+ "Action": "sts:AssumeRoleWithWebIdentity",
+ "Condition": {
+ "StringEquals": {
+ "sts.windows.net/cab8a31a-1906-4287-a0d8-4eef66b95f6e/:aud": "api://d4230588-5f84-4281-a9c7-2c15194b28f7",
+ "sts:RoleSessionName": "MicrosoftSentinel_XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
+ }
+ }
+ }
+ ]
+ }
+ ```
+
+ - `XXXXXXXXXXXX` is your AWS Account ID.
+ - `XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX` is your Microsoft Sentinel workspace ID.
+
+ Update (save) the policy when you're done editing.
### Add the AWS role and queue information to the S3 data connector
Setting up this connector has two steps:
> [!IMPORTANT] > As of December 1, 2020, the **AwsRequestId** field has been replaced by the **AwsRequestId_** field (note the added underscore). The data in the old **AwsRequestId** field will be preserved through the end of the customer's specified data retention period. + ## Next steps
In this document, you learned how to connect to AWS resources to ingest their lo
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md). - [Use workbooks](monitor-your-data.md) to monitor your data.+
sentinel Connect Google Cloud Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-google-cloud-platform.md
Title: Stream Google Cloud Platform into Microsoft Sentinel
-description: This article describes how to stream audit log data from the Google Cloud Platform (GCP) into Microsoft Sentinel.
-
+ Title: Ingest Google Cloud Platform log data into Microsoft Sentinel
+description: This article describes how to ingest service log data from the Google Cloud Platform (GCP) into Microsoft Sentinel.
++ Previously updated : 03/23/2023- Last updated : 01/17/2024 #Customer intent: As a security operator, I want to ingest GCP audit log data into Microsoft Sentinel to get full security coverage and analyze and detect attacks in my multicloud environment.
-# Stream Google Cloud Platform logs into Microsoft Sentinel
+# Ingest Google Cloud Platform log data into Microsoft Sentinel
Organizations are increasingly moving to multicloud architectures, whether by design or due to ongoing requirements. A growing number of these organizations use applications and store data on multiple public clouds, including the Google Cloud Platform (GCP). This article describes how to ingest GCP data into Microsoft Sentinel to get full security coverage and analyze and detect attacks in your multicloud environment.
-With the **GCP Pub/Sub Audit Logs** connector, based on our [Codeless Connector Platform](create-codeless-connector.md?tabs=deploy-via-arm-template%2Cconnect-via-the-azure-portal) (CCP), you can ingest logs from your GCP environment using the GCP [Pub/Sub capability](https://cloud.google.com/pubsub/docs/overview).
+With the **GCP Pub/Sub** connector, based on our [Codeless Connector Platform](create-codeless-connector.md?tabs=deploy-via-arm-template%2Cconnect-via-the-azure-portal) (CCP), you can ingest logs from your GCP environment using the GCP [Pub/Sub capability](https://cloud.google.com/pubsub/docs/overview).
> [!IMPORTANT] > The GCP Pub/Sub Audit Logs connector is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Google's Cloud Audit Logs records a trail that practitioners can use to monitor access and detect potential threats across GCP resources.
+Google's Cloud Audit Logs records an audit trail that analysts can use to monitor access and detect potential threats across GCP resources.
## Prerequisites
-Before you begin, verify that you have:
+Before you begin, verify that you have the following:
-- The Microsoft Sentinel solution enabled. -- A defined Microsoft Sentinel workspace.-- A GCP environment collecting GCP audit logs. -- The Microsoft Sentinel Contributor role.-- Access to edit and create resources in the GCP project.
+- The Microsoft Sentinel solution is enabled.
+- A defined Microsoft Sentinel workspace exists.
+- A GCP environment (a **project**) exists and is collecting GCP audit logs.
+- Your Azure user has the Microsoft Sentinel Contributor role.
+- Your GCP user has access to edit and create resources in the GCP project.
+- The GCP Identity and Access Management (IAM) API and the GCP Cloud Resource Manager API are both enabled.
## Set up GCP environment
-You can set up the GCP environment in one of two ways:
+There are two things you need to set up in your GCP environment:
-- [Create GCP resources via the Terraform API](#create-gcp-resources-via-the-terraform-api): Terraform provides an API for the Identity and Access Management (IAM) that creates the resources: The topic, a subscription for the topic, a workload identity pool, a workload identity provider, a service account, and a role. -- [Set up GCP environment manually](#set-up-the-gcp-environment-manually-via-the-gcp-portal) via the GCP console.
+1. [Set up Microsoft Sentinel authentication in GCP](#gcp-authentication-setup) by creating the following resources in the GCP IAM service:
+ - Workload identity pool
+ - Workload identity provider
+ - Service account
+ - Role
-### Create GCP resources via the Terraform API
+1. [Set up log collection in GCP and ingestion into Microsoft Sentinel](#gcp-audit-logs-setup) by creating the following resources in the GCP Pub/Sub service:
+ - Topic
+ - Subscription for the topic
+
+You can set up the environment in one of two ways:
+
+- [Create GCP resources via the Terraform API](?tabs=terraform): Terraform provides APIs for resource creation and for Identity and Access Management (see [Prerequisites](#prerequisites)). Microsoft Sentinel provides Terraform scripts that issue the necessary commands to the APIs.
+- [Set up GCP environment manually](?tabs=manual), creating the resources yourself in the GCP console.
+
+### GCP Authentication Setup
+
+# [Terraform API Setup](#tab/terraform)
1. Open [GCP Cloud Shell](https://cloud.google.com/shell/).
-1. Open the editor and type:
-
- ```
+
+1. Select the **project** you want to work with, by typing the following command in the editor:
+ ```bash
gcloud config set project {projectId} ```
-1. In the next window, select **Authorize**.
-1. Copy the Terraform [GCPInitialAuthenticationSetup script](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/GCP/Terraform/sentinel_resources_creation/GCPInitialAuthenticationSetup), paste the script to a new file, and save it as a .tf file.
-1. In the editor, type:
+1. Copy the Terraform authentication script provided by Microsoft Sentinel from the Sentinel GitHub repository into your GCP Cloud Shell environment.
+
+ 1. Open the Terraform [GCPInitialAuthenticationSetup script](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/GCP/Terraform/sentinel_resources_creation/GCPInitialAuthenticationSetup/GCPInitialAuthenticationSetup.tf) file and copy its contents.
+
+ > [!NOTE]
+ > For ingesting GCP data into an **Azure Government cloud**, [use this authentication setup script instead](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/GCP/Terraform/sentinel_resources_creation_gov/GCPInitialAuthenticationSetupGov/GCPInitialAuthenticationSetupGov.tf).
+
+ 1. Create a directory in your Cloud Shell environment, enter it, and create a new blank file.
+ ```bash
+ mkdir {directory-name} && cd {directory-name} && touch initauth.tf
+ ```
+
+ 1. Open *initauth.tf* in the Cloud Shell editor and paste the contents of the script file into it.
+
+1. Initialize Terraform in the directory you created by typing the following command in the terminal:
+ ```bash
+ terraform init
```
- terraform init
- ```
-1. Type:
-
- ```
+
+1. When you receive the confirmation message that Terraform was initialized, run the script by typing the following command in the terminal:
+ ```bash
terraform apply ```
-1. Type your Microsoft tenant ID. Learn how to [find your tenant ID](/azure/active-directory-b2c/tenant-management-read-tenant-name).
-1. When asked if a workload Identity Pool has already been created for Azure, type *yes* or *no*.
+1. When the script prompts for your Microsoft tenant ID, copy and paste it into the terminal.
+ > [!NOTE]
+ > You can find and copy your tenant ID on the **GCP Pub/Sub Audit Logs** connector page in the Microsoft Sentinel portal, or in the **Portal settings** screen (accessible anywhere in the Azure portal by selecting the gear icon along the top of the screen), in the **Directory ID** column.
+ > :::image type="content" source="media/connect-google-cloud-platform/find-tenant-id.png" alt-text="Screenshot of portal settings screen." lightbox="media/connect-google-cloud-platform/find-tenant-id.png":::
+
+1. When asked if a workload Identity Pool has already been created for Azure, answer *yes* or *no* accordingly.
+ 1. When asked if you want to create the resources listed, type *yes*.
-1. Save the resources parameters for later use.
-1. In a new folder, copy the Terraform [GCPAuditLogsSetup script](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/GCP/Terraform/sentinel_resources_creation/GCPAuditLogsSetup) into a new file, and save it as a .tf file:
- ```
- cd {foldername}
- ```
-1. In the editor, type:
+When the output from the script is displayed, save the resource parameters for later use.
- ```
- terraform init
- ```
+# [Manual setup](#tab/manual)
-1. Type:
+Create and configure the following items in the Google Cloud Platform [Identity and Access Management (IAM)](https://cloud.google.com/iam/docs/overview) service.
- ```
- terraform apply
- ```
+#### Create a custom role
- To ingest logs from an entire organization using a single Pub/Sub, type:
+1. Follow the instructions in the Google Cloud documentation to [**create a role**](https://cloud.google.com/iam/docs/creating-custom-roles#creating). Per those instructions, create a custom role from scratch.
- ```
- terraform apply -var="organization-id= {organizationId} "
- ```
+1. Name the role so it's recognizable as a Sentinel custom role.
-1. Type *yes*.
+1. Fill in the relevant details and add permissions as needed:
+ - **pubsub.subscriptions.consume**
+ - **pubsub.subscriptions.get**
-1. Save the resource parameters for later use.
+ You can filter the list of available permissions by roles. Select the **Pub/Sub Subscriber** and **Pub/Sub Viewer** roles to filter the list.
-1. Wait five minutes before moving to the next step.
+For more information about creating roles in Google Cloud Platform, see [Create and manage custom roles](https://cloud.google.com/iam/docs/creating-custom-roles) in the Google Cloud documentation.
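If you prefer the gcloud CLI to the console, a minimal sketch of the same custom role (the role ID, project ID, and title are placeholders; the two permissions are the ones listed above):

```bash
# Sketch only: role ID, project ID, and title are placeholders.
gcloud iam roles create sentinelCustomRole \
  --project={PROJECT_ID} \
  --title="Microsoft Sentinel custom role" \
  --permissions=pubsub.subscriptions.consume,pubsub.subscriptions.get
```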
-## Set up the GCP Pub/Sub connector in Microsoft Sentinel
+#### Create a service account
-1. Open the [Azure portal](https://portal.azure.com/) and navigate to the **Microsoft Sentinel** service.
-1. In the **Content hub**, in the search bar, type *Google Cloud Platform Audit Logs*.
-1. Install the **Google Cloud Platform Audit Logs** solution.
-1. Select **Data connectors**, and in the search bar, type *GCP Pub/Sub Audit Logs*.
-1. Select the **GCP Pub/Sub Audit Logs (Preview)** connector.
-1. Below the connector description, select **Open connector page**.
-1. In the **Configuration** area, select **Add new**.
-1. Type the resource parameters you created when you [created the GCP resources](#create-gcp-resources-via-the-terraform-api). Make sure that the Data Collection Endpoint Name and the Data Collection Rule Name begin with **Microsoft-Sentinel-** and select **Connect**.
+1. Follow the instructions in the Google Cloud documentation to [**create a service account**](https://cloud.google.com/iam/docs/service-accounts-create#creating).
-## Verify that the GCP data is in the Microsoft Sentinel environment
+1. Name the service account so it's recognizable as a Sentinel service account.
-1. To ensure that the GCP logs were successfully ingested into Microsoft Sentinel, run the following query 30 minutes after you finish to [set up the connector](#set-up-the-gcp-pubsub-connector-in-microsoft-sentinel).
+1. Assign [the role you created in the previous section](#create-a-custom-role) to the service account.
- ```
- GCPAuditLogs
- | take 10
- ```
+For more information about service accounts in Google Cloud Platform, see [Service accounts overview](https://cloud.google.com/iam/docs/service-account-overview) in the Google Cloud documentation.
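A hedged gcloud equivalent of these steps (the account name and project ID are placeholders, and the custom role ID matches the sketch in the previous section):

```bash
# Sketch only: names and project ID are placeholders.
gcloud iam service-accounts create sentinel-service-account \
  --display-name="Microsoft Sentinel service account"

# Grant the custom role created earlier to the new service account.
gcloud projects add-iam-policy-binding {PROJECT_ID} \
  --member="serviceAccount:sentinel-service-account@{PROJECT_ID}.iam.gserviceaccount.com" \
  --role="projects/{PROJECT_ID}/roles/sentinelCustomRole"
```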
-1. Enable the [health feature](enable-monitoring.md) for data connectors.
+#### Create the workload identity pool and provider
-### Set up the GCP environment manually via the GCP portal
+1. Follow the instructions in the Google Cloud documentation to [**create the workload identity pool and provider**](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#create_the_workload_identity_pool_and_provider).
-This section shows you how to set up the GCP environment manually. Alternatively, you can set up the environment [via the Terraform API](#create-gcp-resources-via-the-terraform-api). If you already set up the environment via the API, skip this section.
+1. For the **Name** and **Pool ID**, enter your Azure **Tenant ID**, with the dashes removed.
+ > [!NOTE]
+ > You can find and copy your tenant ID on the **Portal settings** screen, in the **Directory ID** column. The portal settings screen is accessible anywhere in the Azure portal by selecting the gear icon along the top of the screen.
+ > :::image type="content" source="media/connect-google-cloud-platform/find-tenant-id.png" alt-text="Screenshot of portal settings screen." lightbox="media/connect-google-cloud-platform/find-tenant-id.png":::
-#### Create the role
+1. Add an identity provider to the pool. Choose **Open ID Connect (OIDC)** as the provider type.
-1. In the GCP console, navigate to **IAM & Admin**.
-1. Select **Roles** and select **Create role**.
-1. Fill in the relevant details and add permissions as needed.
-1. Filter the permissions by the **Pub/Sub Subscriber** and **Pub/Sub Viewer** roles, and select **pubsub.subscriptions.consume** and **pubsub.subscriptions.get** permissions.
-1. To confirm, select **ADD**.
+1. Name the identity provider so it's recognizable for its purpose.
- :::image type="content" source="media/connect-google-cloud-platform/gcp-create-role.png" alt-text="Screenshot of adding permissions when adding a GCP role.":::
+1. Enter the following values in the provider settings (these aren't samples&mdash;use these actual values):
+ - **Issuer (URL)**: `https://sts.windows.net/33e01921-4d64-4f8c-a055-5bdaffd5e33d`
+ - **Audience**: the application ID URI: `api://2041288c-b303-4ca0-9076-9612db3beeb2`
+ - **Attribute mapping**: `google.subject=assertion.sub`
-1. To create the role, select **Create**.
+ > [!NOTE]
+ > To set up the connector to send logs from GCP to the **Azure Government cloud**, use the following alternate values for the provider settings instead of those above:
+ > - **Issuer (URL)**: `https://sts.windows.net/cab8a31a-1906-4287-a0d8-4eef66b95f6e`
+ > - **Audience**: `api://e9885b54-fac0-4cd6-959f-a72066026929`
-#### Create the service account
+For more information about workload identity federation in Google Cloud Platform, see [Workload identity federation](https://cloud.google.com/iam/docs/workload-identity-federation) in the Google Cloud documentation.
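A hedged gcloud sketch of the same pool and provider. The pool ID is your tenant ID with the dashes removed, the provider ID is a placeholder, and the issuer and audience are the Azure Commercial values from the list above (substitute the Azure Government values if that's your target cloud):

```bash
# Sketch only: {TENANT_ID_NO_DASHES} and the provider ID are placeholders.
gcloud iam workload-identity-pools create {TENANT_ID_NO_DASHES} \
  --location="global" \
  --display-name="{TENANT_ID_NO_DASHES}"

gcloud iam workload-identity-pools providers create-oidc sentinel-oidc-provider \
  --location="global" \
  --workload-identity-pool={TENANT_ID_NO_DASHES} \
  --issuer-uri="https://sts.windows.net/33e01921-4d64-4f8c-a055-5bdaffd5e33d" \
  --allowed-audiences="api://2041288c-b303-4ca0-9076-9612db3beeb2" \
  --attribute-mapping="google.subject=assertion.sub"
```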
-1. In the GCP Console, navigate to **Service Accounts**, and select **Create Service Account**.
-1. Fill in the relevant details and select **Create and continue**.
-1. Select [the role you created previously](#create-the-role), and select **Done** to create the service account.
+#### Grant the identity pool access to the service account
-#### Create the workload identity federation
+1. Locate and select the service account you created earlier.
-1. In the GCP Console, navigate to **Workload Identity Federation**.
-1. If it's your first time using this feature, select **Get started**. Otherwise, select **Create pool**.
-1. Fill in the required details, and make sure that the **Tenant ID** and **Tenant name** is the TenantID **without dashes**.
-
- > [!NOTE]
- > To find the tenant ID, in the Azure portal, navigate to **All Services > Microsoft Entra ID > Overview** and copy the **TenantID**.
+1. Locate the **permissions** configuration of the service account.
-1. Make sure that **Enable pool** is selected.
+1. **Grant access** to the principal that represents the workload identity pool and provider that you created in the previous step.
+ - Use the following format for the principal name:
+ ```http
+ principal://iam.googleapis.com/projects/{PROJECT_NUMBER}/locations/global/workloadIdentityPools/{WORKLOAD_IDENTITY_POOL_ID}/subject/{WORKLOAD_IDENTITY_PROVIDER_ID}
+ ```
- :::image type="content" source="media/connect-google-cloud-platform/gcp-create-identity-pool.png" alt-text="Screenshot of creating the identity pool as part of creating the GCP workload identity federation.":::
+ - Assign the **Workload Identity User** role and save the configuration.
-1. To add a provider to the pool:
- - Select **OIDC**
- - Type the **Issuer (URL)**: `https://sts.windows.net/33e01921-4d64-4f8c-a055-5bdaffd5e33d`
- - Next to **Audiences**, select **Allowed audiences**, and next to **Audience 1**, type: *api://2041288c-b303-4ca0-9076-9612db3beeb2*.
+For more information about granting access in Google Cloud Platform, see [Manage access to projects, folders, and organizations](https://cloud.google.com/iam/docs/granting-changing-revoking-access) in the Google Cloud documentation.
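A hedged gcloud sketch of this grant (the service account email is the hypothetical one from the earlier sketch, and the bracketed placeholders follow the principal format shown above):

```bash
# Sketch only: placeholders in braces; PROJECT_NUMBER is the numeric project number, not the project ID.
gcloud iam service-accounts add-iam-policy-binding \
  sentinel-service-account@{PROJECT_ID}.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="principal://iam.googleapis.com/projects/{PROJECT_NUMBER}/locations/global/workloadIdentityPools/{WORKLOAD_IDENTITY_POOL_ID}/subject/{WORKLOAD_IDENTITY_PROVIDER_ID}"
```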
- :::image type="content" source="media/connect-google-cloud-platform/gcp-add-provider-pool.png" alt-text="Screenshot of adding the provider to the pool when creating the GCP workload identity federation.":::
+
- :::image type="content" source="media/connect-google-cloud-platform/gcp-add-provider-pool-audiences.png" alt-text="Screenshot of adding the provider pool audiences when creating the GCP workload identity federation.":::
+### GCP Audit Logs Setup
-#### Configure the provider attributes
-
-1. Under **OIDC 1**, select **assertion.sub**.
+# [Terraform API Setup](#tab/terraform)
- :::image type="content" source="media/connect-google-cloud-platform/gcp-configure-provider-attributes.png" alt-text="Screenshot of configuring the GCP provider attributes.":::
-
-1. Select **Continue** and **Save**.
-1. In the **Workload Identity Pools** main page, select the created pool.
-1. Select **Grant access**, select the [service account you created previously](#create-the-service-account), and select **All identities in the pool** as the principals.
+1. Copy the Terraform audit log setup script provided by Microsoft Sentinel from the Sentinel GitHub repository into a different folder in your GCP Cloud Shell environment.
- :::image type="content" source="media/connect-google-cloud-platform/gcp-grant-access.png" alt-text="Screenshot of granting access to the GCP service account.":::
+ 1. Open the Terraform [GCPAuditLogsSetup script](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/GCP/Terraform/sentinel_resources_creation/GCPAuditLogsSetup/GCPAuditLogsSetup.tf) file and copy its contents.
-1. Confirm that the connected service account is displayed.
+ > [!NOTE]
+ > For ingesting GCP data into an **Azure Government cloud**, [use this audit log setup script instead](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/GCP/Terraform/sentinel_resources_creation_gov/GCPAuditLogsSetup/GCPAuditLogsSetup.tf).
- :::image type="content" source="media/connect-google-cloud-platform/gcp-connected-service-account.png" alt-text="Screenshot of viewing the connected GCP service accounts.":::
+ 1. Create another directory in your Cloud Shell environment, enter it, and create a new blank file.
+ ```bash
+ mkdir {other-directory-name} && cd {other-directory-name} && touch auditlog.tf
+ ```
-#### Create a topic
+ 1. Open *auditlog.tf* in the Cloud Shell editor and paste the contents of the script file into it.
-1. In the GCP console, navigate to **Topics**.
-1. Create a new topic and select a **Topic ID**.
-1. Select **Add default subscription** and under **Encryption**, select **Google-managed encryption key**.
+1. Initialize Terraform in the new directory by typing the following command in the terminal:
+ ```bash
+ terraform init
+ ```
-#### Create a sink
+1. When you receive the confirmation message that Terraform was initialized, run the script by typing the following command in the terminal:
+ ```bash
+ terraform apply
+ ```
+
+ To ingest logs from an entire organization using a single Pub/Sub, type:
+
+ ```bash
+ terraform apply -var="organization-id= {organizationId} "
+ ```
+
+1. When asked if you want to create the resources listed, type *yes*.
-1. In the GCP console, navigate to **Log Router**.
-1. Select **Create sink** and fill in the relevant details.
-1. Under **Sink destination**, select **Cloud Pub/Sub topic** and select [the topic you created previously](#create-a-topic).
+When the output from the script is displayed, save the resource parameters for later use.
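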
- :::image type="content" source="media/connect-google-cloud-platform/gcp-sink-destination.png" alt-text="Screenshot of defining the GCP sink destination.":::
+Wait five minutes before moving to the next step.
-1. If needed, filter the logs by selecting specific logs to include. Otherwise, all logs are sent.
-1. Select **Create sink**.
+# [Manual setup](#tab/manual)
-> [!NOTE]
-> To ingest logs for the entire organization:
-> 1. Select the organization under **Project**.
-> 1. Repeat steps 2-4, and under **Choose logs to include in the sink** in the **Log Router** section, select **Include logs ingested by this organization and all child resources**.
+#### Create a publishing topic
-
-#### Verify that GCP can receive incoming messages
+Use the [Google Cloud Platform Pub/Sub service](https://cloud.google.com/pubsub/docs/overview) to set up export of audit logs.
+
+Follow the instructions in the Google Cloud documentation to [**create a topic**](https://cloud.google.com/pubsub/docs/create-topic) for publishing logs to.
+- Choose a Topic ID that reflects the purpose of log collection for export to Microsoft Sentinel.
+- Add a default subscription.
+- Use a **Google-managed encryption key** for encryption.
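For reference, a hedged gcloud sketch of the topic and its subscription (the IDs are placeholders; encryption defaults to Google-managed keys when no customer-managed key is specified):

```bash
# Sketch only: topic and subscription IDs are placeholders.
gcloud pubsub topics create sentinel-auditlogs-topic
gcloud pubsub subscriptions create sentinel-auditlogs-sub \
  --topic=sentinel-auditlogs-topic
```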
+
+#### Create a log sink
+
+Use the [Google Cloud Platform Log Router service](https://cloud.google.com/logging/docs/export/configure_export_v2) to set up collection of audit logs.
+
+**To collect logs for resources in the current project only:**
+
+1. Verify that your project is selected in the project selector.
+
+1. Follow the instructions in the Google Cloud documentation to [**set up a sink**](https://cloud.google.com/logging/docs/export/configure_export_v2#creating_sink) for collecting logs.
+ - Choose a Name that reflects the purpose of log collection for export to Microsoft Sentinel.
+ - Select "Cloud Pub/Sub topic" as the destination type, and choose the topic you created in the previous step.
+
+**To collect logs for resources throughout the entire organization:**
+
+1. Select your **organization** in the project selector.
+
+1. Follow the instructions in the Google Cloud documentation to [**set up a sink**](https://cloud.google.com/logging/docs/export/configure_export_v2#creating_sink) for collecting logs.
+ - Choose a Name that reflects the purpose of log collection for export to Microsoft Sentinel.
+ - Select "Cloud Pub/Sub topic" as the destination type, and choose the default "Use a Cloud Pub/Sub topic in a project".
+ - Enter the destination in the following format: `pubsub.googleapis.com/projects/{PROJECT_ID}/topics/{TOPIC_ID}`.
+
+1. Under **Choose logs to include in the sink**, select **Include logs ingested by this organization and all child resources**.
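A hedged gcloud sketch of both sink variants (the sink name, project ID, topic ID, and organization ID are placeholders):

```bash
# Sketch only: sink name, project ID, topic ID, and organization ID are placeholders.

# Project-level sink: route this project's audit logs to the Pub/Sub topic.
gcloud logging sinks create sentinel-auditlogs-sink \
  pubsub.googleapis.com/projects/{PROJECT_ID}/topics/{TOPIC_ID}

# Organization-level sink: include logs from the organization and all child resources.
gcloud logging sinks create sentinel-auditlogs-sink \
  pubsub.googleapis.com/projects/{PROJECT_ID}/topics/{TOPIC_ID} \
  --organization={ORGANIZATION_ID} \
  --include-children

# The sink's writer identity also needs permission to publish to the topic.
```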
+
+#### Verify that GCP can receive incoming messages
+
+1. In the GCP Pub/Sub console, navigate to **Subscriptions**.
-1. In the GCP console, navigate to **Subscriptions**.
1. Select **Messages**, and select **PULL** to initiate a manual pull. + 1. Check the incoming messages. ++
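If you prefer the CLI for this check, a hedged sketch (the subscription ID is the placeholder from the earlier sketches):

```bash
# Sketch only: pulls a few messages without acknowledging them, just to confirm delivery.
gcloud pubsub subscriptions pull sentinel-auditlogs-sub --limit=5
```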
+## Set up the GCP Pub/Sub connector in Microsoft Sentinel
+
+1. Open the [Azure portal](https://portal.azure.com/) and navigate to the **Microsoft Sentinel** service.
+
+1. In the **Content hub**, in the search bar, type *Google Cloud Platform Audit Logs*.
+
+1. Install the **Google Cloud Platform Audit Logs** solution.
+
+1. Select **Data connectors**, and in the search bar, type *GCP Pub/Sub Audit Logs*.
+
+1. Select the **GCP Pub/Sub Audit Logs (Preview)** connector.
+
+1. In the details pane, select **Open connector page**.
+
+1. In the **Configuration** area, select **Add new collector**.
+
+ :::image type="content" source="media/connect-google-cloud-platform/add-new-collector.png" alt-text="Screenshot of GCP connector configuration" lightbox="media/connect-google-cloud-platform/add-new-collector.png":::
+
+1. In the **Connect a new collector** panel, type the resource parameters you created when you [created the GCP resources](#set-up-gcp-environment).
+
+ :::image type="content" source="media/connect-google-cloud-platform/new-collector-dialog.png" alt-text="Screenshot of new collector side panel.":::
+
+1. Make sure that the values in all the fields match their counterparts in your GCP project, and select **Connect**.
+
+## Verify that the GCP data is in the Microsoft Sentinel environment
+
+1. To ensure that the GCP logs were successfully ingested into Microsoft Sentinel, run the following query 30 minutes after you finish to [set up the connector](#set-up-the-gcp-pubsub-connector-in-microsoft-sentinel).
+
+ ```
+ GCPAuditLogs
+ | take 10
+ ```
+
+1. Enable the [health feature](enable-monitoring.md) for data connectors.
+ ## Next steps
-In this article, you learned how to ingest GCP data into Microsoft Sentinel using the GCP Pub/Sub Audit Logs connector. To learn more about Microsoft Sentinel, see the following articles:
-- Learn how to [get visibility into your data, and potential threats](get-visibility.md).-- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).-- [Use workbooks](monitor-your-data.md) to monitor your data.
+ In this article, you learned how to ingest GCP data into Microsoft Sentinel using the GCP Pub/Sub connectors. To learn more about Microsoft Sentinel, see the following articles:
+
+ - Learn how to [get visibility into your data, and potential threats](get-visibility.md).
+ - Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
+ - [Use workbooks](monitor-your-data.md) to monitor your data.
+
sentinel Create Codeless Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector.md
There are 5 ARM deployment resources in this template guide which house the 4 CC
//}, //"apikey": { // "defaultValue": "API Key",
- // "type": "string",
+ // "type": "securestring",
// "minLength": 1 //} },
sentinel Data Connector Connection Rules Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connector-connection-rules-reference.md
After the user returns to the client via the redirect URL, the application will
| **ClientSecret** | True | String | The client secret | | **AuthorizationCode** | Mandatory when grantType = `authorization_code` | String | If grant type is `authorization_code` this field value will be the authorization code returned from the auth serve. | | **Scope** | True for `authorization_code` grant type<br> optional for `client_credentials` grant type| String | A space-separated list of scopes for user consent. For more information, see [OAuth2 scopes and permissions](/entra/identity-platform/scopes-oidc). |
-| **RedirectUri** | True | String | URL for redirect, must be `https://portal.azure.com/TokenAuthorize` |
+| **RedirectUri** | Mandatory when grantType = `authorization_code` | String | URL for redirect, must be `https://portal.azure.com/TokenAuthorize` |
| **GrantType** | True | String | `authorization_code` or `client_credentials` | | **TokenEndpoint** | True | String | URL to exchange code with valid token in `authorization_code` grant or client id and secret with valid token in `client_credentials` grant. | | **TokenEndpointHeaders** | | Object | An optional key value object to send custom headers to token server |
Paging: {
#### Configure NextPageUrl
-`NextPageUrl` paging means the API response includes a complex link in the response body similar to `LinkHeader`, but the
+`NextPageUrl` paging means the API response includes a complex link similar to `LinkHeader`, but the URL is included in the response body instead of the header.
| Field | Required | Type | Description | |-|-|-|-|
sentinel Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md
This article describes the features available in Microsoft Sentinel across diffe
|Feature |Feature stage |Azure commercial |Azure Government |Azure China 21Vianet | |||||| |[Amazon Web Services](connect-aws.md?tabs=ct) |GA |&#x2705; |&#10060; |&#10060; |
-|[Amazon Web Services S3 (Preview)](connect-aws.md?tabs=s3) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[Amazon Web Services S3 (Preview)](connect-aws.md?tabs=s3) |Public preview |&#x2705; |&#x2705; |&#10060; |
|[Microsoft Entra ID](connect-azure-active-directory.md) |GA |&#x2705; |&#x2705;|&#x2705; <sup>[1](#logsavailable)</sup> | |[Microsoft Entra ID Protection](connect-services-api-based.md) |GA |&#x2705;| &#x2705; |&#10060; | |[Azure Activity](data-connectors/azure-activity.md) |GA |&#x2705;| &#x2705;|&#x2705; |
This article describes the features available in Microsoft Sentinel across diffe
|[Common Event Format (CEF)](connect-common-event-format.md) |GA |&#x2705; |&#x2705;|&#x2705; | |[Common Event Format (CEF) via AMA (Preview)](connect-cef-ama.md) |Public preview |&#x2705;|&#10060; |&#x2705; | |[DNS](data-connectors/dns.md) |Public preview |&#x2705;| &#10060; |&#x2705; |
-|[GCP Pub/Sub Audit Logs](connect-google-cloud-platform.md) |Public preview |&#x2705; |&#10060; |&#10060; |
+|[GCP Pub/Sub Audit Logs](connect-google-cloud-platform.md) |Public preview |&#x2705; |&#x2705; |&#10060; |
|[Microsoft Defender XDR](connect-microsoft-365-defender.md?tabs=MDE) |GA |&#x2705;| &#x2705;|&#10060; | |[Microsoft Purview Insider Risk Management (Preview)](sentinel-solutions-catalog.md#domain-solutions) |Public preview |&#x2705; |&#x2705;|&#10060; | |[Microsoft Defender for Cloud](connect-defender-for-cloud.md) |GA |&#x2705; |&#x2705; |&#x2705;|
sentinel Deploy Data Connector Agent Container Other Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container-other-methods.md
description: This article shows you how to manually deploy the container that ho
-+ Last updated 01/03/2024
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
+## February 2024
+
+### AWS and GCP data connectors now support Azure Government clouds
+
+Microsoft Sentinel data connectors for Amazon Web Services (AWS) and Google Cloud Platform (GCP) now include supporting configurations to ingest data into workspaces in Azure Government clouds.
+
+The configurations for these connectors for Azure Government customers differ slightly from the public cloud configuration. See the relevant documentation for details:
+
+- [Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data](connect-aws.md)
+- [Ingest Google Cloud Platform log data into Microsoft Sentinel](connect-google-cloud-platform.md)
+ ## January 2024 ### Reduce false positives for SAP systems with analytics rules
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
service-connector Quickstart Cli Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-spring-cloud-connection.md
Service Connector lets you quickly connect compute services to cloud services, w
- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- At least one application hosted by Azure Spring Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one, [deploy your first application to Azure Spring Apps](../spring-apps/quickstart.md).
+- At least one application hosted by Azure Spring Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one, [deploy your first application to Azure Spring Apps](../spring-apps/enterprise/quickstart.md).
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
service-connector Quickstart Portal Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-portal-spring-cloud-connection.md
This quickstart shows you how to connect Azure Spring Apps to other Cloud resour
## Prerequisites - An Azure account with an active subscription. [Create an Azure account for free](https://azure.microsoft.com/free).-- An app deployed to [Azure Spring Apps](../spring-apps/quickstart.md) in a [region supported by Service Connector](./concept-region-support.md).
+- An app deployed to [Azure Spring Apps](../spring-apps/enterprise/quickstart.md) in a [region supported by Service Connector](./concept-region-support.md).
- A target resource to connect Azure Spring Apps to. For example, a [storage account](../storage/common/storage-account-create.md). ## Sign in to Azure
service-connector Tutorial Java Spring Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-confluent-kafka.md
Create an instance of Apache Kafka for Confluent Cloud by following [this guidan
### Create an Azure Spring Apps instance
-Create an instance of Azure Spring Apps by following [the Azure Spring Apps quickstart](../spring-apps/quickstart.md) in Java. Make sure your Azure Spring Apps instance is created in [a region that has Service Connector support](concept-region-support.md).
+Create an instance of Azure Spring Apps by following [the Azure Spring Apps quickstart](../spring-apps/enterprise/quickstart.md) in Java. Make sure your Azure Spring Apps instance is created in [a region that has Service Connector support](concept-region-support.md).
## Build and deploy the app
service-connector Tutorial Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-passwordless.md
For Azure App Service, you can deploy the application code via the `az webapp de
### [Spring Apps](#tab/springapp)
-For Azure Spring Apps, you can deploy the application code via the `az spring app deploy` command. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](../spring-apps/quickstart.md).
+For Azure Spring Apps, you can deploy the application code via the `az spring app deploy` command. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](../spring-apps/enterprise/quickstart.md).
### [Container Apps](#tab/containerapp)
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 01/22/2024 Last updated : 01/30/2024 # Azure Policy built-in definitions for Azure Service Fabric
site-recovery Azure To Azure How To Enable Replication Ade Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication-ade-vms.md
Previously updated : 01/23/2024 Last updated : 01/31/2024
site-recovery Azure To Azure How To Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-reprotect.md
Previously updated : 07/14/2023 Last updated : 01/31/2024
By default, the following occurs:
1. Temporary replicas of the source disks (the disks attached to the VMs in the secondary region) are created with the name `ms-asr-<GUID>`; these temporary disks are used to transfer and read data. The temporary disks make it possible to use the complete bandwidth of the disk instead of only 16% of the bandwidth of the original disks (which are connected to the VM). The temporary disks are deleted once the reprotection completes. 1. If the target availability set doesn't exist, a new one is created as part of the reprotect job if necessary. If you've customized the reprotection settings, the selected set is used.
-When you trigger a reprotect job, and the target VM exists, the following occurs:
+**When you trigger a reprotect job, and the target VM exists, the following occurs:**
1. The target side VM is turned off if it's running. 1. If the VM is using managed disks, a copy of the original disk is created with an `-ASRReplica` suffix. The original disks are deleted. The `-ASRReplica` copies are used for replication.
When you trigger a reprotect job, and the target VM exists, the following occurs
1. Only changes between the source disk and the target disk are synchronized. The differentials are computed by comparing both the disks and then transferred. Check below to find the estimated time to complete the reprotection. 1. After the synchronization completes, the delta replication begins, and a recovery point is created in line with the replication policy.
-When you trigger a reprotect job, and the target VM and disks don't exist, the following occurs:
+**When you trigger a reprotect job, and the target VM and disks don't exist, the following occurs:**
1. If the VM is using managed disks, replica disks are created with `-ASRReplica` suffix. The `-ASRReplica` copies are used for replication. 1. If the VM is using unmanaged disks, replica disks are created in the target storage account. 1. The entire disks are copied from the failed over region to the new target region. 1. After the synchronization completes, the delta replication begins, and a recovery point is created in line with the replication policy.
+> [!NOTE]
+> The `ms-asr` disks are temporary disks that are deleted after the *reprotect* action is completed. You will be charged a minimal cost based on the Azure managed disk price for the time that these disks are active.
++ #### Estimated time to do the reprotection In most cases, Azure Site Recovery doesn't replicate the complete data to the source region. The amount of data replicated depends on the following conditions:
site-recovery Deploy Vmware Azure Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-modernized.md
If you just created a free Azure account, you're the owner of your subscription.
- In Azure portal, navigate to **Microsoft Entra ID** > **Users** > **User Settings**. In **User settings**, verify that Microsoft Entra users can register applications (set to *Yes* by default).
- - In case the **App registrations** settings is set to *No*, request the tenant/global admin to assign the required permission. Alternately, the tenant/global admin can assign the Application Developer role to an account to allow the registration of Microsoft Entra App.
+ - If the **App registrations** setting is set to *No*, ask the tenant/global admin to assign the required permission. The Application Developer role **cannot** be used to enable registration of a Microsoft Entra app.
## Prepare infrastructure
site-recovery How To Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-migrate-run-as-accounts-managed-identity.md
Previously updated : 09/14/2023 Last updated : 01/31/2024 # Migrate from a Run As account to Managed Identities
On Azure, managed identities eliminate the need for developers having to manage
## Prerequisites
-Before you migrate from a Run As account to a managed identity, ensure that you have the appropriate roles to create a system-assigned identity for your automation account and to assign it the Contributor role in the corresponding recovery services vault.
+Before you migrate from a Run As account to a managed identity, ensure that you have the appropriate roles to create a system-assigned identity for your automation account and to assign it the *Owner* role in the corresponding recovery services vault.
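As a hedged sketch only (not a procedure prescribed by this article), the following Azure PowerShell shows one way to enable a system-assigned identity on an Automation account and assign it the Owner role scoped to a Recovery Services vault. All resource names are placeholders.

```azurepowershell
# Placeholder names; replace with your own values.
$resourceGroupName = 'rg-contoso'
$automationAccount = 'aa-contoso'
$vaultName = 'rsv-contoso'

# Enable a system-assigned managed identity on the Automation account.
$account = Set-AzAutomationAccount -ResourceGroupName $resourceGroupName `
    -Name $automationAccount -AssignSystemIdentity

# Assign the identity the Owner role, scoped to the Recovery Services vault.
$vault = Get-AzRecoveryServicesVault -ResourceGroupName $resourceGroupName -Name $vaultName
New-AzRoleAssignment -ObjectId $account.Identity.PrincipalId `
    -RoleDefinitionName 'Owner' -Scope $vault.ID
```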
## Benefits of managed identities
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
Title: "Quickstart: Azure Blob Storage library - .NET"
-description: In this quickstart, you will learn how to use the Azure Blob Storage client library for .NET to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
+description: In this quickstart, you learn how to use the Azure Blob Storage client library for .NET to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container.
Previously updated : 11/09/2022 Last updated : 01/30/2024 ms.devlang: csharp
ai-usage: ai-assisted
# Quickstart: Azure Blob Storage client library for .NET
-Get started with the Azure Blob Storage client library for .NET. Azure Blob Storage is Microsoft's object storage solution for the cloud. Follow these steps to install the package and try out example code for basic tasks. Blob storage is optimized for storing massive amounts of unstructured data.
+Get started with the Azure Blob Storage client library for .NET. Azure Blob Storage is Microsoft's object storage solution for the cloud, and is optimized for storing massive amounts of unstructured data.
+
+In this article, you follow steps to install the package and try out example code for basic tasks.
[API reference documentation](/dotnet/api/azure.storage.blobs) | [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs) | [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Blobs) | [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/blobs/toc.json#blob-samples)
This section walks you through preparing a project to work with the Azure Blob S
### Create the project
-For the steps ahead, you'll need to create a .NET console app using either the .NET CLI or Visual Studio 2022.
+Create a .NET console app using either the .NET CLI or Visual Studio 2022.
### [Visual Studio 2022](#tab/visual-studio)
For the steps ahead, you'll need to create a .NET console app using either the .
1. For the **Project Name**, enter *BlobQuickstart*. Leave the default values for the rest of the fields and select **Next**.
-1. For the **Framework**, ensure .NET 6.0 is selected. Then choose **Create**. The new project will open inside the Visual Studio environment.
+1. For the **Framework**, ensure the latest installed version of .NET is selected. Then choose **Create**. The new project opens inside the Visual Studio environment.
### [.NET CLI](#tab/net-cli)
dotnet add package Azure.Storage.Blobs
If this command to add the package fails, follow these steps: -- Make sure that `nuget.org` is added as a package source. You can list the package sources using the [dotnet nuget list source](/dotnet/core/tools/dotnet-nuget-list-source#examples) command:
+- Make sure that `nuget.org` is added as a package source. You can list the package sources using the [`dotnet nuget list source`](/dotnet/core/tools/dotnet-nuget-list-source#examples) command:
```dotnetcli dotnet nuget list source ``` -- If you don't see `nuget.org` in the list, you can add it using the [dotnet nuget add source](/dotnet/core/tools/dotnet-nuget-add-source#examples) command:
+- If you don't see `nuget.org` in the list, you can add it using the [`dotnet nuget add source`](/dotnet/core/tools/dotnet-nuget-add-source#examples) command:
```dotnetcli dotnet nuget add source https://api.nuget.org/v3/index.json -n nuget.org
using System.IO;
Console.WriteLine("Hello, World!"); ``` - ## Object model Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data doesn't adhere to a particular data model or definition, such as text or binary data. Blob storage offers three types of resources:
Use the following .NET classes to interact with these resources:
## Code examples
-The sample code snippets in the following sections demonstrate how to perform basic data operations with the Azure Blob Storage client library for .NET.
+The sample code snippets in the following sections demonstrate how to perform the following tasks with the Azure Blob Storage client library for .NET:
+
+- [Authenticate to Azure and authorize access to blob data](#authenticate-to-azure-and-authorize-access-to-blob-data)
+- [Create a container](#create-a-container)
+- [Upload a blob to a container](#upload-a-blob-to-a-container)
+- [List blobs in a container](#list-blobs-in-a-container)
+- [Download a blob](#download-a-blob)
+- [Delete a container](#delete-a-container)
> [!IMPORTANT]
-> Make sure you have installed the correct NuGet packages and added the necessary using statements in order for the code samples to work, as described in the [setting up](#setting-up) section.
+> Make sure you've installed the correct NuGet packages and added the necessary using statements in order for the code samples to work, as described in the [setting up](#setting-up) section.
-* **Azure.Identity** (if you are using the passwordless approach)
-* **Azure.Storage.Blobs**
### Create a container
-Decide on a name for the new container. The code below appends a GUID value to the container name to ensure that it is unique.
+Create a new container in your storage account by calling the [CreateBlobContainerAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.createblobcontainerasync) method on the `blobServiceClient` object. In this example, the code appends a GUID value to the container name to ensure that it's unique.
-> [!IMPORTANT]
-> Container names must be lowercase. For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-
-You can call the [CreateBlobContainerAsync](/dotnet/api/azure.storage.blobs.blobserviceclient.createblobcontainerasync) method on the `blobServiceClient` to create a container in your storage account.
-
-Add this code to the end of the `Program.cs` class:
+Add this code to the end of the `Program.cs` file:
```csharp // TODO: Replace <storage-account-name> with your actual storage account name
BlobContainerClient containerClient = await blobServiceClient.CreateBlobContaine
To learn more about creating a container, and to explore more code samples, see [Create a blob container with .NET](storage-blob-container-create.md).
+> [!IMPORTANT]
+> Container names must be lowercase. For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
+ ### Upload a blob to a container
+Upload a blob to a container using [UploadAsync](/dotnet/api/azure.storage.blobs.blobclient.uploadasync). The example code creates a text file in the local *data* directory to upload to the container.
+ Add the following code to the end of the `Program.cs` class: ```csharp
BlobClient blobClient = containerClient.GetBlobClient(fileName);
Console.WriteLine("Uploading to Blob storage as blob:\n\t {0}\n", blobClient.Uri);
-// Upload data from the local file
+// Upload data from the local file, overwrite the blob if it already exists
await blobClient.UploadAsync(localFilePath, true); ```
-The code snippet completes the following steps:
-
-1. Creates a text file in the local *data* directory.
-1. Gets a reference to a [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) object by calling the [GetBlobClient](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobclient) method on the container from the [Create a container](#create-a-container) section.
-1. Uploads the local text file to the blob by calling the [UploadAsync](/dotnet/api/azure.storage.blobs.blobclient.uploadasync#Azure_Storage_Blobs_BlobClient_UploadAsync_System_String_System_Boolean_System_Threading_CancellationToken_) method. This method creates the blob if it doesn't already exist, and overwrites it if it does.
- To learn more about uploading blobs, and to explore more code samples, see [Upload a blob with .NET](storage-blob-upload.md). ### List blobs in a container
-List the blobs in the container by calling the [GetBlobsAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobsasync) method. In this case, only one blob has been added to the container, so the listing operation returns just that one blob.
+List the blobs in the container by calling the [GetBlobsAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.getblobsasync) method.
-Add the following code to the end of the `Program.cs` class:
+Add the following code to the end of the `Program.cs` file:
```csharp Console.WriteLine("Listing blobs...");
To learn more about listing blobs, and to explore more code samples, see [List b
### Download a blob
-Download the previously created blob by calling the [DownloadToAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadtoasync) method. The example code adds a suffix of "DOWNLOADED" to the file name so that you can see both files in local file system.
+Download the blob we created earlier by calling the [DownloadToAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadtoasync) method. The example code appends the string "DOWNLOADED" to the file name so that you can see both files in the local file system.
-Add the following code to the end of the `Program.cs` class:
+Add the following code to the end of the `Program.cs` file:
```csharp // Download the blob to a local file
To learn more about downloading blobs, and to explore more code samples, see [Do
### Delete a container
-The following code cleans up the resources the app created by deleting the entire container by using [DeleteAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.deleteasync). It also deletes the local files created by the app.
+The following code cleans up the resources the app created by deleting the container using [DeleteAsync](/dotnet/api/azure.storage.blobs.blobcontainerclient.deleteasync). The code example also deletes the local files created by the app.
-The app pauses for user input by calling `Console.ReadLine` before it deletes the blob, container, and local files. This is a good chance to verify that the resources were actually created correctly, before they are deleted.
+The app pauses for user input by calling `Console.ReadLine` before it deletes the blob, container, and local files. This is a good chance to verify that the resources were created correctly, before they're deleted.
-Add the following code to the end of the `Program.cs` class:
+Add the following code to the end of the `Program.cs` file:
```csharp // Clean up
To learn more about deleting a container, and to explore more code samples, see
## The completed code
-After completing these steps the code in your `Program.cs` file should now resemble the following:
+After completing these steps, the code in your `Program.cs` file should now resemble the following:
## [Passwordless (Recommended)](#tab/managed-identity)
dotnet build
dotnet run ```
-The output of the app is similar to the following example:
+The output of the app is similar to the following example (GUID values omitted for readability):
```output Azure Blob Storage - .NET quickstart sample Uploading to Blob storage as blob:
- https://mystorageacct.blob.core.windows.net/quickstartblobs60c70d78-8d93-43ae-954d-8322058cfd64/quickstart2fe6c5b4-7918-46cb-96f4-8c4c5cb2fd31.txt
+ https://mystorageacct.blob.core.windows.net/quickstartblobsGUID/quickstartGUID.txt
Listing blobs...
- quickstart2fe6c5b4-7918-46cb-96f4-8c4c5cb2fd31.txt
+ quickstartGUID.txt
Downloading blob to
- ./data/quickstart2fe6c5b4-7918-46cb-96f4-8c4c5cb2fd31DOWNLOADED.txt
+ ./data/quickstartGUIDDOWNLOADED.txt
Press any key to begin clean up Deleting blob container...
Deleting the local source and downloaded files...
Done ```
-Before you begin the clean up process, check your *data* folder for the two files. You can open them and observe that they are identical.
+Before you begin the clean-up process, check your *data* folder for the two files. You can open them and observe that they're identical.
-After you've verified the files, press the **Enter** key to delete the test files and finish the demo.
+After you verify the files, press the **Enter** key to delete the test files and finish the demo.
## Next steps
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
storage Storage Files Configure P2s Vpn Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-windows.md
description: How to configure a point-to-site (P2S) VPN on Windows for use with
Previously updated : 12/01/2023 Last updated : 01/31/2024
Before setting up the point-to-site VPN, you need to collect some information ab
In order to set up a point-to-site VPN using the Azure portal, you'll need to know your resource group name, virtual network name, gateway subnet name, and storage account name.
-# [PowerShell](#tab/azure-powershell)
+# [Azure PowerShell](#tab/azure-powershell)
Run this script to collect the necessary information. Replace `<resource-group>`, `<vnet-name>`, `<subnet-name>`, and `<storage-account-name>` with the appropriate values for your environment.
-```PowerShell
-$resourceGroupName = "<resource-group-name>"
-$virtualNetworkName = "<vnet-name>"
-$subnetName = "<subnet-name>"
-$storageAccountName = "<storage-account-name>"
+```azurepowershell
+$resourceGroupName = '<resource-group-name>'
+$virtualNetworkName = '<vnet-name>'
+$subnetName = '<subnet-name>'
+$storageAccountName = '<storage-account-name>'
-$virtualNetwork = Get-AzVirtualNetwork `
- -ResourceGroupName $resourceGroupName `
- -Name $virtualNetworkName
+$virtualNetworkParams = @{
+ ResourceGroupName = $resourceGroupName
+ Name = $virtualNetworkName
+}
+$virtualNetwork = Get-AzVirtualNetwork @virtualNetworkParams
-$subnetId = $virtualNetwork | `
- Select-Object -ExpandProperty Subnets | `
- Where-Object { $_.Name -eq "StorageAccountSubnet" } | `
+$subnetId = $virtualNetwork |
+ Select-Object -ExpandProperty Subnets |
+ Where-Object {$_.Name -eq 'StorageAccountSubnet'} |
Select-Object -ExpandProperty Id
-$storageAccount = Get-AzStorageAccount `
- -ResourceGroupName $resourceGroupName `
- -Name $storageAccountName
+$storageAccountParams = @{
+ ResourceGroupName = $resourceGroupName
+ Name = $storageAccountName
+}
+$storageAccount = Get-AzStorageAccount @storageAccountParams
-$privateEndpoint = Get-AzPrivateEndpoint | `
+$privateEndpoint = Get-AzPrivateEndpoint |
Where-Object {
- $subnets = $_ | `
- Select-Object -ExpandProperty Subnet | `
- Where-Object { $_.Id -eq $subnetId }
+ $subnets = $_ |
+ Select-Object -ExpandProperty Subnet |
+ Where-Object {$_.Id -eq $subnetId}
- $connections = $_ | `
- Select-Object -ExpandProperty PrivateLinkServiceConnections | `
- Where-Object { $_.PrivateLinkServiceId -eq $storageAccount.Id }
+ $connections = $_ |
+ Select-Object -ExpandProperty PrivateLinkServiceConnections |
+ Where-Object {$_.PrivateLinkServiceId -eq $storageAccount.Id}
$null -ne $subnets -and $null -ne $connections
- } | `
+ } |
Select-Object -First 1 ```
$privateEndpoint = Get-AzPrivateEndpoint | `
In order for VPN connections from your on-premises Windows machines to be authenticated to access your virtual network, you must create two certificates: 1. A root certificate, which will be provided to the virtual network gateway
-2. A client certificate, which will be signed with the root certificate
+1. A client certificate, which will be signed with the root certificate
You can either use a root certificate that was generated with an enterprise solution, or you can generate a self-signed certificate. If you're using an enterprise solution, acquire the .cer file for the root certificate from your IT organization.
If you aren't using an enterprise certificate solution, create a self-signed roo
> [!IMPORTANT] > Run this PowerShell script as administrator from an on-premises machine running Windows 10/Windows Server 2016 or later. Don't run the script from a Cloud Shell or VM in Azure.
-```PowerShell
-$rootcertname = "CN=P2SRootCert"
-$certLocation = "Cert:\CurrentUser\My"
-$vpnTemp = "C:\vpn-temp\"
-$exportedencodedrootcertpath = $vpnTemp + "P2SRootCertencoded.cer"
-$exportedrootcertpath = $vpnTemp + "P2SRootCert.cer"
+```powershell
+$rootcertname = 'CN=P2SRootCert'
+$certLocation = 'Cert:\CurrentUser\My'
+$vpnTemp = 'C:\vpn-temp'
+$exportedencodedrootcertpath = "$vpnTemp\P2SRootCertencoded.cer"
+$exportedrootcertpath = "$vpnTemp\P2SRootCert.cer"
-if (-Not (Test-Path $vpnTemp)) {
+if (-Not (Test-Path -Path $vpnTemp -PathType Container)) {
New-Item -ItemType Directory -Force -Path $vpnTemp | Out-Null }
-if ($PSVersionTable.PSVersion -ge [System.Version]::new(6, 0)) {
- Install-Module WindowsCompatibility
- Import-WinModule PKI
+if ($PSVersionTable.PSVersion.Major -ge 6) {
+ Import-Module -Name PKI -UseWindowsPowerShell
+}
+
+$selfSignedCertParams = @{
+ Type = 'Custom'
+ KeySpec = 'Signature'
+ Subject = $rootcertname
+ KeyExportPolicy = 'Exportable'
+ HashAlgorithm = 'sha256'
+ KeyLength = '2048'
+ CertStoreLocation = $certLocation
+ KeyUsageProperty = 'Sign'
+ KeyUsage = 'CertSign'
}
+$rootcert = New-SelfSignedCertificate @selfSignedCertParams
-$rootcert = New-SelfSignedCertificate `
- -Type Custom `
- -KeySpec Signature `
- -Subject $rootcertname `
- -KeyExportPolicy Exportable `
- -HashAlgorithm sha256 `
- -KeyLength 2048 `
- -CertStoreLocation $certLocation `
- -KeyUsageProperty Sign `
- -KeyUsage CertSign
-
-Export-Certificate `
- -Cert $rootcert `
- -FilePath $exportedencodedrootcertpath `
- -NoClobber | Out-Null
+Export-Certificate -Cert $rootcert -FilePath $exportedencodedrootcertpath -NoClobber | Out-Null
certutil -encode $exportedencodedrootcertpath $exportedrootcertpath | Out-Null $rawRootCertificate = Get-Content -Path $exportedrootcertpath
-[System.String]$rootCertificate = ""
-foreach($line in $rawRootCertificate) {
- if ($line -notlike "*Certificate*") {
+$rootCertificate = ''
+
+foreach ($line in $rawRootCertificate) {
+ if ($line -notlike '*Certificate*') {
$rootCertificate += $line } }
The Azure virtual network gateway is the service that your on-premises Windows m
Deploying a virtual network gateway requires two basic components: 1. A public IP address that will identify the gateway to your clients wherever they are in the world
-2. The root certificate you created in the previous step, which will be used to authenticate your clients
+1. The root certificate you created in the previous step, which will be used to authenticate your clients
You can use the Azure portal or Azure PowerShell to deploy the virtual network gateway. Deployment can take up to 45 minutes to complete.
To deploy a virtual network gateway using the Azure portal, follow these instruc
1. Select **Save** at the top of the page to save all of the configuration settings and upload the root certificate public key information to Azure.
-# [PowerShell](#tab/azure-powershell)
+# [Azure PowerShell](#tab/azure-powershell)
-Replace `<desired-vpn-name>`, `<desired-region>`, and `<gateway-subnet-name>` in the following script with the proper values for these variables.
+Replace `<desired-vpn-name>` and `<desired-region>` in the following script with the proper values for these variables.
While this resource is being deployed, this PowerShell script blocks until the deployment is completed. This is expected.
-```PowerShell
-$vpnName = "<desired-vpn-name>"
+```azurepowershell
+$vpnName = '<desired-vpn-name>'
$publicIpAddressName = "$vpnName-PublicIP"
-$region = "<desired-region>"
-$gatewaySubnet = "<gateway-subnet-name>"
-
-$publicIPAddress = New-AzPublicIpAddress `
- -ResourceGroupName $resourceGroupName `
- -Name $publicIpAddressName `
- -Location $region `
- -Sku Basic `
- -AllocationMethod Dynamic
-
-$gatewayIpConfig = New-AzVirtualNetworkGatewayIpConfig `
- -Name "vnetGatewayConfig" `
- -SubnetId $gatewaySubnet.Id `
- -PublicIpAddressId $publicIPAddress.Id
-
-$azRootCertificate = New-AzVpnClientRootCertificate `
- -Name "P2SRootCert" `
- -PublicCertData $rootCertificate
-
-$vpn = New-AzVirtualNetworkGateway `
- -ResourceGroupName $resourceGroupName `
- -Name $vpnName `
- -Location $region `
- -GatewaySku VpnGw2 `
- -GatewayType Vpn `
- -VpnType RouteBased `
- -IpConfigurations $gatewayIpConfig `
- -VpnClientAddressPool "172.16.201.0/24" `
- -VpnClientProtocol IkeV2 `
- -VpnClientRootCertificates $azRootCertificate
+$region = '<desired-region>'
+
+$publicIpParams = @{
+ ResourceGroupName = $resourceGroupName
+ Name = $publicIpAddressName
+ Location = $region
+ Sku = 'Basic'
+ AllocationMethod = 'Dynamic'
+}
+$publicIpAddress = New-AzPublicIpAddress @publicIPParams
+
+# The VPN gateway must be deployed into the subnet named 'GatewaySubnet'.
+$gatewaySubnetId = $virtualNetwork |
+ Select-Object -ExpandProperty Subnets |
+ Where-Object {$_.Name -eq 'GatewaySubnet'} |
+ Select-Object -ExpandProperty Id
+
+$gatewayIpParams = @{
+ Name = 'vnetGatewayConfig'
+ SubnetId = $gatewaySubnetId
+ PublicIpAddressId = $publicIPAddress.Id
+}
+$gatewayIpConfig = New-AzVirtualNetworkGatewayIpConfig @gatewayIpParams
+
+$vpnClientRootCertParams = @{
+ Name = 'P2SRootCert'
+ PublicCertData = $rootCertificate
+}
+$azRootCertificate = New-AzVpnClientRootCertificate @vpnClientRootCertParams
+
+$virtualNetGatewayParams = @{
+ ResourceGroupName = $resourceGroupName
+ Name = $vpnName
+ Location = $region
+ GatewaySku = 'VpnGw2'
+ IpConfigurations = $gatewayIpConfig
+ GatewayType = 'Vpn'
+ VpnType = 'RouteBased'
+ VpnClientAddressPool = '172.16.201.0/24'
+ VpnClientProtocol = 'IkeV2'
+ VpnClientRootCertificates = $azRootCertificate
+}
+$vpn = New-AzVirtualNetworkGateway @virtualNetGatewayParams
```
If not, use the following steps to identify the self-signed root certificate tha
1. Get a list of the certificates that are installed on your computer. ```powershell
- Get-ChildItem -Path "Cert:\CurrentUser\My"
+ Get-ChildItem -Path 'Cert:\CurrentUser\My'
``` 1. Locate the subject name from the returned list, then copy the thumbprint that's located next to it to a text file. In the following example, there are two certificates. The CN name is the name of the self-signed root certificate from which you want to generate a child certificate. In this case, it's called *P2SRootCert*.
- ```
+ ```Output
Thumbprint Subject - - AED812AD883826FF76B4D1D5A77B3C08EFA79F3F CN=P2SChildCert4
If not, use the following steps to identify the self-signed root certificate tha
1. Declare a variable for the root certificate using the thumbprint from the previous step. Replace THUMBPRINT with the thumbprint of the root certificate from which you want to generate a client certificate. ```powershell
- $rootcert = Get-ChildItem -Path "Cert:\CurrentUser\My\<THUMBPRINT>"
+ $rootcert = Get-ChildItem -Path 'Cert:\CurrentUser\My\<THUMBPRINT>'
``` For example, using the thumbprint for *P2SRootCert* in the previous step, the command looks like this: ```powershell
- $rootcert = Get-ChildItem -Path "Cert:\CurrentUser\My\7181AA8C1B4D34EEDB2F3D3BEC5839F3FE52D655"
+ $rootcert = Get-ChildItem -Path 'Cert:\CurrentUser\My\7181AA8C1B4D34EEDB2F3D3BEC5839F3FE52D655'
``` #### Generate a client certificate
Use the `New-AzVpnClientConfiguration` PowerShell cmdlet to generate a client ce
> [!IMPORTANT] > Run this PowerShell script as administrator from the on-premises Windows machine that you want to connect to the Azure file share. The computer must be running Windows 10/Windows Server 2016 or later. Don't run the script from a Cloud Shell in Azure. Make sure you sign in to your Azure account before running the script (`Connect-AzAccount`).
-```PowerShell
-$clientcertpassword = "1234"
-$resourceGroupName = "<resource-group-name>"
-$vpnName = "<vpn-gateway-name>"
-$vpnTemp = "C:\vpn-temp\"
-$certLocation = "Cert:\CurrentUser\My"
-
-$vpnClientConfiguration = New-AzVpnClientConfiguration `
- -ResourceGroupName $resourceGroupName `
- -Name $vpnName `
- -AuthenticationMethod EAPTLS
+```azurepowershell
+$clientcertpassword = '<enter-your-password>'
+$resourceGroupName = '<resource-group-name>'
+$vpnName = '<vpn-gateway-name>'
+$vpnTemp = 'C:\vpn-temp'
+$certLocation = 'Cert:\CurrentUser\My'
+
+$vpnClientConfigParams = @{
+ ResourceGroupName = $resourceGroupName
+ Name = $vpnName
+ AuthenticationMethod = 'EAPTLS'
+}
+$vpnClientConfiguration = New-AzVpnClientConfiguration @vpnClientConfigParams
-Invoke-WebRequest `
- -Uri $vpnClientConfiguration.VpnProfileSASUrl `
- -OutFile "$vpnTemp\vpnclientconfiguration.zip"
+$webRequestParams = @{
+ Uri = $vpnClientConfiguration.VpnProfileSASUrl
+ OutFile = "$vpnTemp\vpnclientconfiguration.zip"
+}
+Invoke-WebRequest @webRequestParams
-Expand-Archive `
- -Path "$vpnTemp\vpnclientconfiguration.zip" `
- -DestinationPath "$vpnTemp\vpnclientconfiguration"
+$expandArchiveParams = @{
+ Path = "$vpnTemp\vpnclientconfiguration.zip"
+ DestinationPath = "$vpnTemp\vpnclientconfiguration"
+}
+Expand-Archive @expandArchiveParams
$vpnGeneric = "$vpnTemp\vpnclientconfiguration\Generic" $vpnProfile = ([xml](Get-Content -Path "$vpnGeneric\VpnSettings.xml")).VpnProfile
-$exportedclientcertpath = $vpnTemp + "P2SClientCert.pfx"
-$clientcertname = "CN=" + $vpnProfile.VpnServer
-
-$clientcert = New-SelfSignedCertificate `
- -Type Custom `
- -DnsName $vpnProfile.VpnServer `
- -KeySpec Signature `
- -Subject $clientcertname `
- -KeyExportPolicy Exportable `
- -HashAlgorithm sha256 `
- -KeyLength 2048 `
- -CertStoreLocation $certLocation `
- -Signer $rootcert `
- -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")
+$exportedclientcertpath = "$vpnTemp\P2SClientCert.pfx"
+$clientcertname = "CN=$($vpnProfile.VpnServer)"
+
+$selfSignedCertParams = @{
+ Type = 'Custom'
+ DnsName = $vpnProfile.VpnServer
+ KeySpec = 'Signature'
+ Subject = $clientcertname
+ KeyExportPolicy = 'Exportable'
+ HashAlgorithm = 'sha256'
+ KeyLength = 2048
+ CertStoreLocation = $certLocation
+ Signer = $rootcert
+ TextExtension = @('2.5.29.37={text}1.3.6.1.5.5.7.3.2')
+}
+$clientcert = New-SelfSignedCertificate @selfSignedCertParams
$mypwd = ConvertTo-SecureString -String $clientcertpassword -Force -AsPlainText
-Export-PfxCertificate `
- -FilePath $exportedclientcertpath `
- -Password $mypwd `
- -Cert $clientcert | Out-Null
+Export-PfxCertificate -FilePath $exportedclientcertpath -Password $mypwd -Cert $clientcert |
+ Out-Null
``` ## Configure the VPN client
You can use the same VPN client configuration package on each Windows client com
1. On the **Connection status** page, select **Connect** to start the connection. If you see a **Select Certificate** screen, verify that the client certificate showing is the one that you want to use to connect. If it isn't, use the drop-down arrow to select the correct certificate, and then select **OK**.
-# [PowerShell](#tab/azure-powershell)
+# [Azure PowerShell](#tab/azure-powershell)
The following PowerShell script will install the client certificate required for authentication against the virtual network gateway, and then download and install the VPN package. Remember to replace `<computer1>` and `<computer2>` with the desired computers. You can run this script on as many machines as you desire by adding more PowerShell sessions to the `$sessions` array. Your user account must be an administrator on each of these machines. If one of these machines is the local machine you're running the script from, you must run the script from an elevated PowerShell session.
-```PowerShell
+```powershell
$sessions = [System.Management.Automation.Runspaces.PSSession[]]@()
-$sessions += New-PSSession -ComputerName "<computer1>"
-$sessions += New-PSSession -ComputerName "<computer2>"
+$sessions += New-PSSession -ComputerName '<computer1>'
+$sessions += New-PSSession -ComputerName '<computer2>'
foreach ($session in $sessions) { Invoke-Command -Session $session -ArgumentList $vpnTemp -ScriptBlock { $vpnTemp = $args[0]
- if (-Not (Test-Path $vpnTemp)) {
- New-Item `
- -ItemType Directory `
- -Force `
- -Path "C:\vpn-temp" | Out-Null
+ if (-Not (Test-Path -Path $vpnTemp -PathType Container)) {
+ New-Item -ItemType Directory -Force -Path 'C:\vpn-temp' | Out-Null
} }
- Copy-Item `
- -Path $exportedclientcertpath, $exportedrootcertpath, "$vpnTemp\vpnclientconfiguration.zip" `
- -Destination $vpnTemp `
- -ToSession $session
-
- Invoke-Command `
- -Session $session `
- -ArgumentList `
- $mypwd, `
- $vpnTemp, `
- $virtualNetworkName `
- -ScriptBlock {
- $mypwd = $args[0]
- $vpnTemp = $args[1]
- $virtualNetworkName = $args[2]
-
- Import-PfxCertificate `
- -Exportable `
- -Password $mypwd `
- -CertStoreLocation "Cert:\LocalMachine\My" `
- -FilePath "$vpnTemp\P2SClientCert.pfx" | Out-Null
-
- Import-Certificate `
- -FilePath "$vpnTemp\P2SRootCert.cer" `
- -CertStoreLocation "Cert:\LocalMachine\Root" | Out-Null
-
- Expand-Archive `
- -Path "$vpnTemp\vpnclientconfiguration.zip" `
- -DestinationPath "$vpnTemp\vpnclientconfiguration"
- $vpnGeneric = "$vpnTemp\vpnclientconfiguration\Generic"
-
- $vpnProfile = ([xml](Get-Content -Path "$vpnGeneric\VpnSettings.xml")).VpnProfile
-
- Add-VpnConnection `
- -Name $virtualNetworkName `
- -ServerAddress $vpnProfile.VpnServer `
- -TunnelType Ikev2 `
- -EncryptionLevel Required `
- -AuthenticationMethod MachineCertificate `
- -SplitTunneling `
- -AllUserConnection
-
- Add-VpnConnectionRoute `
- -Name $virtualNetworkName `
- -DestinationPrefix $vpnProfile.Routes `
- -AllUserConnection
-
- Add-VpnConnectionRoute `
- -Name $virtualNetworkName `
- -DestinationPrefix $vpnProfile.VpnClientAddressPool `
- -AllUserConnection
-
- rasdial $virtualNetworkName
+ $copyItemParams = @{
+ Path = @(
+ $exportedclientcertpath,
+ $exportedrootcertpath,
+ "$vpnTemp\vpnclientconfiguration.zip"
+ )
+ Destination = $vpnTemp
+ ToSession = $session
+ }
+ Copy-Item @copyItemParams
+
+ $invokeCmdParams = @{
+ Session = $session
+ ArgumentList = @($mypwd, $vpnTemp, $virtualNetworkName)
+ }
+ Invoke-Command @invokeCmdParams -ScriptBlock {
+ $mypwd = $args[0]
+ $vpnTemp = $args[1]
+ $virtualNetworkName = $args[2]
+
+ $pfxCertParams = @{
+ Exportable = $true
+ Password = $mypwd
+ CertStoreLocation = 'Cert:\LocalMachine\My'
+ FilePath = "$vpnTemp\P2SClientCert.pfx"
+ }
+ Import-PfxCertificate @pfxCertParams | Out-Null
+
+ $importCertParams = @{
+ FilePath = "$vpnTemp\P2SRootCert.cer"
+ CertStoreLocation = "Cert:\LocalMachine\Root"
+ }
+ Import-Certificate @importCertParams | Out-Null
+
+ $vpnGenericParams = @{
+ Path = "$vpnTemp\vpnclientconfiguration.zip"
+ DestinationPath = "$vpnTemp\vpnclientconfiguration"
+ }
+ Expand-Archive @vpnGenericParams
+
+ $vpnGeneric = "$vpnTemp\vpnclientconfiguration\Generic"
+
+ $vpnProfile = ([xml](Get-Content -Path "$vpnGeneric\VpnSettings.xml")).VpnProfile
+
+ $vpnConnectionParams = @{
+ Name = $virtualNetworkName
+ ServerAddress = $vpnProfile.VpnServer
+ TunnelType = 'Ikev2'
+ EncryptionLevel = 'Required'
+ AuthenticationMethod = 'MachineCertificate'
+ SplitTunneling = $true
+ AllUserConnection = $true
+ }
+ Add-VpnConnection @vpnConnectionParams
+
+ $vpnConnRoute1Params = @{
+ Name = $virtualNetworkName
+ DestinationPrefix = $vpnProfile.Routes
+ AllUserConnection = $true
}
+ Add-VpnConnectionRoute @vpnConnRoute1Params
+
+ $vpnConnRoute2Params = @{
+ Name = $virtualNetworkName
+ DestinationPrefix = $vpnProfile.VpnClientAddressPool
+ AllUserConnection = $true
+ }
+ Add-VpnConnectionRoute @vpnConnRoute2Params
+
+ rasdial $virtualNetworkName
+ }
} Remove-Item -Path $vpnTemp -Recurse
To mount the file share using your storage account key, open a Windows command p
net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:localhost\<YourStorageAccountName> <YourStorageAccountKey> ```
-# [PowerShell](#tab/azure-powershell)
+# [Azure PowerShell](#tab/azure-powershell)
The following PowerShell script will mount the share, list the root directory of the share to prove the share is actually mounted, and then unmount the share. > [!NOTE] > It isn't possible to mount the share persistently over PowerShell remoting. To mount persistently, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md).
-```PowerShell
-$myShareToMount = "<file-share>"
+```azurepowershell
+$myShareToMount = '<file-share>'
-$storageAccountKeys = Get-AzStorageAccountKey `
- -ResourceGroupName $resourceGroupName `
- -Name $storageAccountName
-$storageAccountKey = ConvertTo-SecureString `
- -String $storageAccountKeys[0].Value `
- -AsPlainText `
- -Force
+$storageAccountKeyParams = @{
+ ResourceGroupName = $resourceGroupName
+ Name = $storageAccountName
+}
+$storageAccountKeys = Get-AzStorageAccountKey @storageAccountKeyParams
+
+$convertToSecureStringParams = @{
+ String = $storageAccountKeys[0].Value
+ AsPlainText = $true
+ Force = $true
+}
+$storageAccountKey = ConvertTo-SecureString @convertToSecureStringParams
+
+$getAzNetworkInterfaceParams = @{
+ ResourceId = $privateEndpoint.NetworkInterfaces[0].Id
+}
+$nic = Get-AzNetworkInterface @getAzNetworkInterfaceParams
-$nic = Get-AzNetworkInterface -ResourceId $privateEndpoint.NetworkInterfaces[0].Id
$storageAccountPrivateIP = $nic.IpConfigurations[0].PrivateIpAddress
-Invoke-Command `
- -Session $sessions `
- -ArgumentList `
- $storageAccountName, `
- $storageAccountKey, `
- $storageAccountPrivateIP, `
- $myShareToMount `
- -ScriptBlock {
- $storageAccountName = $args[0]
- $storageAccountKey = $args[1]
- $storageAccountPrivateIP = $args[2]
- $myShareToMount = $args[3]
-
- $credential = [System.Management.Automation.PSCredential]::new(
- "AZURE\$storageAccountName",
- $storageAccountKey)
-
- New-PSDrive `
- -Name Z `
- -PSProvider FileSystem `
- -Root "\\$storageAccountPrivateIP\$myShareToMount" `
- -Credential $credential `
- -Persist | Out-Null
- Get-ChildItem -Path Z:\
- Remove-PSDrive -Name Z
+$invokeCmdParams = @{
+ Session = $sessions
+ ArgumentList = @(
+ $storageAccountName,
+ $storageAccountKey,
+ $storageAccountPrivateIP,
+ $myShareToMount
+ )
+}
+Invoke-Command @invokeCmdParams -ScriptBlock {
+ $storageAccountName = $args[0]
+ $storageAccountKey = $args[1]
+ $storageAccountPrivateIP = $args[2]
+ $myShareToMount = $args[3]
+
+ $credential = [System.Management.Automation.PSCredential]::new(
+ "AZURE\$storageAccountName",
+ $storageAccountKey
+ )
+
+ $psDriveParams = @{
+ Name = 'Z'
+ PSProvider = 'FileSystem'
+ Root = "\\$storageAccountPrivateIP\$myShareToMount"
+ Credential = $credential
+ Persist = $true
}
+ New-PSDrive @psDriveParams | Out-Null
+
+ Get-ChildItem -Path Z:\
+ Remove-PSDrive -Name Z
+}
```
If a root certificate needs to be rotated due to expiration or new requirements,
Replace `<resource-group-name>`, `<desired-vpn-name-here>`, and `<new-root-cert-name>` with your own values, then run the script.
-```PowerShell
+```azurepowershell
#Creating the new Root Certificate
-$ResourceGroupName = "<resource-group-name>"
-$vpnName = "<desired-vpn-name-here>"
-$NewRootCertName = "<new-root-cert-name>"
-
-$rootcertname = "CN=$NewRootCertName"
-$certLocation = "Cert:\CurrentUser\My"
-$date = get-date -Format "MM_yyyy"
-$vpnTemp = "C:\vpn-temp_$date\"
-$exportedencodedrootcertpath = $vpnTemp + "P2SRootCertencoded.cer"
-$exportedrootcertpath = $vpnTemp + "P2SRootCert.cer"
-
-if (-Not (Test-Path $vpnTemp)) {
+$ResourceGroupName = '<resource-group-name>'
+$vpnName = '<desired-vpn-name-here>'
+$NewRootCertName = '<new-root-cert-name>'
+$rootcertname = "CN=$NewRootCertName"
+$certLocation = 'Cert:\CurrentUser\My'
+$date = Get-Date -Format 'MM_yyyy'
+$vpnTemp = "C:\vpn-temp_$date"
+$exportedencodedrootcertpath = "$vpnTemp\P2SRootCertencoded.cer"
+$exportedrootcertpath = "$vpnTemp\P2SRootCert.cer"
+
+if (-Not (Test-Path -Path $vpnTemp -PathType Container)) {
New-Item -ItemType Directory -Force -Path $vpnTemp | Out-Null }
-$rootcert = New-SelfSignedCertificate `
- -Type Custom `
- -KeySpec Signature `
- -Subject $rootcertname `
- -KeyExportPolicy Exportable `
- -HashAlgorithm sha256 `
- -KeyLength 2048 `
- -CertStoreLocation $certLocation `
- -KeyUsageProperty Sign `
- -KeyUsage CertSign
-
-Export-Certificate `
- -Cert $rootcert `
- -FilePath $exportedencodedrootcertpath `
- -NoClobber | Out-Null
+$selfSignedCertParams = @{
+ Type = 'Custom'
+ KeySpec = 'Signature'
+ Subject = $rootcertname
+ KeyExportPolicy = 'Exportable'
+ HashAlgorithm = 'sha256'
+ KeyLength = 2048
+ CertStoreLocation = $certLocation
+ KeyUsageProperty = 'Sign'
+ KeyUsage = 'CertSign'
+}
+$rootcert = New-SelfSignedCertificate @selfSignedCertParams
+
+$exportCertParams = @{
+ Cert = $rootcert
+ FilePath = $exportedencodedrootcertpath
+ NoClobber = $true
+}
+Export-Certificate @exportCertParams | Out-Null
certutil -encode $exportedencodedrootcertpath $exportedrootcertpath | Out-Null $rawRootCertificate = Get-Content -Path $exportedrootcertpath
-[System.String]$rootCertificate = ""
+$rootCertificate = ''
+ foreach($line in $rawRootCertificate) {
- if ($line -notlike "*Certificate*") {
+ if ($line -notlike '*Certificate*') {
$rootCertificate += $line } }
foreach($line in $rawRootCertificate) {
#Fetching gateway details and adding the newly created Root Certificate. $gateway = Get-AzVirtualNetworkGateway -Name $vpnName -ResourceGroupName $ResourceGroupName
-Add-AzVpnClientRootCertificate `
- -PublicCertData $rootCertificate `
- -ResourceGroupName $ResourceGroupName `
- -VirtualNetworkGatewayName $gateway `
- -VpnClientRootCertificateName $NewRootCertName
-
+$vpnClientRootCertParams = @{
+ PublicCertData = $rootCertificate
+ ResourceGroupName = $ResourceGroupName
+ VirtualNetworkGatewayName = $gateway.Name
+ VpnClientRootCertificateName = $NewRootCertName
+}
+Add-AzVpnClientRootCertificate @vpnClientRootCertParams
``` ## See also
Add-AzVpnClientRootCertificate `
- [Configure server settings for P2S VPN Gateway connections](../../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md) - [Networking considerations for direct Azure file share access](storage-files-networking-overview.md) - [Configure a point-to-site (P2S) VPN on Linux for use with Azure Files](storage-files-configure-p2s-vpn-linux.md)-- [Configure a site-to-site (S2S) VPN for use with Azure Files](storage-files-configure-s2s-vpn.md)
+- [Configure a site-to-site (S2S) VPN for use with Azure Files](storage-files-configure-s2s-vpn.md)
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
synapse-analytics Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md
Title: Backup and restore - snapshots, geo-redundant
+ Title: Backup and restore - snapshots, geo-redundant
description: Learn how backup and restore works in Azure Synapse Analytics dedicated SQL pool. Use backups to restore your data warehouse to a restore point in the primary region. Use geo-redundant backups to restore to a different geographical region. + + Last updated : 01/31/2024 + - Previously updated : 11/30/2022---
-# Backup and restore in Azure Synapse Dedicated SQL pool
+# Backup and restore dedicated SQL pools in Azure Synapse Analytics
+
+In this article, you'll learn how to use backup and restore in Azure Synapse dedicated SQL pool.
-Learn how to use backup and restore in Azure Synapse Dedicated SQL pool. Use dedicated SQL pool restore points to recover or copy your data warehouse to a previous state in the primary region. Use data warehouse geo-redundant backups to restore to a different geographical region.
+Use dedicated SQL pool restore points to recover or copy your data warehouse to a previous state in the primary region. Use data warehouse geo-redundant backups to restore to a different geographical region.
+
+> [!NOTE]
+> Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. To enable workspace features for an existing dedicated SQL pool (formerly SQL DW) refer to [How to enable a workspace for your dedicated SQL pool (formerly SQL DW)](workspace-connected-create.md). For more information, see [What's the difference between Azure Synapse dedicated SQL pools (formerly SQL DW) and dedicated SQL pools in an Azure Synapse Analytics workspace?](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/msft-docs-what-s-the-difference-between-synapse-formerly-sql-dw/ba-p/3597772).
## What is a data warehouse snapshot
A *data warehouse snapshot* creates a restore point you can leverage to recover
A *data warehouse restore* is a new data warehouse that is created from a restore point of an existing or deleted data warehouse. Restoring your data warehouse is an essential part of any business continuity and disaster recovery strategy because it re-creates your data after accidental corruption or deletion. Data warehouse snapshot is also a powerful mechanism to create copies of your data warehouse for test or development purposes. > [!NOTE]
-> Dedicated SQL pool Recovery Time Objective (RTO) rates can vary. Factors that may affect the recovery (restore) time:
+> Dedicated SQL pool Recovery Time Objective (RTO) rates can vary. Factors that might affect the recovery (restore) time:
> - The database size
-> - The location of the source and target data warehouse (i.e., geo-restore)
+> - The location of the source and target data warehouse (in the case of a geo-restore)
## Automatic Restore Points
-Snapshots are a built-in feature that creates restore points. You do not have to enable this capability. However, the dedicated SQL pool should be in an active state for restore point creation. If it is paused frequently, automatic restore points may not be created so make sure to create user-defined restore point before pausing the dedicated SQL pool. Automatic restore points currently cannot be deleted by users as the service uses these restore points to maintain SLAs for recovery.
+Snapshots are a built-in feature that creates restore points. You do not have to enable this capability. However, the dedicated SQL pool should be in an active state for restore point creation. If it is paused frequently, automatic restore points might not be created, so make sure to create a user-defined restore point before pausing the dedicated SQL pool. Automatic restore points currently cannot be deleted by users, because the service uses these restore points to maintain SLAs for recovery.
Snapshots of your data warehouse are taken throughout the day creating restore points that are available for seven days. This retention period cannot be changed. Dedicated SQL pool supports an eight-hour recovery point objective (RPO). You can restore your data warehouse in the primary region from any one of the snapshots taken in the past seven days. To see when the last snapshot started, run this query on your online dedicated SQL pool. ```sql
-select top 1 *
-from sys.pdw_loader_backup_runs
-order by run_id desc
-;
+SELECT TOP 1 *
+FROM sys.pdw_loader_backup_runs
+ORDER BY run_id desc;
```+ > [!NOTE]
-> Backups occur every four (4) hours to meet an eight (8) hour SLA. Therefore, the sys.pdw_loader_backup_runs dynamic management view will display backup activity every four (4) hours.
+> Backups occur every four (4) hours to meet an eight (8) hour SLA. Therefore, the `sys.pdw_loader_backup_runs` dynamic management view will display backup activity every four (4) hours.
-## User-Defined Restore Points
+## User-defined restore points
-This feature enables you to manually trigger snapshots to create restore points of your data warehouse before and after large modifications. This capability ensures that restore points are logically consistent, which provides additional data protection in case of any workload interruptions or user errors for quick recovery time. User-defined restore points are available for seven days and are automatically deleted on your behalf. You cannot change the retention period of user-defined restore points. **42 user-defined restore points** are guaranteed at any point in time so they must be [deleted](#delete-user-defined-restore-points) before creating another restore point. You can trigger snapshots to create user-defined restore points through [PowerShell](/powershell/module/az.synapse/new-azsynapsesqlpoolrestorepoint?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.jsont#examples) or the Azure portal.
+This feature enables you to manually trigger snapshots to create restore points of your data warehouse before and after large modifications. This capability ensures that restore points are logically consistent, which provides additional data protection and quick recovery in case of workload interruptions or user errors. User-defined restore points are available for seven days and are automatically deleted on your behalf. You cannot change the retention period of user-defined restore points. **42 user-defined restore points** are guaranteed at any point in time, so they must be [deleted](#delete-user-defined-restore-points) before creating another restore point. You can trigger snapshots to create user-defined restore points by using the Azure portal or programmatically by using the [PowerShell or REST APIs](#create-user-defined-restore-points).
+
+- For more information on user-defined restore points in a standalone dedicated SQL pool (formerly SQL DW), see [User-defined restore points for a dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-restore-points.md).
+- For more information on user-defined restore points in a dedicated SQL pool in a Synapse workspace, see [User-defined restore points in Azure Synapse Analytics](../backuprestore/sqlpool-create-restore-point.md).
> [!NOTE]
-> If you require restore points longer than 7 days, please vote for this capability [here](https://feedback.azure.com/d365community/idea/4c446fd9-0b25-ec11-b6e6-000d3a4f07b8).
+> If you require restore points longer than 7 days, please [vote for this capability](https://feedback.azure.com/d365community/idea/4c446fd9-0b25-ec11-b6e6-000d3a4f07b8).
> [!NOTE] > In case you're looking for a long-term retention (LTR) backup concept: > 1. Create a new user-defined restore point, or you can use one of the automatically generated restore points.
-> 2. Restore from the newly created restore point to a new data warehouse.
-> 3. After you have restored, you have the dedicated SQL pool online. Pause it indefinitely to save compute costs. The paused database incurs storage charges at the Azure Synapse storage rate.
+> 1. Restore from the newly created restore point to a new data warehouse.
+> 1. After you have restored, you have the dedicated SQL pool online. Pause it indefinitely to save compute costs. The paused database incurs storage charges at the Azure Synapse storage rate.
> > If you need an active copy of the restored data warehouse, you can resume, which should take only a few minutes.
-### Delete User Defined Restore Points
+### Create user-defined restore points
+
+You can create a new user-defined restore point programmatically. Choose the correct method based on the SQL pool you are using: either a standalone dedicated SQL pool (formerly SQL DW), or a dedicated SQL pool within a Synapse workspace.
+
+**Azure PowerShell**
+- For dedicated SQL pool (formerly SQL DW), use [New-AzSqlDatabaseRestorePoint](/powershell/module/az.sql/new-azsqldatabaserestorepoint)
+- For dedicated SQL pool (within Synapse workspace), use [New-AzSynapseSqlPoolRestorePoint](/powershell/module/az.synapse/new-azsynapsesqlpoolrestorepoint)
+
+**REST APIs**
+- For dedicated SQL pool (formerly SQL DW), use [Restore Points - Create](/rest/api/sql/restore-points/create)
+- For dedicated SQL pool (within Synapse workspace), use [Sql Pool Restore Points - Create](/rest/api/synapse/resourcemanager/sql-pool-restore-points/create)
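As an illustration, here is a minimal PowerShell sketch of the two cmdlets listed above; the resource group, server, workspace, and pool names are placeholders.

```powershell
# Dedicated SQL pool (formerly SQL DW); names are placeholders.
New-AzSqlDatabaseRestorePoint -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "mySqlDw" `
    -RestorePointLabel "before-large-load"

# Dedicated SQL pool in a Synapse workspace; names are placeholders.
New-AzSynapseSqlPoolRestorePoint -WorkspaceName "myworkspace" `
    -Name "mySqlPool" `
    -RestorePointLabel "before-large-load"
```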
-To delete a specific user-defined restore point programmatically, verify the below methods. It is crucial to use the correct method based on the SQL Pool you are using: either a formerly SQL DW or a SQL Pool within a Synapse workspace.
+### Delete user-defined restore points
+You can delete a specific user-defined restore point programmatically. Choose the correct method based on the SQL pool you are using: either a standalone dedicated SQL pool (formerly SQL DW), or a dedicated SQL pool within a Synapse workspace.
**Azure PowerShell** - For dedicated SQL pool (formerly SQL DW), use [Remove-AzSqlDatabaseRestorePoint](/powershell/module/az.sql/remove-azsqldatabaserestorepoint)
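For example, a minimal PowerShell sketch for a dedicated SQL pool (formerly SQL DW) that lists the restore points and removes the oldest user-defined one; the names are placeholders, and the `DISCRETE` filter assumes user-defined restore points are reported with that restore point type.

```powershell
# A sketch with placeholder names; requires the Az.Sql module.
$restorePoints = Get-AzSqlDatabaseRestorePoint -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "mySqlDw"

# Pick the oldest user-defined restore point (assumed to be reported as DISCRETE).
$oldest = $restorePoints |
    Where-Object { $_.RestorePointType -eq "DISCRETE" } |
    Sort-Object RestorePointCreationDate |
    Select-Object -First 1

Remove-AzSqlDatabaseRestorePoint -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "mySqlDw" `
    -RestorePointCreationDate $oldest.RestorePointCreationDate
```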
To delete a specific user-defined restore point programmatically, verify the bel
The following list details the restore point retention periods: 1. Dedicated SQL pool deletes a restore point when it hits the 7-day retention period **and** when there are at least 42 total restore points (including both user-defined and automatic).
-2. Snapshots are not taken when a dedicated SQL pool is paused.
-3. The age of a restore point is measured by the absolute calendar days from the time the restore point is taken including when the SQL pool is paused.
-4. At any point in time, a dedicated SQL pool is guaranteed to be able to store up to 42 user-defined restore points or 42 automatic restore points as long as these restore points have not reached the 7-day retention period
-5. If a snapshot is taken, the dedicated SQL pool is then paused for greater than 7 days, and then resumed, the restore point will persist until there are 42 total restore points (including both user-defined and automatic)
+1. Snapshots are not taken when a dedicated SQL pool is paused.
+1. The age of a restore point is measured by the absolute calendar days from the time the restore point is taken including when the SQL pool is paused.
+1. At any point in time, a dedicated SQL pool is guaranteed to be able to store up to 42 user-defined restore points or 42 automatic restore points as long as these restore points have not reached the 7-day retention period.
+1. If a snapshot is taken and the dedicated SQL pool is then paused for more than 7 days and later resumed, the restore point will persist until there are 42 total restore points (including both user-defined and automatic).
### Snapshot retention when a SQL pool is dropped
When you drop a dedicated SQL pool, a final snapshot is created and saved for se
A geo-backup is created once per day to a [paired data center](../../availability-zones/cross-region-replication-azure.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json). The RPO for a geo-restore is 24 hours. A geo-restore is always a data movement operation and the RTO will depend on the data size. Only the latest geo-backup is retained. You can restore the geo-backup to a server in any other region where dedicated SQL pool is supported. A geo-backup ensures you can restore data warehouse in case you cannot access the restore points in your primary region.
-If you do not require geo-backups for your dedicated SQL pool, you can disable them and save on disaster recovery storage costs. To do so, refer to [How to guide: Disable geo-backups for a dedicated SQL pool (formerly SQL DW)](disable-geo-backup.md). Note that if you disable geo-backups, you will not be able to recover your dedicated SQL pool to your paired Azure region if your primary Azure data center is unavailable.
+If you do not require geo-backups for your dedicated SQL pool, you can disable them and save on disaster recovery storage costs. To do so, refer to [How to guide: Disable geo-backups for a dedicated SQL pool (formerly SQL DW)](disable-geo-backup.md). If you disable geo-backups, you will not be able to recover your dedicated SQL pool to your paired Azure region if your primary Azure data center is unavailable.
> [!NOTE]
-> If you require a shorter RPO for geo-backups, vote for this capability [here](https://feedback.azure.com/d365community/idea/dc4975e5-0b25-ec11-b6e6-000d3a4f07b8). You can also create a user-defined restore point and restore from the newly created restore point to a new data warehouse in a different region. After you have restored, you have the data warehouse online and can pause it indefinitely to save compute costs. The paused database incurs storage charges at the Azure Premium Storage rate. Another common pattern for a shorter recovery point is to ingest data into primary and secondary instances of a data warehouse in parallel. In this scenario, data is ingested from a source (or sources) and persisted to two separate instances of the data warehouse (primary and secondary). To save on compute costs, you can pause the secondary instance of the warehouse. If you need an active copy of the data warehouse, you can resume, which should take only a few minutes.
+> If you require a shorter RPO for geo-backups, [vote for this capability](https://feedback.azure.com/d365community/idea/dc4975e5-0b25-ec11-b6e6-000d3a4f07b8). You can also create a user-defined restore point and restore from the newly created restore point to a new data warehouse in a different region. After you have restored, you have the data warehouse online and can pause it indefinitely to save compute costs. The paused database incurs storage charges at the Azure Premium Storage rate. Another common pattern for a shorter recovery point is to ingest data into primary and secondary instances of a data warehouse in parallel. In this scenario, data is ingested from a source (or sources) and persisted to two separate instances of the data warehouse (primary and secondary). To save on compute costs, you can pause the secondary instance of the warehouse. If you need an active copy of the data warehouse, you can resume, which should take only a few minutes.
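As an illustration, the following minimal PowerShell sketch restores the latest geo-backup of a dedicated SQL pool (formerly SQL DW) to a server in another region; the resource names and the performance level are placeholders.

```powershell
# A sketch with placeholder names; requires the Az.Sql module.
$geoBackup = Get-AzSqlDatabaseGeoBackup -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "mySqlDw"

# The target server can be in any region where dedicated SQL pool is supported.
Restore-AzSqlDatabase -FromGeoBackup `
    -ResourceGroupName "myTargetResourceGroup" `
    -ServerName "mytargetserver" `
    -TargetDatabaseName "mySqlDw_geo_restored" `
    -ResourceId $geoBackup.ResourceId `
    -ServiceObjectiveName "DW500c"
```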
-## Data residency
+## Data residency
If your paired data center is located outside of your country/region, you can ensure that your data stays within your region by provisioning your database on locally redundant storage (LRS). If your database has already been provisioned on RA-GRS (read-access geo-redundant storage, the current default), you can opt out of geo-backups; however, your database will continue to reside on storage that is replicated to a regional pair. To ensure that customer data stays within your region, you can provision or restore your dedicated SQL pool to locally redundant storage. For more information on how to provision or restore to locally redundant storage, see [How-to guide for configuring single region residency for a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics](single-region-residency.md).
If you are using geo-redundant storage, you receive a separate storage charge. T
For more information about Azure Synapse pricing, see [Azure Synapse pricing](https://azure.microsoft.com/pricing/details/sql-data-warehouse/gen2/). You are not charged for data egress when restoring across regions.
-## Restoring from restore points
+## <a id="restoring-from-restore-points"></a> Restore from restore points
Each snapshot creates a restore point that represents the time the snapshot started. To restore a data warehouse, you choose a restore point and issue a restore command. You can either keep the restored data warehouse and the current one, or delete one of them. If you want to replace the current data warehouse with the restored data warehouse, you can rename it using [ALTER DATABASE](/sql/t-sql/statements/alter-database-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) with the MODIFY NAME option.
-To restore a data warehouse, see [Restore a dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-restore-points.md#create-user-defined-restore-points-through-the-azure-portal).
+- To restore a standalone dedicated SQL pool (formerly SQL DW), see [Restore a dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-restore-points.md#create-user-defined-restore-points-through-the-azure-portal).
+- To restore a dedicated SQL pool in a Synapse workspace, see [Restore an existing dedicated SQL pool](../backuprestore/restore-sql-pool.md).
-To restore a deleted data warehouse, see [Restore a deleted database (formerly SQL DW)](sql-data-warehouse-restore-deleted-dw.md), or if the entire server was deleted, see [Restore a data warehouse from a deleted server (formerly SQL DW)](sql-data-warehouse-restore-from-deleted-server.md).
+- To restore a deleted standalone dedicated SQL pool (formerly SQL DW), see [Restore a deleted database (formerly SQL DW)](sql-data-warehouse-restore-deleted-dw.md), or if the entire server was deleted, see [Restore a data warehouse from a deleted server (formerly SQL DW)](sql-data-warehouse-restore-from-deleted-server.md).
+- To restore a deleted dedicated SQL pool in a Synapse workspace, see [Restore a dedicated SQL pool from a deleted workspace](../backuprestore/restore-sql-pool-from-deleted-workspace.md).
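If you prefer to script the restore of an existing dedicated SQL pool (formerly SQL DW), here is a minimal PowerShell sketch; the names and the restore point time are placeholders.

```powershell
# A sketch with placeholder names; requires the Az.Sql module.
$sourceDb = Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" `
    -ServerName "myserver" `
    -DatabaseName "mySqlDw"

# Restore to a new database from a restore point taken at the given time (UTC).
Restore-AzSqlDatabase -FromPointInTimeBackup `
    -PointInTime "2024-01-30T08:00:00Z" `
    -ResourceGroupName $sourceDb.ResourceGroupName `
    -ServerName $sourceDb.ServerName `
    -TargetDatabaseName "mySqlDw_restored" `
    -ResourceId $sourceDb.ResourceId
```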
> [!NOTE] > Table-level restore is not supported in dedicated SQL pools. You can only recover an entire database from your backup, and then copy the required table(s) by using
To restore a deleted data warehouse, see [Restore a deleted database (formerly S
> - Export the data from the restored backup into your Data Lake by using CETAS [CETAS Example](/sql/t-sql/statements/create-external-table-as-select-transact-sql?view=sql-server-linux-ver16&preserve-view=true#d-use-create-external-table-as-select-exporting-data-as-parquet) > - Import the data by using [COPY](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true) or [Polybase](../sql/load-data-overview.md#options-for-loading-with-polybase)
-## Cross subscription restore
+## Cross-subscription restore
-You can perform a cross-subscription restore by follow the guidance [here](sql-data-warehouse-restore-active-paused-dw.md#restore-an-existing-dedicated-sql-pool-formerly-sql-dw-to-a-different-subscription-through-powershell).
+You can perform a [cross-subscription restore](sql-data-warehouse-restore-active-paused-dw.md#restore-an-existing-dedicated-sql-pool-formerly-sql-dw-to-a-different-subscription-through-powershell).
## Geo-redundant restore
You can [restore your dedicated SQL pool](sql-data-warehouse-restore-from-geo-ba
> [!NOTE] > To perform a geo-redundant restore you must not have opted out of this feature.
-## Support Process
+## Support process
You can [submit a support ticket](sql-data-warehouse-get-started-create-support-ticket.md) through the Azure portal for Azure Synapse Analytics.
-## Next steps
+## Related content
-For more information about restore points, see [User-defined restore points](sql-data-warehouse-restore-points.md)
+- [What is dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics?](sql-data-warehouse-overview-what-is.md)
synapse-analytics Reference Collation Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/reference-collation-types.md
Title: Collation support
-description: Collation types support for Synapse SQL in Azure Synapse Analytics
+description: Collation types support for Synapse SQL in Azure Synapse Analytics.
--- Previously updated : 02/15/2023 Last updated : 01/30/2024+++
-# Database collation support for Synapse SQL in Azure Synapse Analytics
+# Database collation support for Synapse SQL in Azure Synapse Analytics
Collations provide the locale, code page, sort order, and character sensitivity rules for character-based data types. Once chosen, all columns and expressions requiring collation information inherit the chosen collation from the database setting. The default inheritance can be overridden by explicitly stating a different collation for a character-based data type.
The following table shows which collation types are supported by which service.
|:--:|:-:|:--:|::|::| | Non-UTF-8 Collations | Yes | Yes | Yes | Yes | | UTF-8 | Yes | Yes | No | No |
-| Japanese_Bushu_Kakusu_140_* | Yes | Yes | No | No |
-| Japanese_XJIS_140_* | Yes | Yes | No | No |
-| SQL_EBCDIC1141_CP1_CS_AS | No | No | No | No |
-| SQL_EBCDIC277_2_CP1_CS_AS | No | No | No | No |
+| `Japanese_Bushu_Kakusu_140_*` | Yes | Yes | No | No |
+| `Japanese_XJIS_140_*` | Yes | Yes | No | No |
+| `SQL_EBCDIC1141_CP1_CS_AS` | No | No | No | No |
+| `SQL_EBCDIC277_2_CP1_CS_AS` | No | No | No | No |
## Check the current collation
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS Collation;
When you pass 'Collation' as the property parameter, the DATABASEPROPERTYEX function returns the current collation for the specified database. For more information, see [DATABASEPROPERTYEX](/sql/t-sql/functions/databasepropertyex-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
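If you prefer to run the same check from PowerShell, here is a minimal sketch; it assumes the SqlServer module (for `Invoke-Sqlcmd`) and Microsoft Entra token authentication, and the server and pool names are placeholders. Token handling can vary across Az module versions.

```powershell
# A sketch with placeholder names; assumes the SqlServer and Az.Accounts modules.
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net").Token

Invoke-Sqlcmd -ServerInstance "myworkspace.sql.azuresynapse.net" `
    -Database "mydedicatedpool" `
    -AccessToken $token `
    -Query "SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS Collation;"
```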
-## Next steps
+## Check supported collation
+
+To check the list of supported collations in your dedicated SQL pool:
+
+```sql
+USE master
+GO
+SELECT * FROM sys.fn_helpcollations();
+```
+
+Run the [sys.fn_helpcollations](/sql/relational-databases/system-functions/sys-fn-helpcollations-transact-sql?view=azure-sqldw-latest&preserve-view=true) function from the `master` database.
+
+## Related content
Additional information on best practices for dedicated SQL pool and serverless SQL pool can be found in the following articles: -- [Best practices for dedicated SQL pool](./best-practices-dedicated-sql-pool.md)-- [Best practices for serverless SQL pool](./best-practices-serverless-sql-pool.md)
+- [Best practices for dedicated SQL pools in Azure Synapse Analytics](best-practices-dedicated-sql-pool.md)
+- [Best practices for serverless SQL pool in Azure Synapse Analytics](best-practices-serverless-sql-pool.md)
synapse-analytics Synapse Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-service-identity.md
You can easily execute Synapse Spark Notebooks with the system assigned managed
![synapse-run-as-msi-3](https://user-images.githubusercontent.com/81656932/179053008-0f495b93-4948-48c8-9496-345c58187502.png)
+>[!NOTE]
+> Synapse notebooks and Spark job definitions only support the use of system-assigned managed identity through linked services and the [mssparkutils APIs](./spark/apache-spark-secure-credentials-with-tokenlibrary.md). MSAL and other authentication libraries can't use the system-assigned managed identity. You can instead generate a service principal and store the credentials in Key Vault.
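If you take the service principal route suggested in the note, the following is a minimal PowerShell sketch; the names are placeholders, and the property that exposes the generated secret can vary with your Az.Resources version.

```powershell
# A sketch with placeholder names; assumes recent Az.Resources and Az.KeyVault modules.
$sp = New-AzADServicePrincipal -DisplayName "synapse-notebook-sp"

# The generated client secret; the property name can vary by Az.Resources version.
$secret = $sp.PasswordCredentials.SecretText

# Store the secret in Key Vault so notebooks can retrieve it through a linked service.
Set-AzKeyVaultSecret -VaultName "my-keyvault" `
    -Name "synapse-notebook-sp-secret" `
    -SecretValue (ConvertTo-SecureString $secret -AsPlainText -Force)
```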
+ ## User-assigned managed identity You can create, delete, and manage user-assigned managed identities in Microsoft Entra ID. For more details, see [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
update-manager Guidance Migration Automation Update Management Azure Update Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/guidance-migration-automation-update-management-azure-update-manager.md
description: Guidance overview on migration from Automation Update Management to
Previously updated : 12/13/2023 Last updated : 01/23/2024
This article provides guidance to move virtual machines from Automation Update Management to Azure Update Manager.
-Azure Update Manager provides a SaaS solution to manage and govern software updates to Windows and Linux machines across Azure, on-premises, and multicloud environments. It is an evolution of [Azure Automation Update management solution](../automation/update-management/overview.md) with new features and functionality, for assessment and deployment of software updates on a single machine or on multiple machines at scale.
+Azure Update Manager provides a SaaS solution to manage and govern software updates to Windows and Linux machines across Azure, on-premises, and multicloud environments. It's an evolution of [Azure Automation Update management solution](../automation/update-management/overview.md) with new features and functionality, for assessment and deployment of software updates on a single machine or on multiple machines at scale.
For Azure Update Manager, neither AMA nor MMA is required to manage software update workflows, because it relies on the Microsoft Azure VM Agent for Azure VMs and the Azure connected machine agent for Arc-enabled servers. When you perform an update operation for the first time on a machine, an extension is pushed to the machine, and it interacts with the agents to assess missing updates and install updates.
Guidance to move various capabilities is provided in table below:
1 | Patch management for Off-Azure machines. | Could run with or without Arc connectivity. | Azure Arc is a prerequisite for non-Azure machines. | 1. [Create service principal](../app-service/quickstart-php.md#1get-the-sample-repository) </br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | 1. [Create service principal](../azure-arc/servers/onboard-service-principal.md#azure-powershell) <br> 2. [Generate installation script](../azure-arc/servers/onboard-service-principal.md#generate-the-installation-script-from-the-azure-portal) </br> 3. [Install agent and connect to Azure](../azure-arc/servers/onboard-service-principal.md#install-the-agent-and-connect-to-azure) | 2 | Enable periodic assessment to check for latest updates automatically every few hours. | Machines automatically receive the latest updates every 12 hours for Windows and every 3 hours for Linux. | Periodic assessment is an update setting on your machine. If it's turned on, the Update Manager fetches updates every 24 hours for the machine and shows the latest update status. | 1. [Single machine](manage-update-settings.md#configure-settings-on-a-single-vm) </br> 2. [At scale](manage-update-settings.md#configure-settings-at-scale) </br> 3. [At scale using policy](periodic-assessment-at-scale.md) | 1. [For Azure VM](../virtual-machines/automatic-vm-guest-patching.md#azure-powershell-when-updating-a-windows-vm) </br> 2.[For Arc-enabled VM](/powershell/module/az.connectedmachine/update-azconnectedmachine?view=azps-10.2.0) | 3 | Static Update deployment schedules (Static list of machines for update deployment). | Automation Update management had its own schedules. | Azure Update Manager creates a [maintenance configuration](../virtual-machines/maintenance-configurations.md) object for a schedule. So, you need to create this object, copying all schedule settings from Automation Update Management to Azure Update Manager schedule. | 1. [Single VM](scheduled-patching.md#schedule-recurring-updates-on-a-single-vm) </br> 2. [At scale](scheduled-patching.md#schedule-recurring-updates-at-scale) </br> 3. [At scale using policy](scheduled-patching.md#onboard-to-schedule-by-using-azure-policy) | [Create a static scope](manage-vms-programmatically.md) |
-4 | Dynamic Update deployment schedules (Defining scope of machines using resource group, tags, etc. which is evaluated dynamically at runtime).| Same as static update schedules. | Same as static update schedules. | [Add a dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope) | [Create a dynamic scope]( tutorial-dynamic-grouping-for-scheduled-patching.md#create-a-dynamic-scope) |
+4 | Dynamic Update deployment schedules (Defining scope of machines using resource group, tags, etc. that is evaluated dynamically at runtime).| Same as static update schedules. | Same as static update schedules. | [Add a dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope) | [Create a dynamic scope]( tutorial-dynamic-grouping-for-scheduled-patching.md#create-a-dynamic-scope) |
5 | Deboard from Azure Automation Update management. | After you complete the steps 1, 2, and 3, you need to clean up Azure Update management objects. | | [Remove Update Management solution](../automation/update-management/remove-feature.md#remove-updatemanagement-solution) </br> | NA | 6 | Reporting | Custom update reports using Log Analytics queries. | Update data is stored in Azure Resource Graph (ARG). Customers can query ARG data to build custom dashboards, workbooks etc. | The old Automation Update Management data stored in Log analytics can be accessed, but there's no provision to move data to ARG. You can write ARG queries to access data that will be stored to ARG after virtual machines are patched via Azure Update Manager. With ARG queries you can, build dashboards and workbooks using following instructions: </br> 1. [Log structure of Azure Resource graph updates data](query-logs.md) </br> 2. [Sample ARG queries](sample-query-logs.md) </br> 3. [Create workbooks](manage-workbooks.md) | NA | 7 | Customize workflows using pre and post scripts. | Available as Automation runbooks. | We recommend that you try out the Public Preview for pre and post scripts on your non-production machines and use the feature on production workloads once the feature enters General Availability. |[Manage pre and post events (preview)](manage-pre-post-events.md) | | 8 | Create alerts based on updates data for your environment | Alerts can be set up on updates data stored in Log Analytics. | We recommend that you try out the Public Preview for alerts on your non-production machines and use the feature on production workloads once the feature enters General Availability. |[Create alerts (preview)](manage-alerts.md) | |
+## Scripts to migrate from Automation Update Management to Azure Update Manager
+Using migration runbooks, you can automatically migrate all workloads (machines and schedules) from Automation Update Management to Azure Update Manager. This section details how to run the script, what the script does in the backend, the expected behavior, and any limitations, if applicable. The script can migrate all the machines and schedules in one automation account in one go. If you have multiple automation accounts, you have to run the runbook for each automation account.
+
+At a high level, follow these steps to migrate your machines and schedules from Automation Update Management to Azure Update Manager.
+
+### Prerequisites summary
+
+1. Onboard [non-Azure machines on to Azure Arc](../azure-arc/servers/onboard-service-principal.md).
+1. Download and run the PowerShell script for the creation of User Identity and Role Assignments locally on your system. See detailed instructions in the [step-by-step guide](#step-by-step-guide) as it also has certain prerequisites.
+
+### Steps summary
+
+1. Run migration automation runbook for migrating machines and schedules from Automation Update Management to Azure Update Manager. See detailed instructions in the [step-by-step guide](#step-by-step-guide).
+1. Run cleanup scripts to deboard from Automation Update Management. See detailed instructions in the [step-by-step guide](#step-by-step-guide).
+
+### Unsupported scenarios
+
+- Update schedules having Pre/Post tasks won't be migrated for now.
+- Non-Azure Saved Search Queries won't be migrated; these have to be migrated manually.
+
+For the complete list of limitations and things to note, see the last section of this article.
+
+### Step-by-step guide
+
+The information mentioned in each of the above steps is explained in detail below.
+
+#### Prerequisite 1: Onboard Non-Azure Machines to Arc
+
+**What to do**
+
+Migration automation runbook ignores resources that aren't onboarded to Arc. It's therefore a prerequisite to onboard all non-Azure machines on to Azure Arc before running the migration runbook. Follow the steps to [onboard machines on to Azure Arc](../azure-arc/servers/onboard-service-principal.md).
+
+#### Prerequisite 2: Create User Identity and Role Assignments by running PowerShell script
+
+**A. Prerequisites to run the script**
+
+ - Run the command `Install-Module -Name Az -Repository PSGallery -Force` in PowerShell. The prerequisite script depends on Az.Modules. This step is required if Az.Modules aren't present or updated.
+ - To run this prerequisite script, you must have *Microsoft.Authorization/roleAssignments/write* permissions on all the subscriptions that contain Automation Update Management resources such as machines, schedules, log analytics workspace, and automation account. See [how to assign an Azure role](../role-based-access-control/role-assignments-rest.md#assign-an-azure-role).
+ - You must have the [Update Management Permissions](../automation/automation-role-based-access-control.md).
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/prerequisite-install-module.png" alt-text="Screenshot that shows the command to install the module." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/prerequisite-install-module.png":::
++
+**B. Run the script**
+
+ Download and run the PowerShell script `MigrationPrerequisiteScript` locally. This script takes AutomationAccountResourceId of the Automation account to be migrated as the input.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/run-script.png" alt-text="Screenshot that shows how to download and run the script." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/run-script.png":::
+
+ You can fetch AutomationAccountResourceId by going to **Automation Account** > **Properties**.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id.png" alt-text="Screenshot that shows how to fetch the resource ID." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id.png":::
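If you prefer PowerShell over the portal, the following sketch looks up the same value; the resource group and account names are placeholders.

```powershell
# A sketch with placeholder names; requires the Az.Resources module.
$aa = Get-AzResource -ResourceGroupName "myAutomationRg" `
    -ResourceType "Microsoft.Automation/automationAccounts" `
    -Name "myAutomationAccount"

$aa.ResourceId   # Pass this value as AutomationAccountResourceId
```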
+
+**C. Verify**
+
+ After you run the script, verify that a user managed identity is created in the automation account. **Automation account** > **Identity** > **User Assigned**.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/script-verification.png" alt-text="Screenshot that shows how to verify that a user managed identity is created." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/script-verification.png":::
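You can also verify the identity from PowerShell; this sketch assumes the identity follows the *AutomationAccount_aummig_umsi* naming pattern created by the prerequisite script, and the names are placeholders.

```powershell
# A sketch with placeholder names; requires the Az.ManagedServiceIdentity module.
$identity = Get-AzUserAssignedIdentity -ResourceGroupName "myAutomationRg" `
    -Name "myAutomationAccount_aummig_umsi"

$identity.ClientId   # Used later as the UserManagedServiceIdentityClientId parameter
```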
+
+**D. Backend operations by the script**
+
+ 1. Updating the Az.Modules for the Automation account, which will be required for running migration and deboarding scripts
+ 1. Creation of User Identity in the same Subscription and resource group as the Automation Account. The name of User Identity will be like *AutomationAccount_aummig_umsi*.
+ 1. Attaching the User Identity to the Automation Account.
+ 1. The script assigns the following permissions to the user managed identity: [Update Management Permissions Required](../automation/automation-role-based-access-control.md#update-management-permissions).
++
 1. For this, the script fetches all the machines onboarded to Automation Update Management under this automation account and parses their subscription IDs so that the required RBAC can be granted to the User Identity.
 1. The script gives the proper RBAC to the User Identity on the subscription to which the automation account belongs so that the MRP configs can be created there.
+ 1. The script will assign the required roles for the Log Analytics workspace and solution.
+
+#### Step 1: Migration of machines and schedules
+
+This step involves using an automation runbook to migrate all the machines and schedules from an automation account to Azure Update Manager.
+
+**Follow these steps:**
+
+1. Import migration runbook from the runbooks gallery and publish. Search for **azure automation update** from browse gallery, and import the migration runbook named **Migrate from Azure Automation Update Management to Azure Update Manager** and publish the runbook.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/migrate-from-automation-update-management.png" alt-text="Screenshot that shows how to migrate from Automation Update Management." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/migrate-from-automation-update-management.png":::
+
   The runbook supports PowerShell 5.1.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/runbook-support.png" alt-text="Screenshot that shows runbook supports PowerShell 5.1 while importing." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/runbook-support.png":::
+
+1. Set Verbose Logging to True for the runbook.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/verbose-log-records.png" alt-text="Screenshot that shows how to set verbose log records." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/verbose-log-records.png":::
+
+1. Run the runbook and pass the required parameters like AutomationAccountResourceId, UserManagedServiceIdentityClientId, etc.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/run-runbook-parameters.png" alt-text="Screenshot that shows how to run the runbook and pass the required parameters." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/run-runbook-parameters.png":::
+
+ 1. You can fetch AutomationAccountResourceId from **Automation Account** > **Properties**.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id-portal.png" alt-text="Screenshot that shows how to fetch Automation account resource ID." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id-portal.png":::
+
+ 1. You can fetch UserManagedServiceIdentityClientId from **Automation Account** > **Identity** > **User Assigned** > **Identity** > **Properties** > **Client ID**.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-client-id.png" alt-text="Screenshot that shows how to fetch client ID." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-client-id.png":::
+
+ 1. Setting **EnablePeriodicAssessmentForMachinesOnboardedToUpdateManagement** to **TRUE** would enable periodic assessment property on all the machines onboarded to Automation Update Management.
+
+ 1. Setting **MigrateUpdateSchedulesAndEnablePeriodicAssessmentonLinkedMachines** to **TRUE** would migrate all the update schedules in Automation Update Management to Azure Update Manager and would also turn on periodic assessment property to **True** on all the machines linked to these schedules.
+
+ 1. You need to specify **ResourceGroupForMaintenanceConfigurations** where all the maintenance configurations in Azure Update Manager would be created. If you supply a new name, a resource group would be created where all the maintenance configurations would be created. However, if you supply a name with which a resource group already exists, all the maintenance configurations would be created in the existing resource group.
+
+1. Check Azure Runbook Logs for the status of execution and migration status of SUCs.
+
   :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/log-status.png" alt-text="Screenshot that shows the runbook logs." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/log-status.png":::
+
+**Runbook operations in backend**
+
+The migration runbook does the following tasks:
+
+- Enables periodic assessment on all machines.
+- All schedules in the automation account are migrated to Azure Update Manager and a corresponding maintenance configuration is created for each of them, having the same properties.
+
+**About the script**
+
+The following is the behavior of the migration script:
+
+- The script checks whether a resource group with the name provided as input is already present in the subscription of the automation account. If not, it creates a resource group with the specified name. This resource group will be used for creating the MRP configs for V2.
+- The script ignores the update schedules that have pre and post scripts associated with them. For pre and post scripts update schedules, migrate them manually.
+- RebootOnly Setting isn't available in Azure Update Manager. Schedules having RebootOnly Setting aren't migrated.
+- Filter out SUCs that are in the errored/expired/provisioningFailed/disabled state and mark them as **Not Migrated**, and print the appropriate logs indicating that such SUCs aren't migrated.
+- The config assignment name is a string that will be in the format **AUMMig_AAName_SUCName**.
+- The script figures out whether this Dynamic Scope is already assigned to the Maintenance config by checking against Azure Resource Graph. Only if it isn't already assigned, an assignment is created with a name in the format **AUMMig_AAName_SUCName_SomeGUID**.
+- A summarized set of logs is printed to the Output stream to give an overall status of machines and SUCs.
+- Detailed logs are printed to the Verbose Stream.
+- Post-migration, a Software Update Configuration can have any one of the following four migration statuses:
+
+ - **MigrationFailed**
+ - **PartiallyMigrated**
+ - **NotMigrated**
+ - **Migrated**
+The following table shows the scenarios associated with each migration status.
+
+| **MigrationFailed** | **PartiallyMigrated** | **NotMigrated** | **Migrated** |
+|||||
|Failed to create Maintenance Configuration for the Software Update Configuration.| Non-zero number of machines where patch settings failed to apply.| Failed to get the software update configuration from the API due to a client/server error, such as an **Internal Server Error**.| |
| | Non-zero number of machines with failed Configuration Assignments.| Software Update Configuration has the reboot setting set to reboot only. This isn't supported today in Azure Update Manager.| |
| | Non-zero number of Dynamic Queries failed to resolve, that is, failed to execute the query against Azure Resource Graph.| Software Update Configuration has Pre/Post tasks. Currently, Pre/Post tasks are in preview in Azure Update Manager, and such schedules won't be migrated.| |
| | Non-zero number of Dynamic Scope Configuration assignment failures.| Software Update Configuration doesn't have a succeeded provisioning state in the DB.| |
| | Software Update Configuration has Saved Search Queries.| Software Update Configuration is in an errored state in the DB.| |
+| | | Schedule associated with Software Update Configuration is already expired at the time of migration.| |
+| | | Schedule associated with Software Update Configuration is disabled.| |
| | | Unhandled exception while migrating software update configuration.| Zero machines where patch settings failed to apply.<br><br> **And** <br><br> Zero machines with failed Configuration Assignments. <br><br> **And** <br><br> Zero Dynamic Queries failed to resolve, that is, failed to execute the query against Azure Resource Graph. <br><br> **And** <br><br> Zero Dynamic Scope Configuration assignment failures. <br><br> **And** <br><br> Software Update Configuration has zero Saved Search Queries.|
+
+To figure out from the table above which scenario/scenarios correspond to why the software update configuration has a specific status, look at the verbose/failed/warning logs to get the error code and error message.
+
+You can also search with the name of the update schedule to get logs specific to it for debugging.
++
+#### Step 2: Deboarding from Automation Update Management solution
+
+**Follow these steps:**
+
+1. Import the migration runbook from runbooks gallery. Search for **azure automation update** from browse gallery, and import the migration runbook named **Deboard from Azure Automation Update Management** and publish the runbook.
+
   :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-from-automation-update-management.png" alt-text="Screenshot that shows how to import the deboard migration runbook." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-from-automation-update-management.png":::
+
   The runbook supports PowerShell 5.1.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-runbook-support.png" alt-text="Screenshot that shows the runbook supports PowerShell 5.1 while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-runbook-support.png":::
+
+1. Set Verbose Logging to **True** for the Runbook.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/verbose-log-records-deboard.png" alt-text="Screenshot that shows log verbose records setting while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/verbose-log-records-deboard.png":::
+
+1. Start the runbook and pass parameters such as Automation AccountResourceId, UserManagedServiceIdentityClientId, etc.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-runbook-parameters.png" alt-text="Screenshot that shows how to start runbook and pass parameters while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-runbook-parameters.png":::
+
+ You can fetch AutomationAccountResourceId from **Automation Account** > **Properties**.
+
   :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id-deboard.png" alt-text="Screenshot that shows how to fetch resource ID while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/fetch-resource-id-deboard.png":::
+
+ You can fetch UserManagedServiceIdentityClientId from **Automation Account** > **Identity** > **User Assigned** > **Identity** > **Properties** > **Client ID**.
+
+ :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-fetch-client-id.png" alt-text="Screenshot that shows how to fetch client ID while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-fetch-client-id.png":::
+
+1. Check Azure runbook logs for the status of deboarding of machines and schedules.
+
   :::image type="content" source="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-debug-logs.png" alt-text="Screenshot that shows the runbook logs while deboarding." lightbox="./media/guidance-migration-automation-update-management-azure-update-manager/deboard-debug-logs.png":::
+
+**Deboarding script operations in the backend**
+
+- Disable all the underlying schedules for all the software update configurations present in this Automation account. This is done to ensure that Patch-MicrosoftOMSComputers Runbook isn't triggered for SUCs that were partially migrated to V2.
+- Delete the Updates Solution from the Linked Log Analytics Workspace for the Automation Account being Deboarded from Automation Update Management in V1.
+- A summarized log of all SUCs disabled and status of removing updates solution from linked log analytics workspace is also printed to the output stream.
+- Detailed logs are printed on the verbose streams.
+
+**Callouts for the migration process:**
+
+- Schedules having pre/post tasks won't be migrated for now.
+- Non-Azure Saved Search Queries won't be migrated.
+- The Migration and Deboarding Runbooks need to have the Az.Modules updated to work.
+- The prerequisite script updates the Az.Modules to the latest version 8.0.0.
+- The StartTime of the MRP Schedule will be equal to the nextRunTime of the Software Update Configuration.
+- Data from Log Analytics won't be migrated.
+- User Managed Identities [don't support](https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/managed-identities-faq#can-i-use-a-managed-identity-to-access-a-resource-in-a-different-directorytenant) cross tenant scenarios.
+- RebootOnly Setting isn't available in Azure Update Manager. Schedules having RebootOnly Setting won't be migrated.
+- For Recurrence, Automation schedules support values between (1 to 100) for Hourly/Daily/Weekly/Monthly schedules, whereas Azure Update Manager's maintenance configuration supports between (6 to 35) for Hourly and (1 to 35) for Daily/Weekly/Monthly. A small sketch after this list of callouts illustrates the hourly conversion.
+ - For example, if the automation schedule has a recurrence of every 100 Hours, then the equivalent maintenance configuration schedule will have it for every 100/24 = 4.16 (Round to Nearest Value) -> Four days will be the recurrence for the maintenance configuration.
+ - For example, if the automation schedule has a recurrence of every 1 hour, then the equivalent maintenance configuration schedule will have it every 6 hours.
+ - Apply the same convention for Weekly and Daily.
+ - If the automation schedule has daily recurrence of say 100 days, then 100/7 = 14.28 (Round to Nearest Value) -> 14 weeks will be the recurrence for the maintenance configuration schedule.
+ - If the automation schedule has weekly recurrence of say 100 weeks, then 100/4.34 = 23.04 (Round to Nearest Value) -> 23 Months will be the recurrence for the maintenance configuration schedule.
 - Suppose an automation schedule should recur every 100 weeks and has to be executed on Fridays. When translated to a maintenance configuration, it would be every 23 months (100/4.34). But there's no way in Azure Update Manager to express executing every 23 months on all Fridays of that month, so the schedule won't be migrated.
+ - If an automation schedule has a recurrence of more than 35 Months, then in maintenance configuration it will always have 35 Months Recurrence.
 - SUC supports a Maintenance Window from 30 minutes to six hours. MRP supports from 1 hour 30 minutes to 4 hours.
+ - For example, if SUC has a Maintenance Window of 30 Minutes, then the equivalent MRP schedule will have it for 1 hour 30 minutes.
+ - For example, if SUC has a Maintenance Window of 6 hours, then the equivalent MRP schedule will have it for 4 hours.
+- When the migration runbook is executed multiple times, say you did Migrate All automation schedules and then again tried to migrate all the schedules, then migration runbook will run the same logic. Doing it again will update the MRP schedule if any new change is present in SUC. It won't make duplicate config assignments. Also, operations are carried only for automation schedules having enabled schedules. If an SUC was **Migrated** earlier, it will be skipped in the next turn as its underlying schedule will be **Disabled**.
+- In the end, you might resolve more machines from Azure Resource Graph in Azure Update Manager, because you can't check whether the Hybrid Runbook Worker is reporting, unlike in Automation Update Management where the result was an intersection of Dynamic Queries and the Hybrid Runbook Worker.
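To make the hourly recurrence conversion concrete, here is a small PowerShell sketch that reproduces the documented examples. It only illustrates the rounding convention described above and is not the migration runbook's actual logic.

```powershell
# Illustration only: maps an Automation hourly recurrence to a maintenance configuration
# recurrence, following the documented convention. The real runbook logic may differ.
function Convert-HourlyRecurrence {
    param([int]$Hours)

    if ($Hours -ge 6 -and $Hours -le 35) {
        return "$Hours hours"                          # Supported directly.
    }
    elseif ($Hours -lt 6) {
        return "6 hours"                               # Raised to the supported minimum.
    }
    else {
        return "$([math]::Round($Hours / 24)) days"    # Converted to days and rounded.
    }
}

Convert-HourlyRecurrence -Hours 1     # 6 hours
Convert-HourlyRecurrence -Hours 100   # 4 days (100/24 = 4.16, rounded to the nearest value)
```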
+ ## Next steps-- [An overview on Azure Update Manager](overview.md)-- [Check update compliance](view-updates.md) -- [Deploy updates now (on-demand) for single machine](deploy-updates.md) -- [Schedule recurring updates](scheduled-patching.md)+
+- [Guidance on migrating Azure VMs from Microsoft Configuration Manager to Azure Update Manager](./guidance-migration-azure.md)
update-manager Prerequsite For Schedule Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/prerequsite-for-schedule-patching.md
Title: Configure schedule patching on Azure VMs for business continuity description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Azure Update Manager. + Last updated 01/17/2024
update-manager Update Manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/update-manager-faq.md
Title: Azure Update Manager FAQ
description: This article gives answers to frequently asked questions about Azure Update Manager Previously updated : 01/23/2024 Last updated : 01/31/2024 #Customer intent: As an implementer, I want answers to various questions.
An Arc-enabled server is considered managed by Azure Update Manager for days on
An Arc-enabled server managed with Azure Update Manager is not charged in following scenarios: - If the machine is enabled for delivery of Extended Security Updates (ESUs) enabled by Azure Arc.
+ - Microsoft Defender for Servers Plan 2 is enabled for the subscription hosting the Arc-enabled server. However, if the customer is using Defender through a Security connector, they will be charged.
### Will I be charged if I move from Automation Update Management to Update Manager?
-Customers will not be charged if they onboard their Arc enabled servers to the same subscription that contains their Automation accounts until the LA agent is retired.
+Customers will not be charged for already existing Arc-enabled servers that were using Automation Update Management for free as of September 1, 2023. Any new Arc-enabled machines onboarded to Azure Update Manager in the same subscription will also be exempted from the charge. This exception will be provided until the Log Analytics agent retires. After that date, these customers will be charged.
### I'm a Defender for Server customer and use update recommendations powered by Azure Update Manager namely "periodic assessment should be enabled on your machines" and "system updates should be installed on your machines". Would I be charged for Azure Update Manager?
virtual-desktop Whats New Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-documentation.md
description: Learn about new and updated articles to the Azure Virtual Desktop d
Previously updated : 12/04/2023 Last updated : 01/31/2024 # What's new in documentation for Azure Virtual Desktop We update documentation for Azure Virtual Desktop regularly. In this article we highlight articles for new features and where there have been important updates to existing articles.
+## January 2024
+
+In January 2024, we published the following changes:
+
+- Consolidated articles to [Create and assign an autoscale scaling plan for Azure Virtual Desktop](autoscale-scaling-plan.md) into a single article.
+
+- Added PowerShell commands to [Create and assign an autoscale scaling plan for Azure Virtual Desktop](autoscale-scaling-plan.md).
+
+- Removed the separate documentation section for RemoteApp streaming and combined it with the main Azure Virtual Desktop documentation. Some articles that were previously only in the RemoteApp section are now discoverable in the main Azure Virtual Desktop documentation, such as [Understand and estimate costs for Azure Virtual Desktop](understand-estimate-costs.md) and [Licensing Azure Virtual Desktop](licensing.md).
+ ## December 2023 In December 2023, we published the following changes:
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-rest-api.md
ARM templates let you deploy groups of related resources. In a single template,
}, "vmSku": { "type": "string",
- "defaultValue": "Standard_D1_v2",
+ "defaultValue": "Standard_D2s_v3",
"metadata": { "description": "Size of VMs in the VM Scale Set." }
ARM templates let you deploy groups of related resources. In a single template,
"description": "SSH Key or password for the Virtual Machine. SSH key is recommended." } },
+ "securityType": {
+ "type": "string",
+ "defaultValue": "TrustedLaunch",
+ "allowedValues": [
+ "Standard",
+ "TrustedLaunch"
+ ],
+ "metadata": {
+ "description": "Security Type of the Virtual Machine."
+ }
+ },
"_artifactsLocation": { "type": "string", "defaultValue": "[deployment().properties.templatelink.uri]",
ARM templates let you deploy groups of related resources. In a single template,
"ipConfigName": "[concat(parameters('vmssName'), 'ipconfig')]", "frontEndIPConfigID": "[resourceId('Microsoft.Network/loadBalancers/frontendIPConfigurations', variables('loadBalancerName'),'loadBalancerFrontEnd')]", "osType": {
- "publisher": "Canonical",
- "offer": "UbuntuServer",
- "sku": "16.04-LTS",
- "version": "latest"
+ "publisher": "Canonical",
+ "offer": "0001-com-ubuntu-server-focal",
+ "sku": "20_04-lts-gen2",
+ "version": "latest"
}, "imageReference": "[variables('osType')]",
+ "securityProfileJson": {
+ "uefiSettings": {
+ "secureBootEnabled": true,
+ "vTpmEnabled": true
+ },
+ "securityType": "[parameters('securityType')]"
+ },
"linuxConfiguration": { "disablePasswordAuthentication": true, "ssh": {
ARM templates let you deploy groups of related resources. In a single template,
"resources": [ { "type": "Microsoft.Network/networkSecurityGroups",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2023-04-01",
"name": "[variables('networkSecurityGroupName')]", "location": "[parameters('location')]", "properties": {
ARM templates let you deploy groups of related resources. In a single template,
}, { "type": "Microsoft.Network/virtualNetworks",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2023-04-01",
"name": "[variables('virtualNetworkName')]", "location": "[parameters('location')]", "properties": {
ARM templates let you deploy groups of related resources. In a single template,
}, { "type": "Microsoft.Network/publicIPAddresses",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2023-04-01",
"name": "[variables('publicIPAddressName')]", "location": "[parameters('location')]", "sku": {
ARM templates let you deploy groups of related resources. In a single template,
}, { "type": "Microsoft.Network/loadBalancers",
- "apiVersion": "2020-06-01",
+ "apiVersion": "2023-04-01",
"name": "[variables('loadBalancerName')]", "location": "[parameters('location')]", "sku": {
ARM templates let you deploy groups of related resources. In a single template,
}, { "type": "Microsoft.Compute/virtualMachineScaleSets",
- "apiVersion": "2021-03-01",
+ "apiVersion": "2023-09-01",
"name": "[parameters('vmssName')]", "location": "[parameters('location')]", "sku": {
ARM templates let you deploy groups of related resources. In a single template,
"computerNamePrefix": "[parameters('vmssName')]", "adminUsername": "[parameters('adminUsername')]", "adminPassword": "[parameters('adminPasswordOrKey')]",
- "linuxConfiguration": "[if(equals(parameters('authenticationType'), 'password'), json('null'), variables('linuxConfiguration'))]"
+ "linuxConfiguration": "[if(equals(parameters('authenticationType'), 'password'), null(), variables('linuxConfiguration'))]"
},
+ "securityProfile": "[if(equals(parameters('securityType'), 'TrustedLaunch'), variables('securityProfileJson'), null())]",
"networkProfile": { "networkApiVersion": "[variables('networkApiVersion')]", "networkInterfaceConfigurations": [
ARM templates let you deploy groups of related resources. In a single template,
}, { "type": "Microsoft.Insights/autoscaleSettings",
- "apiVersion": "2015-04-01",
+ "apiVersion": "2022-10-01",
"name": "[concat(parameters('vmssName'), '-autoscalehost')]", "location": "[parameters('location')]", "dependsOn": [
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Previously updated : 01/22/2024 Last updated : 01/30/2024 # Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
virtual-machines Custom Script Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-windows.md
If your script is on a local server, you might still need to open other firewall
### Tips - Output is limited to the last 4,096 bytes.
+- Properly escaping characters helps ensure that strings are parsed correctly. For example, you always need two backslashes to escape a single literal backslash when dealing with file paths. Sample: `{"commandToExecute": "C:\\Windows\\System32\\systeminfo.exe >> D:\\test.txt"}`. A PowerShell sketch after these tips shows the same command being set through the Az module.
- The highest failure rate for this extension is due to syntax errors in the script. Verify that the script runs without errors. Put more logging into the script to make it easier to find failures. - Write scripts that are idempotent, so that running them more than once accidentally doesn't cause system changes. - Ensure that the scripts don't require user input when they run.
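As a sketch of how the escaping tip above plays out when you set the extension from PowerShell, the single-quoted string keeps the doubled backslashes intact so the JSON payload reaches the extension unchanged; the resource names are placeholders.

```powershell
# A sketch with placeholder names; requires the Az.Compute module.
# Single quotes keep the doubled backslashes literal in the JSON settings string.
$settings = '{"commandToExecute": "C:\\Windows\\System32\\systeminfo.exe >> D:\\test.txt"}'

Set-AzVMExtension -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Location "eastus" `
    -Name "CustomScriptExtension" `
    -Publisher "Microsoft.Compute" `
    -ExtensionType "CustomScriptExtension" `
    -TypeHandlerVersion "1.10" `
    -SettingString $settings
```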
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
Then run installation commands specific for your distribution.
### Ubuntu
-1. Download and install the CUDA drivers from the NVIDIA website.
- > [!NOTE]
- > The example shows the CUDA package path for Ubuntu 20.04. Replace the path specific to the version you plan to use.
- >
- > Visit the [NVIDIA Download Center](https://developer.download.nvidia.com/compute/cuda/repos/) or the [NVIDIA CUDA Resources page](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=deb_network) for the full path specific to each version.
- >
- ```bash
- wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
- sudo dpkg -i cuda-keyring_1.0-1_all.deb
- sudo apt-get update
- sudo apt-get -y install cuda-drivers
-
- ```
-
- The installation can take several minutes.
-
-2. Reboot the VM and proceed to verify the installation.
-
-#### CUDA driver updates
-
-We recommend that you periodically update CUDA drivers after deployment.
-
-```bash
-sudo apt-get update
-sudo apt-get upgrade -y
-sudo apt-get dist-upgrade -y
-sudo apt-get install cuda-drivers
-
-sudo reboot
-```
-
-#### Install CUDA driver on Ubuntu with Secure Boot enabled
-
-With Secure Boot enabled, all Linux kernel modules are required to be signed by the key trusted by the system.
-
-1. Install pre-built Azure Linux kernel based NVIDIA modules and CUDA drivers
+Ubuntu packages the NVIDIA proprietary drivers. Those drivers come directly from NVIDIA and are simply packaged by Ubuntu so that they can be automatically managed by the system. Downloading and installing drivers from another source can lead to a broken system. Moreover, installing third-party drivers requires extra steps on VMs with Trusted Launch and Secure Boot enabled; these drivers require the user to add a new Machine Owner Key for the system to boot. Drivers from Ubuntu are signed by Canonical and work with Secure Boot.
+1. Install `ubuntu-drivers` utility:
```bash
- sudo apt-get update
- sudo apt install -y linux-modules-nvidia-525-azure nvidia-driver-525
+ sudo apt update && sudo apt install -y ubuntu-drivers-common
```-
-2. Change preference of NVIDIA packages to prefer NVIDIA repository
-
+2. Install the latest NVIDIA drivers:
```bash
- sudo tee /etc/apt/preferences.d/cuda-repository-pin-600 > <<EOL
- Package: nsight-compute
- Pin: origin *ubuntu.com*
- Pin-Priority: -1
- Package: nsight-systems
- Pin: origin *ubuntu.com*
- Pin-Priority: -1
- Package: nvidia-modprobe
- Pin: release l=NVIDIA CUDA
- Pin-Priority: 600
- Package: nvidia-settings
- Pin: release l=NVIDIA CUDA
- Pin-Priority: 600
- Package: *
- Pin: release l=NVIDIA CUDA
- Pin-Priority: 100
- EOL
+ sudo ubuntu-drivers install
```-
-3. Add CUDA repository
-
- ```bash
- sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/3bf863cc.pub
- ```
-
- ```bash
- sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/ /"
- ```
-
- where `$distro/$arch` should be replaced by one of the following:
-
- ```
- ubuntu2004/arm64
- ubuntu2004/x86_64
- ubuntu2204/arm64
- ubuntu2204/x86_64
- ```
-
- If `add-apt-repository` command is not found, run `sudo apt-get install software-properties-common` to install it.
-
-4. Install kernel headers and development packages, and remove outdated signing key
-
- ```bash
- sudo apt-get install linux-headers-$(uname -r)
- sudo apt-key del 7fa2af80
- ```
-
-5. Install the new cuda-keyring package
-
- ```bash
- wget https://developer.download.nvidia.com/compute/cuda/repos/$distro/$arch/cuda-keyring_1.1-1_all.deb
- sudo dpkg -i cuda-keyring_1.1-1_all.deb
- ```
-
- Note: When prompt on different versions of cuda-keyring, select `Y or I : install the package maintainer's version` to proceed.
-
-6. Update APT repository cache and install NVIDIA GPUDirect Storage
-
+3. Download and install the CUDA toolkit from NVIDIA:
+ > [!NOTE]
+ > The example shows the CUDA package path for Ubuntu 22.04 LTS. Replace the path specific to the version you plan to use.
+ >
+ > Visit the [NVIDIA Download Center](https://developer.download.nvidia.com/compute/cuda/repos/) or the [NVIDIA CUDA Resources page](https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=deb_network) for the full path specific to each version.
+ >
```bash
- sudo apt-get update
- sudo apt-get install -y nvidia-gds
+ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
+ sudo apt install -y ./cuda-keyring_1.1-1_all.deb
+ sudo apt update
+ sudo apt -y install cuda-toolkit-12-3
```
- Note that during the installation you will be prompted for password when configuring secure boot, a password of your choice needs to be provided and then proceed.
-
- ![Secure Boot Password Configuration](./media/n-series-driver-setup/secure-boot-passwd.png)
-
-7. Reboot the VM
+ The installation can take several minutes.
+4. Verify that the GPU is correctly recognized:
```bash
- sudo reboot
+ nvidia-smi
```
-8. Verify NVIDIA CUDA drivers are installed and loaded
+#### NVIDIA driver updates
- ```bash
- dpkg -l | grep -i nvidia
- nvidia-smi
- ```
+We recommend that you periodically update NVIDIA drivers after deployment.
+```bash
+sudo apt update
+sudo apt full-upgrade
+```
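To confirm the result of the installation or of a later update, a quick sanity check such as the one below can help. This is a minimal sketch, assuming the `mokutil` package is available (install it with `apt` if it isn't present).

```bash
# Report whether Secure Boot is enabled on this VM (requires the mokutil package)
mokutil --sb-state

# List the installed NVIDIA driver packages
dpkg -l | grep -i nvidia

# Query the GPU through the loaded driver; a failure here usually means the
# kernel module didn't load (check dmesg for signing or DKMS errors)
nvidia-smi
```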
### CentOS or Red Hat Enterprise Linux
virtual-machines Nd H100 V5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nd-h100-v5-series.md
NVIDIA NVLink Interconnect: Supported <br>
>Azure supports Ubuntu 20.04/22.04, RHEL 7.9/8.7/9.3, AlmaLinux 8.8/9.2, and SLES 15 for ND H100 v5 VMs. The Azure Marketplace offers optimized and preconfigured [Linux VM images](configure.md#vm-images) for HPC/AI workloads with a variety of HPC tools and libraries installed; these images are strongly recommended. Currently, the Ubuntu-HPC 20.04/22.04 and AlmaLinux-HPC 8.6/8.7 VM images are supported. ## Example
-The ND H100 v5 series supports the following kernel version:
-Ubuntu 20.04: 5.4.0-1046-azure
+[comment]: # (The ND H100 v5 series supports the following kernel version: Ubuntu 20.04: 5.4.0-1046-azure)
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU Memory GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max network bandwidth | Max NICs |
|---|---|---|---|---|---|---|---|---|---|
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
virtual-machines Jboss Eap Single Server Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-single-server-azure-vm.md
Last updated 01/03/2024 -+ # Quickstart: Deploy JBoss EAP Server on an Azure virtual machine using the Azure portal
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 01/22/2024 Last updated : 01/30/2024
virtual-network Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-terraform.md
description: In this quickstart, you create an Azure Virtual Network and Subnets using Terraform. You use Azure CLI to verify the resources. Last updated 1/19/2024-+
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **ApiManagement** | Management traffic for Azure API Management-dedicated deployments. <br/><br/>**Note**: This tag represents the Azure API Management service endpoint for control plane per region. The tag enables customers to perform management operations on the APIs, Operations, Policies, NamedValues configured on the API Management service. | Inbound | Yes | Yes | | **ApplicationInsightsAvailability** | Application Insights Availability. | Inbound | No | Yes | | **AppConfiguration** | App Configuration. | Outbound | No | Yes |
-| **AppService** | Azure App Service. This tag is recommended for outbound security rules to web apps and Function apps.<br/><br/>**Note**: This tag doesn't include IP addresses assigned when using IP-based SSL (App-assigned address). | Outbound | Yes | Yes |
+| **AppService** | Azure App Service. This tag is recommended for outbound security rules to web apps and function apps.<br/><br/>**Note**: This tag doesn't include IP addresses assigned when using IP-based SSL (App-assigned address). | Outbound | Yes | Yes |
| **AppServiceManagement** | Management traffic for deployments dedicated to App Service Environment. | Both | No | Yes | | **AutonomousDevelopmentPlatform** | Autonomous Development Platform | Both | Yes | Yes | | **AzureActiveDirectory** | Microsoft Entra ID. | Outbound | No | Yes |
virtual-network Virtual Network For Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-for-azure-services.md
Deploying services within a virtual network provides the following capabilities:
| Containers | [Azure Kubernetes Service (AKS)](../aks/concepts-network.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Container Instance (ACI)](https://www.aka.ms/acivnet)<br/>[Azure Container Service Engine](https://github.com/Azure/acs-engine) with Azure Virtual Network CNI [plug-in](https://github.com/Azure/acs-engine/tree/master/examples/vnet)<br/>[Azure Functions](../azure-functions/functions-networking-options.md#virtual-network-integration) |No<sup>2</sup><br/> Yes <br/> No <br/> Yes | Web | [API Management](../api-management/api-management-using-with-vnet.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Web Apps](../app-service/overview-vnet-integration.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[App Service Environment](../app-service/overview-vnet-integration.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure Container Apps environments](../container-apps/networking.md)<br/>|Yes <br/> Yes <br/> Yes <br/> Yes <br/> Yes | Hosted | [Azure Dedicated HSM](../dedicated-hsm/index.yml?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>[Azure NetApp Files](../azure-netapp-files/azure-netapp-files-introduction.md?toc=%2fazure%2fvirtual-network%2ftoc.json)<br/>|Yes <br/> Yes <br/>
-| Azure Spring Apps | [Deploy in Azure virtual network (VNet injection)](../spring-apps/how-to-deploy-in-azure-virtual-network.md)<br/>| Yes <br/>
+| Azure Spring Apps | [Deploy in Azure virtual network (VNet injection)](../spring-apps/enterprise/how-to-deploy-in-azure-virtual-network.md)<br/>| Yes <br/>
| Virtual desktop infrastructure| [Azure Lab Services](../lab-services/how-to-connect-vnet-injection.md)<br/>| Yes <br/> | DevOps | [Azure Load Testing](/azure/load-testing/concept-azure-load-testing-vnet-injection)<br/>| Yes <br/>
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
No. Virtual WAN does not support ASN changes for VPN gateways.
[!INCLUDE [ExpressRoute Performance](../../includes/virtual-wan-expressroute-performance.md)]
+### If I connect an ExpressRoute Local circuit to a Virtual WAN hub, will I only be able to access regions in the same metro location as the Local circuit?
+
+Local circuits can only be connected to ExpressRoute gateways in their corresponding Azure region. However, there's no restriction on routing traffic to spoke virtual networks in other regions.
++ ### <a name="update-router"></a>Why am I seeing a message and button called "Update router to latest software version" in portal? > [!NOTE]
virtual-wan Virtual Wan Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-site-to-site-portal.md
description: Learn how to use Azure Virtual WAN to create a site-to-site VPN con
Previously updated : 08/09/2023 Last updated : 01/31/2024 # Customer intent: As someone with a networking background, I want to connect my local site to my VNets using Virtual WAN and I don't want to go through a Virtual WAN partner.
Verify that you've met the following criteria before beginning your configuratio
## <a name="hub"></a>Configure virtual hub settings
-A virtual hub is a virtual network that can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. For this tutorial, you begin by filling out the **Basics** tab for the virtual hub and then continue on to fill out the site-to-site tab in the next section. It's also possible to create an empty virtual hub (a virtual hub that doesn't contain any gateways) and then add gateways (S2S, P2S, ExpressRoute, etc.) later. Once a virtual hub is created, you'll be charged for the virtual hub, even if you don't attach any sites or create any gateways within the virtual hub.
+A virtual hub is a virtual network that can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. For this tutorial, you begin by filling out the **Basics** tab for the virtual hub and then continue on to fill out the site-to-site tab in the next section. It's also possible to create an empty virtual hub (a virtual hub that doesn't contain any gateways) and then add gateways (S2S, P2S, ExpressRoute, etc.) later. Once a virtual hub is created, you're charged for the virtual hub, even if you don't attach any sites or create any gateways within the virtual hub.
[!INCLUDE [Create a virtual hub](../../includes/virtual-wan-hub-basics.md)]
-Don't create the virtual hub yet. Continue on to the next section to configure additional settings.
+**Don't create the virtual hub yet**. Continue on to the next section to configure more settings.
## <a name="gateway"></a>Configure a site-to-site gateway
-In this section, you configure site-to-site connectivity settings, and then proceed to create the virtual hub and site-to-site VPN gateway. A virtual hub and gateway can take about 30 minutes to create.
+In this section, you configure site-to-site connectivity settings, and then create the virtual hub and site-to-site VPN gateway. A virtual hub and gateway can take about 30 minutes to create.
[!INCLUDE [Create a gateway](../../includes/virtual-wan-tutorial-s2s-gateway-include.md)]
In this section, you configure site-to-site connectivity settings, and then proc
## <a name="site"></a>Create a site
-In this section, you create a site. Sites correspond to your physical locations. Create as many sites as you need. For example, if you have a branch office in NY, a branch office in London, and a branch office in LA, you'd create three separate sites. These sites contain your on-premises VPN device endpoints. You can create up to 1000 sites per virtual hub in a virtual WAN. If you had multiple virtual hubs, you can create 1000 per each of those virtual hubs. If you have a Virtual WAN partner CPE device, check with them to learn about their automation to Azure. Typically, automation implies a simple click experience to export large-scale branch information into Azure, and setting up connectivity from the CPE to Azure Virtual WAN VPN gateway. For more information, see [Automation guidance from Azure to CPE partners](virtual-wan-configure-automation-providers.md).
+In this section, you create a site. Sites correspond to your physical locations. Create as many sites as you need. These sites contain your on-premises VPN device endpoints.
+
+For example, if you have a branch office in NY, a branch office in London, and a branch office in LA, you'd create three separate sites. You can create up to 1000 sites per virtual hub in a virtual WAN. If you have multiple virtual hubs, you can create 1000 per each virtual hub.
+
+If you have a Virtual WAN partner CPE device, check with them to learn about their automation to Azure. Typically, automation implies a simple click experience to export large-scale branch information into Azure, and setting up connectivity from the CPE to Azure Virtual WAN VPN gateway. For more information, see [Automation guidance from Azure to CPE partners](virtual-wan-configure-automation-providers.md).
[!INCLUDE [Create a site](../../includes/virtual-wan-tutorial-s2s-site-include.md)]
In this section, you connect your VPN site to the virtual hub.
## <a name="vnet"></a>Connect a VNet to the virtual hub
-In this section, you create a connection between the virtual hub and your VNet.
+In this section, you create a connection between the virtual hub and your virtual network.
[!INCLUDE [Connect](../../includes/virtual-wan-connect-vnet-hub-include.md)] ## <a name="device"></a>Download VPN configuration
-Use the VPN device configuration file to configure your on-premises VPN device. The basic steps are listed below.
+Use the VPN device configuration file to configure your on-premises VPN device. Here are the basic steps:
1. From your Virtual WAN page, go to **Hubs -> Your virtual hub -> VPN (Site to site)** page.
Use the VPN device configuration file to configure your on-premises VPN device.
1. Apply the configuration to your on-premises VPN device. For more information, see [VPN device configuration](#vpn-device) in this section.
-1. After you've applied the configuration to your VPN devices, it is not required to keep the storage account that you created.
+1. After you've applied the configuration to your VPN devices, you aren't required to keep the storage account that you created.
### <a name="config-file"></a>About the VPN device configuration file
The device configuration file contains the settings to use when configuring your
* **vpnSiteConfiguration -** This section denotes the device details set up as a site connecting to the virtual WAN. It includes the name and public IP address of the branch device. * **vpnSiteConnections -** This section provides information about the following settings:
- * **Address space** of the virtual hub(s) VNet.<br>Example:
+ * **Address space** of the virtual hub(s) virtual network.<br>Example:
``` "AddressSpace":"10.1.0.0/24" ```
- * **Address space** of the VNets that are connected to the virtual hub.<br>Example:
+ * **Address space** of the virtual networks that are connected to the virtual hub.<br>Example:
``` "ConnectedSubnets":["10.2.0.0/16","10.3.0.0/16"]
The device configuration file contains the settings to use when configuring your
"Instance0":"104.45.18.186" "Instance1":"104.45.13.195" ```
- * **Vpngateway connection configuration details** such as BGP, pre-shared key etc. The PSK is the pre-shared key that is automatically generated for you. You can always edit the connection in the **Overview** page for a custom PSK.
+ * **Vpngateway connection configuration details** such as BGP, preshared key etc. The PSK is the preshared key that is automatically generated for you. You can always edit the connection in the **Overview** page for a custom PSK.
### Example device configuration file
web-application-firewall Migrate Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/migrate-policy.md
# Upgrade Web Application Firewall policies using Azure PowerShell
-This script makes it easy to transition from a WAF config, or a custom rules-only WAF policy, to a full WAF policy. You may see a warning in the portal that says *upgrade to WAF policy*, or you may want the new WAF features such as Geomatch custom rules, per-site WAF policy, and per-URI WAF policy, or the bot mitigation ruleset. To use any of these features, you need a full WAF policy associated to your application gateway.
+This script makes it easy to transition from a WAF config, or a custom rules-only WAF policy, to a full WAF policy. You might see a warning in the portal that says *upgrade to WAF policy*, or you might want the new WAF features such as Geomatch custom rules, per-site WAF policy, and per-URI WAF policy, or the bot mitigation ruleset. To use any of these features, you need a full WAF policy associated to your application gateway.
For more information about creating a new WAF policy, see [Create Web Application Firewall policies for Application Gateway](create-waf-policy-ag.md). For information about migrating, see [upgrade to WAF policy](create-waf-policy-ag.md#upgrade-to-waf-policy).
Use the following steps to run the migration script:
1. Open the following Cloud Shell window, or open one from within the portal. 2. Copy the script into the Cloud Shell window and run it.
-3. The script asks for Subscription ID, Resource Group name, the name of the Application Gateway that the WAF config is associated with, and the name of the new WAF policy that you will create. Once you enter these inputs, the script runs and creates your new WAF policy
+3. The script asks for Subscription ID, Resource Group name, the name of the Application Gateway that the WAF config is associated with, and the name of the new WAF policy that you create. Once you enter these inputs, the script runs and creates your new WAF policy.
4. Verify the new WAF policy is associated with your application gateway. Go to the WAF policy in the portal and select the **Associated Application Gateways** tab. Verify the Application Gateway is associated with the WAF policy. > [!NOTE]
function Main() {
return $policy }
-Main
-
-function Main() {
- Login
- $policy = createNewTopLevelWafPolicy -subscriptionId $subscriptionId -resourceGroupName $resourceGroupName -applicationGatewayName $applicationGatewayName -wafPolicyName $wafPolicyName
- return $policy
-}
- Main ``` ## Next steps