Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
advisor | Advisor Assessments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-assessments.md | Title: Use Well Architected Framework assessments in Azure Advisor description: Azure Advisor offers Well Architected Framework assessments (curated and focused Advisor optimization reports) through the Assessments entry in the left menu of the Azure Advisor Portal. Previously updated : 02/18/2024 Last updated : 08/22/2024 #customer intent: As an Advisor user, I want WAF assessments so that I can better understand recommendations. To see all Microsoft assessment choices, go to the [Learn platform > Assessments ## Prerequisites -You can manage access to Advisor WAF assessments using built-in roles. The permissions vary by role. --> [!NOTE] -> These roles must be configured for the relevant subscription to create the assessment and view the corresponding recommendations. --| **Name** | **Description** | -||::| -|Reader|View assessments for a subscription or workload and the corresponding recommendations| -|Contributor|Create assessments for a subscription or workload and triage the corresponding recommendations| ## Access Azure Advisor WAF assessments |
advisor | Advisor Resiliency Reviews | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-resiliency-reviews.md | Title: Azure Advisor resiliency reviews description: Optimize resource resiliency with custom recommendation reviews. Previously updated : 03/8/2024 Last updated : 08/22/2024 Your Microsoft account team works with you to collect information about the work To view or triage recommendations, or to manage recommendations' lifecycles, requires specific role permissions. For definitions, see [Terminology](#terminology). -### Prerequisites to view and triage recommendations --You can manage access to Advisor reviews using built-in roles. The [permissions](/azure/advisor/permissions) vary by role. These roles need to be configured for the subscription that was used to publish the review. --| **Name** | **Description** | **Targeted Subscription** | -||::|::| -|Advisor Reviews Reader|View reviews for a workload and recommendations linked to them.| You need this role for the one subscription your account team used to publish review.| -|Advisor Reviews Contributor|View reviews for a workload and triage recommendations linked to them.| You need this role for the one subscription your account team used to publish review.| --You can manage access to Advisor personalized recommendations using the following roles. These roles need to be configured for the subscriptions included in the workload under a review. 
--| **Name** | **Description** | -||::| -|Subscription Reader|View reviews for a workload and recommendations linked to them.| -|Subscription Owner<br>Subscription Contributor|View reviews for a workload, triage recommendations linked to those reviews, manage the recommendation lifecycle.| -|Advisor Recommendations Contributor (Assessments and Reviews)|View accepted recommendations, and manage the recommendation lifecycle.| --You can find detailed instructions on how to assign a role using the Azure portal - [Assign Azure roles using the Azure portal - Azure RBAC](/azure/role-based-access-control/role-assignments-portal?tabs=delegate-condition). Additional information is available in [Steps to assign an Azure role - Azure RBAC](/azure/role-based-access-control/role-assignments-steps). ### Access reviews |
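The review roles above ("Advisor Reviews Reader" and "Advisor Reviews Contributor") are granted on the one subscription used to publish the review. Besides the portal instructions linked in the article, a role can be assigned from the Azure CLI; the sketch below uses placeholder subscription and user values (replace them with your own), and the `az role assignment create` call itself is commented out because it needs an authenticated session:

```shell
# Placeholder IDs; replace with your own subscription and user.
SUB_ID="00000000-0000-0000-0000-000000000000"
SCOPE="/subscriptions/${SUB_ID}"
echo "Assigning at scope: ${SCOPE}"

# Grant the built-in role used for viewing a published review
# (requires an authenticated az session; not run here):
#   az role assignment create \
#     --assignee "user@contoso.com" \
#     --role "Advisor Reviews Reader" \
#     --scope "${SCOPE}"
```

The scope must point at the subscription your account team used to publish the review, not at the workload subscriptions, which use the separate roles listed in the second table.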
advisor | Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/permissions.md | Title: Roles and permissions -description: Learn about Advisor permissions and how they might block your ability to configure subscriptions or postpone or dismiss recommendations. +description: Learn about Advisor permissions and how to manage access to Advisor recommendations and reviews. Previously updated : 05/03/2024 Last updated : 08/22/2024 # Roles and permissions -Azure Advisor provides recommendations based on the usage and configuration of your Azure resources and subscriptions. Advisor uses the [built-in roles](../role-based-access-control/built-in-roles.md) provided by [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md) to manage your access to recommendations and Advisor features. +Learn how to manage access to recommendations and reviews for your organization. -## Roles and their access +## Roles and associated access -The following table defines the roles and the access they have within Advisor. +Advisor uses the built-in roles provided by Azure role-based access control (Azure RBAC). -| Role | View recommendations | Edit rules | Edit subscription configuration | Edit resource group configuration| Dismiss and postpone recommendations| -||::|::|::|::|::| -|Subscription Owner|**X**|**X**|**X**|**X**|**X**| -|Subscription Contributor|**X**|**X**|**X**|**X**|**X**| -|Subscription Reader|**X**|--|--|--|--| -|Resource group Owner|**X**|--|--|**X**|**X**| -|Resource group Contributor|**X**|--|--|**X**|**X**| -|Resource group Reader|**X**|--|--|--|--| -|Resource Owner|**X**|--|--|--|**X**| -|Resource Contributor|**X**|--|--|--|**X**| -|Resource Reader|**X**|--|--|--|--| +Review the following section to learn more about each role and the associated access. 
++### Roles to view, dismiss, and postpone recommendations ++| Role | View recommendations | Dismiss and postpone recommendations | +|:|: |: | +| Subscription Reader | X | | +| Subscription Contributor | X | X | +| Subscription Owner | X | X | +| Resource group Reader | X | | +| Resource group Contributor | X | X | +| Resource group Owner | X | X | +| Resource Reader | X | | +| Resource Contributor | X | X | +| Resource Owner | X | X | ++### Roles to edit rules and configurations ++| Role | Edit rules | Edit subscription configuration | Edit resource group configuration | +|:|: |: |: | +| Subscription Contributor | X | X | X | +| Subscription Owner | X | X | X | +| Resource group Contributor | | | X | +| Resource group Owner | | | X | ++> [!NOTE] +> To view a recommendation, you must have access to the resource associated with it. ++To learn more about built-in roles, see [Azure built-in roles](/azure/role-based-access-control/built-in-roles "Azure built-in roles | Azure RBAC | Microsoft Learn"). To learn more about Azure role-based access control (Azure RBAC), see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview "What is Azure role-based access control (Azure RBAC)? | Azure RBAC | Microsoft Learn"). ++++## Available actions to build custom roles ++If your organization requires roles that don't match the Azure built-in roles, create your own custom role. A custom role works like a built-in role and allows you to assign it to users, groups, and service principals at management group, subscription, and resource group scopes. Use the following actions to create your custom role. ++| Action | Details | +|: |: | +| `Microsoft.Advisor/generateRecommendations/action` | Create a Recommendation. | +| `Microsoft.Advisor/register/action` | Register with the Provider. | +| `Microsoft.Advisor/unregister/action` | Unregister with the Provider. | +| `Microsoft.Advisor/advisorScore/read` | Gets Advisor score. 
| +| `Microsoft.Advisor/configurations/read` | Read Configurations. | +| `Microsoft.Advisor/configurations/write` | Create or update Configuration. | +| `Microsoft.Advisor/generateRecommendations/read` | Get status of `generateRecommendations` action. | +| `Microsoft.Advisor/metadata/read` | Read Metadata. | +| `Microsoft.Advisor/operations/read` | Get operations. | +| `Microsoft.Advisor/recommendations/read` | Read recommendations. | +| `Microsoft.Advisor/recommendations/write` | Create recommendations. | +| `Microsoft.Advisor/recommendations/available/action` | New recommendation is available. | +| `Microsoft.Advisor/recommendations/suppressions/read` | Read Suppressions. | +| `Microsoft.Advisor/recommendations/suppressions/write` | Create or update Suppressions. | +| `Microsoft.Advisor/recommendations/suppressions/delete` | Delete Suppression. | +| `Microsoft.Advisor/suppressions/read` | Read Suppressions. | +| `Microsoft.Advisor/suppressions/write` | Create or update Suppressions. | +| `Microsoft.Advisor/suppressions/delete` | Delete Suppression. | +| `Microsoft.Advisor/assessmentTypes/read` | Reads `AssessmentTypes`. | +| `Microsoft.Advisor/assessments/read` | Reads Assessments. | +| `Microsoft.Advisor/assessments/write` | Create Assessments. | +| `Microsoft.Advisor/resiliencyReviews/read` | Reads `resiliencyReviews`. | +| `Microsoft.Advisor/triageRecommendations/read` | Reads `triageRecommendations`. | +| `Microsoft.Advisor/triageRecommendations/approve/action` | Approves `triageRecommendations`. | +| `Microsoft.Advisor/triageRecommendations/reject/action` | Rejects `triageRecommendations`. | +| `Microsoft.Advisor/triageRecommendations/reset/action` | Resets `triageRecommendations`. | +| `Microsoft.Advisor/workloads/read` | Reads workloads. | > [!NOTE]-> Access to view recommendations is dependent on your access to the recommendation's impacted resource. 
+> For example, you must have a sufficient permission level for a virtual machine (VM) to view recommendations associated with the VM. ++To learn more about custom roles, see [Azure custom roles](/azure/role-based-access-control/custom-roles "Azure custom roles | Azure RBAC | Microsoft Learn"). ## Permissions and unavailable actions -Lack of proper permissions can block your ability to perform actions in Advisor. You might encounter the following common problems. +If your permission level is too low, your access to the associated action is blocked. Review common problems in the following section. ++### Configure subscription or resource group is blocked -### Unable to configure subscriptions or resource groups +When you try to configure a subscription or resource group, the option to include or exclude is blocked. The blocked status indicates that your permission level for that resource group or subscription is insufficient. To learn how to change your permission level, see [Tutorial: Grant a user access to Azure resources using the Azure portal](/azure/role-based-access-control/quickstart-assign-role-user-portal "Tutorial: Grant a user access to Azure resources using the Azure portal | Azure RBAC | Microsoft Learn"). -When you attempt to configure subscriptions or resource groups in Advisor, you might see that the option to include or exclude is disabled. This status indicates that you don't have a sufficient level of permission for that resource group or subscription. To resolve this problem, learn how to [grant a user access](../role-based-access-control/quickstart-assign-role-user-portal.md). +### Postpone or dismiss is allowed, but sends an error -### Unable to postpone or dismiss a recommendation +When you try to postpone or dismiss a recommendation, you receive an error. The error indicates that your permission level is insufficient. You must have a sufficient permission level to dismiss recommendations. 
-If you receive an error when you try to postpone or dismiss a recommendation, you might not have sufficient permissions. Dismissing a recommendation means you can't see it again unless it's manually reactivated, so you might potentially overlook important advice for optimizing Azure deployments. It's crucial that only users with sufficient permissions can dismiss recommendations. Make sure that you have at least Contributor access to the affected resource of the recommendation that you want to postpone or dismiss. To resolve this problem, learn how to [grant a user access](../role-based-access-control/quickstart-assign-role-user-portal.md). +> [!TIP] +> After you dismiss a recommendation, you must manually reactivate it before it is added to your list of recommendations. If you dismiss a recommendation, you may miss important advice that optimizes your Azure deployment. ++To postpone or dismiss a recommendation, verify that your permission level for the resource associated with the recommendation is set to Contributor or better. To learn how to change your permission level, see [Tutorial: Grant a user access to Azure resources using the Azure portal](/azure/role-based-access-control/quickstart-assign-role-user-portal "Tutorial: Grant a user access to Azure resources using the Azure portal | Azure RBAC | Microsoft Learn"). ## Related content -This article gave an overview of how Advisor uses Azure RBAC to control user permissions and how to resolve common problems. To learn more about Advisor, see: +This article provided an overview of how Advisor uses Azure role-based access control (Azure RBAC) to control user permissions and how to resolve common problems. To learn more about Advisor, see the following articles. 
++* [Introduction to Azure Advisor](./advisor-overview.md "Introduction to Azure Advisor | Azure Advisor | Microsoft Learn") -- [What is Azure Advisor?](./advisor-overview.md)-- [Get started with Azure Advisor](./advisor-get-started.md)+* [Azure Advisor portal basics](./advisor-get-started.md "Azure Advisor portal basics | Azure Advisor | Microsoft Learn") |
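The `Microsoft.Advisor/*` actions listed in the custom-roles table above can be combined into a role definition. The following is a minimal sketch of such a definition (the role name, description, and subscription ID are placeholder values, and the particular set of actions is just one illustrative combination); you could create it with `az role definition create --role-definition @advisor-triager.json`:

```json
{
  "Name": "Advisor Review Triager (custom)",
  "Description": "Read Advisor recommendations and resiliency reviews, and approve or reject triage recommendations.",
  "Actions": [
    "Microsoft.Advisor/recommendations/read",
    "Microsoft.Advisor/resiliencyReviews/read",
    "Microsoft.Advisor/triageRecommendations/read",
    "Microsoft.Advisor/triageRecommendations/approve/action",
    "Microsoft.Advisor/triageRecommendations/reject/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}
```

As the article's note points out, actions on Advisor resources alone aren't sufficient to see a recommendation; the user also needs read access to the impacted resource itself.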
ai-services | Cognitive Services Support Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-support-options.md | +## Get solutions to common issues ++In the Azure portal, you can find answers to common AI service issues. ++1. Go to your Azure AI services resource in the Azure portal. You can find it on the list on this page: [Azure AI services](https://ms.portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/AllCognitiveServices). If you're a United States government customer, use the [Azure portal for the United States government](https://portal.azure.us). +1. In the left pane, under **Help**, select **Support + Troubleshooting**. +1. Describe your issue in the text box, and answer the remaining questions in the form. +1. You'll find Learn articles and other resources that might help you resolve your issue. ++ ## Create an Azure support request <div class='icon is-large'> <img alt='Azure support' src='/media/logos/logo_azure.svg'> </div> -Explore the range of [Azure support options and choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal. +Explore the range of Azure support options and [choose the plan](https://azure.microsoft.com/support/plans) that best fits, whether you're a developer just starting your cloud journey or a large organization deploying business-critical, strategic applications. Azure customers can create and manage support requests in the Azure portal. 
-* [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) -* [Azure portal for the United States government](https://portal.azure.us) +To submit a support request for Azure AI services, follow the instructions on the [New support request](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/NewSupportRequestV3Blade) page in the Azure portal. Select **Cognitive Services** in the **Service type** dropdown field. ## Post a question on Microsoft Q&A |
ai-services | Concept Ocr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-ocr.md | -> For extracting text from PDF, Office, and HTML documents and document images, use the [Document Intelligence Read OCR model](../../ai-services/document-intelligence/concept-read.md) optimized for text-heavy digital and scanned documents with an asynchronous API that makes it easy to power your intelligent document processing scenarios. +> If you want to extract text from PDFs, Office files, or HTML documents and document images, use the [Document Intelligence Read OCR model](../../ai-services/document-intelligence/concept-read.md). It's optimized for text-heavy digital and scanned documents and uses an asynchronous API that makes it easy to power your intelligent document processing scenarios. -OCR traditionally started as a machine-learning-based technique for extracting text from in-the-wild and non-document images like product labels, user-generated images, screenshots, street signs, and posters. For several scenarios, such as single images that aren't text-heavy, you need a fast, synchronous API or service. This allows OCR to be embedded in near real-time user experiences to enrich content understanding and follow-up user actions with fast turn-around times. +OCR is a machine-learning-based technique for extracting text from in-the-wild and non-document images like product labels, user-generated images, screenshots, street signs, and posters. The Azure AI Vision OCR service provides a fast, synchronous API for lightweight scenarios where images aren't text-heavy. This allows OCR to be embedded in near real-time user experiences to enrich content understanding and follow-up user actions with fast turn-around times. -## What is Computer Vision v4.0 Read OCR? +## What is Azure AI Vision v4.0 Read OCR? 
-The new Computer Vision Image Analysis 4.0 REST API offers the ability to extract printed or handwritten text from images in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR results in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md). +The new Azure AI Vision Image Analysis 4.0 REST API offers the ability to extract printed or handwritten text from images in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR results in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md). > [!TIP]-> You can use the OCR feature through the [Azure OpenAI](/azure/ai-services/openai/overview) service. The **GPT-4 Turbo with Vision** model lets you chat with an AI assistant that can analyze the images you share, and the Vision Enhancement option uses Image Analysis to give the AI assistance more details (readable text and object locations) about the image. For more information, see the [GPT-4 Turbo with Vision quickstart](/azure/ai-services/openai/gpt-v-quickstart). +> You can also use the OCR feature in conjunction with the [Azure OpenAI](/azure/ai-services/openai/overview) service. The **GPT-4 Turbo with Vision** model lets you chat with an AI assistant that can analyze the images you share, and the Vision Enhancement option uses Image Analysis to give the AI assistant more details (readable text and object locations) about the image. For more information, see the [GPT-4 Turbo with Vision quickstart](/azure/ai-services/openai/gpt-v-quickstart). ## Text extraction example |
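The unified synchronous API described above exposes OCR through the `read` feature of the Image Analysis 4.0 `imageanalysis:analyze` operation. The following is a minimal Python sketch of how such a request URL is assembled; the endpoint name is a placeholder, the `api-version` value is an assumption to verify against the current REST reference, and the actual call (which needs a real key) is shown only in comments:

```python
from urllib.parse import urlencode

def build_analyze_url(endpoint: str, features: list[str],
                      api_version: str = "2023-10-01") -> str:
    """Build an Image Analysis 4.0 request URL.

    The api-version value is an assumption; check the current REST
    reference for the version your resource supports.
    """
    query = urlencode({"features": ",".join(features),
                       "api-version": api_version})
    return f"{endpoint.rstrip('/')}/computervision/imageanalysis:analyze?{query}"

# Placeholder resource endpoint; replace with your own.
url = build_analyze_url("https://my-resource.cognitiveservices.azure.com", ["read"])

# A real call would then look roughly like this (not executed here,
# since it requires a deployed resource and key):
#   import requests
#   resp = requests.post(
#       url,
#       headers={"Ocp-Apim-Subscription-Key": "<key>",
#                "Content-Type": "application/json"},
#       json={"url": "https://example.com/street-sign.jpg"})
```

Because the operation is synchronous, the OCR result comes back in the same response as any other requested image insights, which is what makes it suitable for the near real-time scenarios the article describes.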
ai-services | Overview Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-identity.md | keywords: facial recognition, facial recognition software, facial analysis, face # What is the Azure AI Face service? --The Azure AI Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identification, touchless access control, and face blurring for privacy. +The Azure AI Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many scenarios, such as identification, touchless access control, and automatic face blurring for privacy. You can use the Face service through a client library SDK or by calling the REST API directly. Follow the quickstart to get started. For a more structured approach, follow a Training module for Face. ## Example use cases -**Verify user identity**: Verify a person against a trusted face image. This verification could be used to grant access to digital or physical properties, such as a bank account, access to a building, and so on. In most cases, the trusted face image could come from a government-issued ID such as a passport or driver’s license, or it could come from an enrollment photo taken in person. During verification, liveness detection can play a critical role in verifying that the image comes from a real person, not a printed photo or mask. For more details on verification with liveness, see the [liveness tutorial](./Tutorials/liveness.md). For identity verification without liveness, follow the [quickstart](./quickstarts-sdk/identity-client-library.md). +The following are common use cases for the Face service: ++**Verify user identity**: Verify a person against a trusted face image. 
This verification could be used to grant access to digital or physical properties such as a bank account, access to a building, and so on. In most cases, the trusted face image could come from a government-issued ID such as a passport or driver’s license, or it could come from an enrollment photo taken in person. During verification, liveness detection can play a critical role in verifying that the image comes from a real person, not a printed photo or mask. For more details on verification with liveness, see the [liveness tutorial](./Tutorials/liveness.md). For identity verification without liveness, follow the [quickstart](./quickstarts-sdk/identity-client-library.md). -**Liveness detection**: Liveness detection is an anti-spoofing feature that checks whether a user is physically present in front of the camera. It's used to prevent spoofing attacks using a printed photo, video, or a 3D mask of the user's face. [Liveness tutorial](./Tutorials/liveness.md) +**Liveness detection**: Liveness detection is an anti-spoofing feature that checks whether a user is physically present in front of the camera. It's used to prevent spoofing attacks using a printed photo, recorded video, or a 3D mask of the user's face. [Liveness tutorial](./Tutorials/liveness.md) **Touchless access control**: Compared to today’s methods like cards or tickets, opt-in face identification enables an enhanced access control experience while reducing the hygiene and security risks from card sharing, loss, or theft. Facial recognition assists the check-in process with a human in the loop for check-ins in airports, stadiums, theme parks, buildings, reception kiosks at offices, hospitals, gyms, clubs, or schools. 
Face liveness SDK reference docs: - [Swift (iOS)](https://aka.ms/liveness-sdk-ios) - [JavaScript (Web)](https://aka.ms/liveness-sdk-web) -## Face recognition +## Face recognition operations Modern enterprises and apps can use the Face recognition technologies, including Face verification ("one-to-one" matching) and Face identification ("one-to-many" matching) to confirm that a user is who they claim to be. And these images are the candidate faces: ![Five images of people smiling. Images A and B show the same person.](./media/FaceFindSimilar.Candidates.jpg) -To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Find Similar API](/rest/api/face/face-recognition-operations/find-similar) reference documentation. +To find four similar faces, the **matchPerson** mode returns A and B, which show the same person as the target face. The **matchFace** mode returns A, B, C, and D, which is exactly four candidates, even if some aren't the same person as the target or have low similarity. For more information, see the [Find Similar API](/rest/api/face/face-recognition-operations/find-similar) reference documentation. ## Group faces The Group operation divides a set of unknown faces into several smaller groups based on similarity. Each group is a disjoint proper subset of the original set of faces. It also returns a single "messyGroup" array that contains the face IDs for which no similarities were found. -All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression, for example. 
For more information, see the [Facial recognition](concept-face-recognition.md) concepts guide or the [Group API](/rest/api/face/face-recognition-operations/group) reference documentation. +All of the faces in a returned group are likely to belong to the same person, but there can be several different groups for a single person. Those groups are differentiated by another factor, such as expression, for example. For more information, see the [Group API](/rest/api/face/face-recognition-operations/group) reference documentation. ## Input requirements As with all of the Azure AI services resources, developers who use the Face serv Follow a quickstart to code the basic components of a face recognition app in the language of your choice. -- [Face quickstart](quickstarts-sdk/identity-client-library.md)+> [!div class="nextstepaction"] +> [Quickstart](quickstarts-sdk/identity-client-library.md) + |
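The **matchPerson** versus **matchFace** distinction described under Find Similar can be illustrated with a small local sketch. This is pure Python with made-up similarity scores and an arbitrary threshold, not a call to the Face API or a description of its internal scoring:

```python
# Local illustration of the two documented Find Similar modes:
# matchPerson first filters to candidates likely to be the same person,
# then returns up to max_candidates; matchFace ranks by similarity alone
# and returns exactly the top candidates regardless of identity.
def find_similar(candidates, mode="matchPerson", max_candidates=4,
                 same_person_threshold=0.5):
    ranked = sorted(candidates, key=lambda c: c["similarity"], reverse=True)
    if mode == "matchPerson":
        ranked = [c for c in ranked if c["similarity"] >= same_person_threshold]
    return [c["id"] for c in ranked[:max_candidates]]

candidates = [
    {"id": "A", "similarity": 0.92},  # same person as the target
    {"id": "B", "similarity": 0.88},  # same person as the target
    {"id": "C", "similarity": 0.31},  # different person
    {"id": "D", "similarity": 0.12},  # different person
]

find_similar(candidates, mode="matchPerson")  # ["A", "B"]
find_similar(candidates, mode="matchFace")    # ["A", "B", "C", "D"]
```

This mirrors the article's example: with four candidates requested, **matchPerson** returns only A and B, while **matchFace** fills the quota with C and D even though they're different people.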
ai-services | Overview Image Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview-image-analysis.md | For a more structured approach, follow a Training module for Image Analysis. ## Analyze Image -You can analyze images to provide insights about their visual features and characteristics. All of the features in this list are provided by the Analyze Image API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library-40.md) to get started. +You can analyze images to provide insights about their visual features and characteristics. All of the features in this table are provided by the Analyze Image API. Follow a [quickstart](./quickstarts-sdk/image-analysis-client-library-40.md) to get started. | Name | Description | Concept page | |||| You can analyze images to provide insights about their visual features and chara |**Moderate content in images** (v3.2 only) |You can use Azure AI Vision to detect adult content in an image and return confidence scores for different classifications. The threshold for flagging content can be set on a sliding scale to accommodate your preferences.|[Detect adult content](concept-detecting-adult-content.md)| > [!TIP]-> You can use the Read text and Object detection features of Image Analysis through the [Azure OpenAI](/azure/ai-services/openai/overview) service. The **GPT-4 Turbo with Vision** model lets you chat with an AI assistant that can analyze the images you share, and the Vision Enhancement option uses Image Analysis to give the AI assistance more details (readable text and object locations) about the image. For more information, see the [GPT-4 Turbo with Vision quickstart](/azure/ai-services/openai/gpt-v-quickstart). +> You can leverage the Read text and Object detection features of Image Analysis through the [Azure OpenAI](/azure/ai-services/openai/overview) service. 
The **GPT-4 Turbo with Vision** model lets you chat with an AI assistant that can analyze the images you share, and the Vision Enhancement option uses Image Analysis to give the AI assistant more details about the image (readable text and object locations). For more information, see the [GPT-4 Turbo with Vision quickstart](/azure/ai-services/openai/gpt-v-quickstart). ## Product Recognition (v4.0 preview only) As with all of the Azure AI services, developers using the Azure AI Vision servi ## Next steps -Get started with Image Analysis by following the quickstart guide in your preferred development language: +Get started with Image Analysis by following the quickstart guide in your preferred development language and API version: - [Quickstart (v4.0): Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library-40.md) - [Quickstart (v3.2): Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md) |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/overview.md | keywords: Azure AI Vision, Azure AI Vision applications, Azure AI Vision service # What is Azure AI Vision? -Azure's Azure AI Vision service gives you access to advanced algorithms that process images and return information based on the visual features you're interested in. +The Azure AI Vision service gives you access to advanced algorithms that process images and return information based on the visual features you're interested in. The following table lists the major product categories. | Service|Description| |||-| [Optical Character Recognition (OCR)](overview-ocr.md)|The Optical Character Recognition (OCR) service extracts text from images. You can use the new Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on various surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.| +| [Optical Character Recognition (OCR)](overview-ocr.md)|The Optical Character Recognition (OCR) service extracts text from images. You can use the Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on various surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). 
Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.| |[Image Analysis](overview-image-analysis.md)| The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions. Follow the [Image Analysis quickstart](quickstarts-sdk/image-analysis-client-library-40.md) to get started.| | [Face](overview-identity.md) | The Face service provides AI algorithms that detect, recognize, and analyze human faces in images. Facial recognition software is important in many different scenarios, such as identification, touchless access control, and face blurring for privacy. Follow the [Face quickstart](quickstarts-sdk/identity-client-library.md) to get started. | | [Video Analysis](intro-to-spatial-analysis-public-preview.md)| Video Analysis includes video-related features like Spatial Analysis and Video Retrieval. Spatial Analysis analyzes the presence and movement of people on a video feed and produces events that other systems can respond to. Install the [Spatial Analysis container](spatial-analysis-container.md) to get started. [Video Retrieval](/azure/ai-services/computer-vision/how-to/video-retrieval) lets you create an index of videos that you can search with natural language.| Azure's Azure AI Vision service gives you access to advanced algorithms that pro Azure AI Vision can power many digital asset management (DAM) scenarios. DAM is the business process of organizing, storing, and retrieving rich media assets and managing digital rights and permissions. For example, a company may want to group and identify images based on visible logos, faces, objects, colors, and so on. Or, you might want to automatically [generate captions for images](./Tutorials/storage-lab-tutorial.md) and attach keywords so they're searchable. 
For an all-in-one DAM solution using Azure AI services, Azure AI Search, and intelligent reporting, see the [Knowledge Mining Solution Accelerator Guide](https://github.com/Azure-Samples/azure-search-knowledge-mining) on GitHub. For other DAM examples, see the [Azure AI Vision Solution Templates](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates) repository. -## Getting started +## Get started Use [Vision Studio](https://portal.vision.cognitive.azure.com/) to try out Azure AI Vision features quickly in your web browser. To get started building Azure AI Vision into your app, follow a quickstart. * [Quickstart: Optical character recognition (OCR)](quickstarts-sdk/client-library.md) * [Quickstart: Image Analysis](quickstarts-sdk/image-analysis-client-library.md)+* [Quickstart: Azure Face](/azure/ai-services/computer-vision/quickstarts-sdk/identity-client-library) * [Quickstart: Spatial Analysis container](spatial-analysis-container.md) ## Image requirements |
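As a rough sketch of what an Image Analysis 4.0 request looks like at the REST level, the analyze URL can be composed as below. The endpoint path and `api-version` value are assumptions for illustration; verify them against the current API reference before use.

```python
import urllib.parse

def build_analyze_url(resource_endpoint: str, features: list[str],
                      api_version: str = "2024-02-01") -> str:
    # Compose the Image Analysis "analyze" URL; path and api-version
    # are illustrative assumptions -- check the API reference.
    query = urllib.parse.urlencode(
        {"api-version": api_version, "features": ",".join(features)}
    )
    return f"{resource_endpoint}/computervision/imageanalysis:analyze?{query}"

url = build_analyze_url(
    "https://my-vision-resource.cognitiveservices.azure.com",  # hypothetical resource
    ["read", "caption"],
)
print(url)
```

An actual call would POST an image URL or image bytes to this address with the resource key in the `Ocp-Apim-Subscription-Key` header; the resource name above is hypothetical.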
ai-services | Image Analysis Client Library 40 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40.md | Title: "Quickstart: Image Analysis 4.0" -description: Learn how to tag images in your application using Image Analysis 4.0 through a native client SDK in the language of your choice. +description: Learn how to read text from images in your application using Image Analysis 4.0 through a native client SDK in the language of your choice. # Previously updated : 02/27/2024 Last updated : 08/21/2024 zone_pivot_groups: programming-languages-computer-vision-40 |
ai-services | Entity Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/personally-identifiable-information/concepts/entity-categories.md | The entity in this category can have the following subcategories. :::row-end::: -## Category: Quantity +## Subcategory: Age -This category contains the following entities: -- :::column span=""::: - **Entity** -- Quantity -- :::column-end::: - :::column span="2"::: - **Details** -- Numbers and numeric quantities. - - :::column-end::: - :::column span="2"::: - **Supported document languages** -- `en`, `es`, `fr`, `de`, `it`, `zh-hans`, `ja`, `ko`, `pt-pt`, `pt-br` - - :::column-end::: --#### Subcategories --The entity in this category can have the following subcategories. +The PII service supports the Age subcategory within the broader Quantity category (since Age is the personally identifiable piece of information). :::row::: :::column span=""::: |
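Because only the Age subcategory of Quantity is treated as PII, a consumer of the service's entity list typically filters on both the category and subcategory fields. A minimal sketch — the entity dicts below are hypothetical values that mimic the shape of a recognition result, not a real API response:

```python
# Hypothetical, pre-fetched PII recognition results; the field names
# here are assumptions that mirror the documented category/subcategory model.
entities = [
    {"text": "John", "category": "Person", "subcategory": None},
    {"text": "40 years old", "category": "Quantity", "subcategory": "Age"},
    {"text": "3 apples", "category": "Quantity", "subcategory": None},
]

def age_entities(results):
    """Keep only Quantity entities with the Age subcategory -- the
    personally identifiable portion of the Quantity category."""
    return [e["text"] for e in results
            if e["category"] == "Quantity" and e["subcategory"] == "Age"]

print(age_entities(entities))  # -> ['40 years old']
```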
ai-services | Content Filter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md | Title: Azure OpenAI Service content filtering description: Learn about the content filtering capabilities of Azure OpenAI in Azure AI services.--++ Previously updated : 06/25/2023 Last updated : 08/22/2024 -Azure OpenAI Service includes a content filtering system that works alongside core models, including DALL-E image generation models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior. +Azure OpenAI Service includes a content filtering system that works alongside core models, including DALL-E image generation models. This system works by running both the prompt and completion through an ensemble of classification models designed to detect and prevent the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior. -The content filtering models for the hate, sexual, violence, and self-harm categories have been specifically trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application. 
+The text content filtering models for the hate, sexual, violence, and self-harm categories have been specifically trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. However, the service can work in many other languages, but the quality might vary. In all cases, you should do your own testing to ensure that it works for your application. -In addition to the content filtering system, the Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation). +In addition to the content filtering system, Azure OpenAI Service performs monitoring to detect content and/or behaviors that suggest use of the service in a manner that might violate applicable product terms. For more information about understanding and mitigating risks associated with your application, see the [Transparency Note for Azure OpenAI](/legal/cognitive-services/openai/transparency-note?tabs=text). For more information about how data is processed for content filtering and abuse monitoring, see [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation). 
The following sections provide information about the content filtering categories, the filtering severity levels and their configurability, and API scenarios to be considered in application design and implementation. -## Content filtering categories +## Content filter types The content filtering system integrated in the Azure OpenAI Service contains: * Neural multi-class classification models aimed at detecting and filtering harmful content; the models cover four categories (hate, sexual, violence, and self-harm) across four severity levels (safe, low, medium, and high). Content detected at the 'safe' severity level is labeled in annotations but isn't subject to filtering and isn't configurable. * Other optional classification models aimed at detecting jailbreak risk and known content for text and code; these models are binary classifiers that flag whether user or model behavior qualifies as a jailbreak attack or a match to known text or source code. The use of these models is optional, but use of the protected material code model may be required for Customer Copyright Commitment coverage. -## Risk categories +### Risk categories <!-- Text and image models support Drugs as an additional classification. This category covers advice related to Drugs and depictions of recreational and non-recreational drugs. Text and image models support Drugs as an additional classification. This catego | Self-Harm | Self-harm describes language related to physical actions intended to purposely hurt, injure, damage one's body or kill oneself. <br><br> This includes, but isn't limited to: <ul><li>Eating Disorders</li><li>Bullying and intimidation</li></ul> | | Protected Material for Text<sup>*</sup> | Protected material text describes known text content (for example, song lyrics, articles, recipes, and selected web content) that can be outputted by large language models. 
| Protected Material for Code | Protected material code describes source code that matches a set of source code from public repositories, which can be outputted by large language models without proper citation of source repositories.+|User Prompt Attacks |User prompt attacks are User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or to break the rules set in the System Message. Such attacks can vary from intricate roleplay to subtle subversion of the safety objective. | +|Indirect Attacks |Indirect Attacks, also referred to as Indirect Prompt Attacks or Cross-Domain Prompt Injection Attacks, are a potential vulnerability where third parties place malicious instructions inside of documents that the Generative AI system can access and process. Requires [document embedding and formatting](#embedding-documents-in-your-prompt). | <sup>*</sup> If you're an owner of text material and want to submit text content for protection, [file a request](https://aka.ms/protectedmaterialsform). -## Prompt Shields --|Type| Description| -|--|--| -|Prompt Shield for User Prompt Attacks |User prompt attacks are User Prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or to break the rules set in the System Message. Such attacks can vary from intricate roleplay to subtle subversion of the safety objective. | -|Prompt Shield for Indirect Attacks |Indirect Attacks, also referred to as Indirect Prompt Attacks or Cross-Domain Prompt Injection Attacks, are a potential vulnerability where third parties place malicious instructions inside of documents that the Generative AI system can access and process. Requires [document embedding and formatting](#embedding-documents-in-your-prompt). 
| -- [!INCLUDE [severity-levels text, four-level](../../content-safety/includes/severity-levels-text-four.md)] Configurable content filters for inputs (prompts) and outputs (completions) are * GPT model series * GPT-4 Turbo Vision GA<sup>*</sup> (turbo-2024-04-09)-* GPT-4o +* GPT-4o +* GPT-4o mini * DALL-E 2 and 3 <sup>*</sup>Only available for GPT-4 Turbo Vision GA, does not apply to GPT-4 Turbo Vision preview For enhanced detection capabilities, prompts should be formatted according to th The Chat Completion API is structured by definition. It consists of a list of messages, each with an assigned role. -The safety system parses this structured format and apply the following behavior: +The safety system parses this structured format and applies the following behavior: - On the latest "user" content, the following categories of RAI Risks will be detected: - Hate - Sexual |
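The structured Chat Completion format described above can be illustrated with a small sketch: the risk categories are evaluated against the latest "user" message in the list, which can be located like this. This is illustrative only, not the service's implementation; the message shape mirrors the Chat Completion API's role/content structure.

```python
def latest_user_content(messages):
    """Return the content of the most recent 'user' message -- the text
    that the filtering categories are evaluated against."""
    for msg in reversed(messages):
        if msg["role"] == "user":
            return msg["content"]
    return None

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this document."},
    {"role": "assistant", "content": "Sure - which document?"},
    {"role": "user", "content": "The attached sales report."},
]
print(latest_user_content(messages))  # -> The attached sales report.
```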
ai-services | Model Retirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-retirements.md | These models are currently available for use in Azure OpenAI Service. | Model | Version | Retirement date | Suggested replacement | | - | - | - | |-| `gpt-35-turbo` | 0301 | No earlier than October 1, 2024 | `gpt-4o-mini` | +| `gpt-35-turbo` | 0301 | No earlier than November 1, 2024 | `gpt-4o-mini` | | `gpt-35-turbo`<br>`gpt-35-turbo-16k` | 0613 | November 1, 2024 | `gpt-4o-mini` | | `gpt-35-turbo` | 1106 | No earlier than Nov 17, 2024 | `gpt-4o-mini` | | `gpt-35-turbo` | 0125 | No earlier than Feb 22, 2025 | `gpt-4o-mini` |-| `gpt-4`<br>`gpt-4-32k` | 0314 | **Deprecation:** October 1, 2024 <br> **Retirement:** June 6, 2025 | `gpt-4o` | -| `gpt-4`<br>`gpt-4-32k` | 0613 | **Deprecation:** October 1, 2024 <br> **Retirement:** June 6, 2025 | `gpt-4o` | +| `gpt-4`<br>`gpt-4-32k` | 0314 | **Deprecation:** November 1, 2024 <br> **Retirement:** June 6, 2025 | `gpt-4o` | +| `gpt-4`<br>`gpt-4-32k` | 0613 | **Deprecation:** November 1, 2024 <br> **Retirement:** June 6, 2025 | `gpt-4o` | | `gpt-4` | 1106-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** | `gpt-4o`| | `gpt-4` | 0125-preview |To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** | `gpt-4o` | | `gpt-4` | vision-preview | To be upgraded to `gpt-4` Version: `turbo-2024-04-09`, starting on November 15, 2024, or later **<sup>1</sup>** | `gpt-4o`| If you're an existing customer looking for information about these models, see [ ## Retirement and deprecation history +### August 22, 2024 ++* Updated `gpt-35-turbo` (0301) retirement date to no earlier than November 1, 2024. +* Updated `gpt4` and `gpt-4-32k` (0314 and 0613) deprecation date to November 1, 2024. 
+ ### August 8, 2024 * Updated `gpt-35-turbo` & `gpt-35-turbo-16k` (0613) model's retirement date to November 1, 2024. |
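The firm dates in the retirement table above can be captured in a small lookup. This is a sketch: entries marked "no earlier than" are tentative and omitted, and the dates reflect the table as of this update.

```python
import datetime

# Hard retirement dates from the table above; "no earlier than"
# entries are tentative and excluded from this sketch.
RETIREMENTS = {
    ("gpt-35-turbo", "0613"): datetime.date(2024, 11, 1),
    ("gpt-35-turbo-16k", "0613"): datetime.date(2024, 11, 1),
    ("gpt-4", "0314"): datetime.date(2025, 6, 6),
    ("gpt-4", "0613"): datetime.date(2025, 6, 6),
    ("gpt-4-32k", "0314"): datetime.date(2025, 6, 6),
    ("gpt-4-32k", "0613"): datetime.date(2025, 6, 6),
}

def is_retired(model: str, version: str, on: datetime.date) -> bool:
    """True if the model/version has a known retirement date on or before 'on'."""
    retired = RETIREMENTS.get((model, version))
    return retired is not None and on >= retired

print(is_retired("gpt-4", "0613", datetime.date(2025, 7, 1)))   # True
print(is_retired("gpt-4", "0613", datetime.date(2024, 12, 1)))  # False
```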
ai-services | Provisioned Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-migration.md | The capabilities below are rolling out for the Provisioned Managed offering. ## Usability improvement details -Provisioned quota granularity is changing from model-specific to model-independent. Rather than each model and version within subscription and region having its own quota limit, there is a single quota item per subscription and region that limits the total number of PTUs that can be deployed across all supported models and versions. +Provisioned quota granularity is changing from model-specific to model-independent. Rather than each model and version within subscription and region having its own quota limit, there's a single quota item per subscription and region that limits the total number of PTUs that can be deployed across all supported models and versions. ## Model-independent quota -Starting August 12, 2024, existing customers' current, model-specific quota has been converted to model-independent. This happens automatically. No quota is lost in the transition. Existing quota limits are summed and assigned to a new model-independent quota item. +Starting on August 12, 2024, existing customers' current, model-specific quota has been converted to model-independent. This happens automatically. No quota is lost in the transition. Existing quota limits are summed and assigned to a new model-independent quota item. :::image type="content" source="../media/provisioned/consolidation.png" alt-text="Diagram showing quota consolidation." lightbox="../media/provisioned/consolidation.png"::: For existing customers, if the region already contains a quota assignment, the q Customers no longer obtain quota by contacting their sales teams. Instead, they use the self-service quota request form and specify the PTU-Managed quota type. The form is accessible from a link to the right of the quota item. 
The target is to respond to all quota requests within two business days. -The quota screenshot below shows model-independent quota being used by deployments of different types, as well as the link for requesting additional quota. +The following quota screenshot shows model-independent quota being used by deployments of different types, as well as the link for requesting additional quota. :::image type="content" source="../media/provisioned/quota-request-type.png" alt-text="Screenshot of new request type UI for Azure OpenAI provisioned for requesting more quota." lightbox="../media/provisioned/quota-request-type.png"::: The quota screenshot below shows model-independent quota being used by deploymen Prior to the August update, Azure OpenAI Provisioned was only available to a few customers, and quota was allocated to maximize the ability for them to deploy and use it. With these changes, the process of acquiring quota is simplified for all users, and there is a greater likelihood of running into service capacity limitations when deployments are attempted. A new API and Studio experience are available to help users find regions where the subscription has quota and the service has capacity to support deployments of a desired model. -We also recommend that customers using commitments now create their deployments prior to creating or expanding commitments to cover them. This guarantees that capacity is available prior to creating a commitment and prevents over-purchase of the commitment. To support this, the restriction that prevented deployments from being created larger than their commitments has been removed. This new approach to quota, capacity availability and commitments matches what is provided under the hourly/reservation model, and the guidance to deploy before purchasing a commitment (or reservation, for the hourly model) is the same for both. 
+We also recommend that customers using commitments now create their deployments before creating or expanding commitments to cover them. This guarantees that capacity is available before creating a commitment and prevents over-purchase of the commitment. To support this, the restriction that prevented deployments from being created larger than their commitments has been removed. This new approach to quota, capacity availability and commitments matches what is provided under the hourly/reservation model, and the guidance to deploy before purchasing a commitment (or reservation, for the hourly model) is the same for both. See the following links for more information. The guidance for reservations and commitments is the same: ## New hourly reservation payment model > [!NOTE]-> The following discussion of payment models does not apply to the older "Provisioned Classic (PTU-C)" offering. They only affect the Provisioned (aka Provisioned Managed) offering. Provisioned Classic continues to be governed by the monthly commitment payment model, unchanged from today. +> The following description of payment models does not apply to the older "Provisioned Classic (PTU-C)" offering. It applies only to the Provisioned (aka Provisioned Managed) offering. Provisioned Classic continues to be governed by the unchanged monthly commitment payment model. Microsoft has introduced a new "Hourly/reservation" payment model for provisioned deployments. This is in addition to the current **Commitment** payment model, which will continue to be supported at least through the end of 2024. ### Commitment payment model -- Regional, monthly commitment is required to use provisioned (longer terms available contractually).+- A regional, monthly commitment is required to use provisioned (longer terms available contractually).
-- Commitments are bound to Azure OpenAI resources, making moving deployments across resources difficult.+- Commitments are bound to Azure OpenAI resources, which makes moving deployments across resources difficult. - Commitments can't be canceled or altered during their term, except to add new PTUs. Microsoft has introduced a new "Hourly/reservation" payment model for provisione - Supports all models, both old and new. > [!IMPORTANT]-> **Models released after August 1, 2024 require the use of the Hourly/Reservation payment model.** They are not deployable on Azure OpenAI resources that have active commitments. To deploy models released after August 1, exiting customers must either: +> **Models released after August 1, 2024 require the use of the Hourly/Reservation payment model.** They are not deployable on Azure OpenAI resources that have active commitments. To deploy models released after August 1, existing customers must either: > - Create deployments on Azure OpenAI resources without commitments.-> - Migrate an existing resources off its commitments. +> - Migrate an existing resource off its commitments. -## Hourly reservation model details +## Payment model framework -Details on the hourly/reservation model can be found in the [Azure OpenAI Provisioned Onboarding Guide](../how-to/provisioned-throughput-onboarding.md). +With the release of the hourly/reserved payment model, payment options are more flexible and the model around provisioned payments has changed. When the one-month commitments were the only way to purchase provisioned, the model was: -### Commitment and hourly reservation coexistence +1. Get a PTU quota from your Microsoft account team. +1. "Purchase" quota from a commitment on the resource where you want to deploy. +1. Create deployments on the resource up to the limit of the commitment. -Customers that have commitments aren't required to use the hourly/reservation model. 
They can continue to use existing commitments, purchase new commitments, and manage commitments as they do currently. +The key difference between this model and the new model is that previously the only way to pay for provisioned was through a one-month term discount. Now, you can deploy and pay for deployments hourly if you choose and make a separate decision on whether to discount them via **either** a one-month commitment (like before) or an Azure reservation. -A customer can also decide to use both payment models in the same subscription/region. In this case, **the payment model for a deployment depends on the resource to which it is attached.** +With this insight, the new way to think about payment models is the following: -**Deployments on resources with active commitments follow the commitment payment model.** +1. Get a PTU quota using the self-service form. +1. Create deployments using your quota. +1. Optionally purchase or extend a commitment or a reservation to apply a term discount to your deployments. -- The monthly commitment purchase covers the deployed PTUs.+Steps 1 and 2 are the same in all cases. The difference is whether a commitment or an Azure reservation is used as the vehicle to provide the discount. In both models: -- Hourly overage charges are generated if the deployed PTUs ever become greater than the committed PTUs.+* It's possible to deploy more PTUs than you discount. (for example creating a short-term deployment to try a new model is enabled by deploying without purchasing a discount) +* The discount method (commitment or reservation) applies the discounted price to a fixed number of PTUs and has a scope that defines which deployments are counted against the discount. -- All existing discounts attached to the monthly commitment SKU continue to apply. 
-- **Azure Reservations DO NOT apply additional discounts on top of the monthly commitment SKU**, however they will apply discounts to any overages (this behavior is new).+ |Discount type |Available Scopes (within a region) | + ||| + |Commitment | Azure OpenAI resource | + |Reservation | Resource group, single subscription, management group (group of subscriptions), shared (all subscriptions in a billing account) | -- The **Manage Commitments** page in Studio is used to purchase and manage commitments.+* The discounted price is applied to deployed PTUs up to the number of discounted PTUs in the discount. +* The number of deployed PTUs exceeding the discounted PTUs (or not covered by any discount) are charged the hourly rate. +* The best practice is to create deployments first, and then to apply discounts. This guarantees that service capacity is available to support your deployments before you create a term commitment for PTUs you can't use. -Deployments on resources without commitments (or only expired commitments) follow the Hourly/Reservation payment model. -- Deployments generate hourly charges under the new Hourly/Reservation SKU and meter.-- Azure Reservations can be purchased to discount the PTUs for deployments.-- Reservations are purchased and managed from the Reservation blade of the Azure portal (not within Studio).+ > [!NOTE] +> When you follow best practices, you may receive hourly charges between the time you create the deployment and increase your discount (commitment or reservation). +> +> For this reason, we recommend that you be prepared to increase your discount immediately following the deployment. The prerequisites for purchasing Azure reservations are different than for commitments, and we recommend you validate them prior to deployment if you intend to use them to discount your deployment. 
For more information, see [Permissions to view and manage Azure reservations](../../../cost-management-billing/reservations/view-reservations.md) ++## Mapping deployments to discounting method ++Customers using Azure OpenAI Provisioned prior to August 2024 can use either or both payment models simultaneously within a subscription. The payment model used for each deployment is determined based on its Azure OpenAI resource: +++**Resource has an active Commitment** ++* The commitment discounts all deployments on the resource up to the number of PTUs on the commitment. Any excess PTUs will be billed hourly. ++**Resource does not have an active commitment** ++* The deployments under the resource are eligible to be discounted by an Azure reservation. For these deployments to be discounted, they must exist within the scope of an active reservation. All deployments within the scope of the reservation (including possibly deployments on other resources in the same or other subscriptions) will be discounted as a group up to the number of PTUs on the reservation. Any excess PTUs will be billed hourly. -If a deployment is on a resource that has a commitment, and that commitment expires, the deployment will automatically shift to hourly billing. ### Changes to the existing payment model Customers must reach out to their account teams to schedule a managed migration. - All commitments in a subscription/region must be migrated at the same time. - A migration time must be coordinated with the Microsoft team. - +## Managing Provisioned Throughput Commitments ++Provisioned throughput commitments are created and managed from the **Manage Commitments** menu in Azure OpenAI Studio. You can navigate to this view by selecting **Manage Commitments** from the Quota menu: +++From the **Manage Commitments** view, you can do several things: ++- Purchase new commitments or edit existing commitments. +- Monitor all commitments in your subscription. 
+- Identify and take action on commitments that might cause unexpected billing. ++The following sections will take you through these tasks. ++## Purchase a Provisioned Throughput Commitment ++With your commitment plan ready, the next step is to create the commitments. Commitments are created manually via Azure OpenAI Studio and require the user creating the commitment to have either the [Contributor or Cognitive Services Contributor role](../how-to/role-based-access-control.md) at the subscription level. ++For each new commitment you need to create, follow these steps: ++1. Launch the Provisioned Throughput purchase dialog by selecting **Quotas** > **Provisioned** > **Manage Commitments**. +++2. Select **Purchase commitment**. ++3. Select the Azure OpenAI resource and purchase the commitment. You will see your resources divided into resources with existing commitments, which you can edit, and resources that don't currently have a commitment. ++| Setting | Notes | +||-| +| **Select a resource** | Choose the resource where you'll create the provisioned deployment. Once you have purchased the commitment, you will be unable to use the PTUs on another resource until the current commitment expires. | +| **Select a commitment type** | Select Provisioned. (Provisioned is equivalent to Provisioned Managed) | +| **Current uncommitted provisioned quota** | The number of PTUs currently available for you to commit to this resource. | +| **Amount to commit (PTU)** | Choose the number of PTUs you're committing to. **This number can be increased during the commitment term, but can't be decreased**. Enter values in increments of 50 for the commitment type Provisioned. | +| **Commitment tier for current period** | The commitment period is set to one month. | +| **Renewal settings** | Autorenew at current PTUs <br> Autorenew at lower PTUs <br> Don't autorenew | ++4. Select Purchase. A confirmation dialog will be displayed. 
After you confirm, your PTUs will be committed, and you can use them to create a provisioned deployment. +++> [!IMPORTANT] +> A new commitment is billed up-front for the entire term. If the renewal settings are set to auto-renew, then you will be billed again on each renewal date based on the renewal settings. ++### Edit an existing Provisioned Throughput commitment ++From the Manage Commitments view, you can also edit an existing commitment. There are two types of changes you can make to an existing commitment: ++- You can add PTUs to the commitment. +- You can change the renewal settings. ++To edit a commitment, select the commitment to edit, then select Edit commitment. ++### Adding Provisioned Throughput Units to existing commitments ++Adding PTUs to an existing commitment will allow you to create larger or more numerous deployments within the resource. You can do this at any time during the term of your commitment. +++> [!IMPORTANT] +> When you add PTUs to a commitment, they will be billed immediately, at a pro-rated amount from the current date to the end of the existing commitment term. Adding PTUs does not reset the commitment term. ++### Changing renewal settings ++Commitment renewal settings can be changed at any time before the expiration date of your commitment. Reasons you might want to change the renewal settings include ending your use of provisioned throughput by setting the commitment to not autorenew, or to decrease usage of provisioned throughput by lowering the number of PTUs that will be committed in the next period. ++> [!IMPORTANT] +> If you allow a commitment to expire or decrease in size such that the deployments under the resource require more PTUs than you have in your resource commitment, you will receive hourly overage charges for any excess PTUs. For example, a resource that has deployments that total 500 PTUs and a commitment for 300 PTUs will generate hourly overage charges for 200 PTUs. 
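The overage rule in the note above — deployed PTUs beyond the commitment are billed hourly — reduces to a simple split, sketched here:

```python
def ptu_billing_split(deployed_ptus: int, discounted_ptus: int):
    """Split deployed PTUs into the portion covered by the commitment
    (or reservation) and the portion billed at the hourly rate."""
    covered = min(deployed_ptus, discounted_ptus)
    hourly_overage = max(0, deployed_ptus - discounted_ptus)
    return covered, hourly_overage

# Example from the note: 500 PTUs deployed against a 300 PTU commitment.
print(ptu_billing_split(500, 300))  # -> (300, 200): 200 PTUs billed hourly
```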
++## Monitor commitments and prevent unexpected billings ++The manage commitments pane provides a subscription-wide overview of all resources with commitments and PTU usage within a given Azure Subscription. Of particular interest are: ++- **PTUs Committed, Deployed and Usage** - These figures provide the sizes of your commitments, and how much is in use by deployments. Maximize your investment by using all of your committed PTUs. +- **Expiration policy and date** - The expiration date and policy tell you when a commitment will expire and what will happen when it does. A commitment set to autorenew will generate a billing event on the renewal date. For commitments that are expiring, be sure you delete deployments from these resources prior to the expiration date to prevent hourly overage billing. +- **Notifications** - Alerts regarding important conditions like unused commitments, and configurations that might result in billing overages. Billing overages can be caused by situations such as when a commitment has expired and deployments are still present, but have shifted to hourly billing. ++## Common Commitment Management Scenarios ++**Discontinue use of provisioned throughput** ++To end use of provisioned throughput and prevent hourly overage charges after the current commitments expire, two steps must be taken: ++1. Set the renewal policy on all commitments to *Don't autorenew*. +2. Delete the provisioned deployments using the quota. ++**Move a commitment/deployment to a new resource in the same subscription/region** ++It isn't possible in Azure OpenAI Studio to directly *move* a deployment or a commitment to a new resource. Instead, a new deployment needs to be created on the target resource and traffic moved to it. A commitment will need to be purchased on the new resource to accomplish this. 
Because commitments are charged up-front for a 30-day period, it's necessary to time this move with the expiration of the original commitment to minimize overlap with the new commitment and "double-billing" during the overlap. ++There are two approaches that can be taken to implement this transition. ++**Option 1: No-Overlap Switchover** ++This option requires some downtime, but no extra quota, and generates no extra costs. ++| Steps | Notes | +|-|-| +|Set the renewal policy on the existing commitment to expire| This will prevent the commitment from renewing and generating further charges | +|Before expiration of the existing commitment, delete its deployment | Downtime will start at this point and will last until the new deployment is created and traffic is moved. You'll minimize the duration by timing the deletion to happen as close to the expiration date/time as possible.| +|After expiration of the existing commitment, create the commitment on the new resource|Minimize downtime by executing this and the next step as soon after expiration as possible.| +|Create the deployment on the new resource and move traffic to it|| ++**Option 2: Overlapped Switchover** ++This option avoids downtime by keeping both the existing and new deployments live at the same time. It requires quota available to create the new deployment, and will generate extra costs for the duration of the overlapped deployments. ++| Steps | Notes | +|-|-| +|Set the renewal policy on the existing commitment to expire| Doing so prevents the commitment from renewing and generating further charges.| +|Before expiration of the existing commitment:<br>1. Create the commitment on the new resource.<br>2. Create the new deployment.<br>3. Switch traffic<br>4. Delete existing deployment| Ensure you leave enough time for all steps before the existing commitment expires, otherwise overage charges will be generated (see the next section for options). 
| ++If the final step takes longer than expected and will finish after the existing commitment expires, there are three options to minimize overage charges. ++- **Take downtime**: Delete the original deployment, then complete the move. +- **Pay overage**: Keep the original deployment and pay hourly until you have moved off traffic and deleted the deployment. +- **Reset the original commitment** to renew one more time. This will give you time to complete the move with a known cost. ++Both paying for an overage and resetting the original commitment will generate charges beyond the original expiration date. Paying overage charges might be cheaper than a new one-month commitment if you only need a day or two to complete the move. Compare the costs of both options to find the lowest-cost approach. ++### Move the deployment to a new region and/or subscription ++The same approaches apply when moving the commitment and deployment to a new region or subscription, except that having available quota in the new location is required in all cases. ++### View and edit an existing resource ++In Azure OpenAI Studio, select **Quota** > **Provisioned** > **Manage commitments** and select a resource with an existing commitment to view/change it. |
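The overage and pro-rated billing rules described above can be sketched numerically. This is an illustrative sketch only: the function names are hypothetical and the monthly rate is a placeholder, not actual Azure OpenAI pricing.

```python
# Illustrative sketch of the PTU billing arithmetic described above.
# Function names and the rate value are placeholders, not Azure pricing.

def overage_ptus(deployed_ptus: int, committed_ptus: int) -> int:
    """PTUs billed at the hourly overage rate: any excess of deployments over the commitment."""
    return max(0, deployed_ptus - committed_ptus)

def prorated_add_cost(added_ptus: int, monthly_rate_per_ptu: float,
                      days_remaining: int, term_days: int = 30) -> float:
    """Pro-rated charge for PTUs added mid-term, billed from now to the end of the existing term."""
    return added_ptus * monthly_rate_per_ptu * (days_remaining / term_days)

# The example from the note above: 500 PTUs deployed against a 300 PTU commitment.
print(overage_ptus(500, 300))              # 200 PTUs incur hourly overage charges
print(prorated_add_cost(100, 260.0, 15))   # 100 PTUs added with half the term remaining
```

The same arithmetic also helps compare the "pay overage" and "reset the commitment" options: compute the hourly overage for the days you expect the move to overlap and compare it against one more month of commitment charges.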
ai-services | Dall E Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/dall-e-quickstart.md | -> The image generation API creates an image from a text prompt. It does not edit existing images or create variations. +> The image generation API creates an image from a text prompt. It does not edit or create variations from existing images. ::: zone pivot="programming-language-studio" zone_pivot_groups: openai-quickstart-dall-e ::: zone-end - ::: zone pivot="programming-language-powershell" [!INCLUDE [PowerShell quickstart](includes/dall-e-powershell.md)] |
ai-services | Gpt V Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/gpt-v-quickstart.md | |
ai-services | Fine Tuning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/fine-tuning.md | In contrast to few-shot learning, fine tuning improves the model by training on We use LoRA, or low rank approximation, to fine-tune models in a way that reduces their complexity without significantly affecting their performance. This method works by approximating the original high-rank matrix with a lower rank one, thus only fine-tuning a smaller subset of "important" parameters during the supervised training phase, making the model more manageable and efficient. For users, this makes training faster and more affordable than other techniques. +> [!NOTE] +> Azure OpenAI currently only supports text-to-text fine-tuning for all supported models including GPT-4o mini. + ::: zone pivot="programming-language-studio" [!INCLUDE [Azure OpenAI Studio fine-tuning](../includes/fine-tuning-studio.md)] |
ai-services | Gpt With Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md | -The GPT-4 Turbo with Vision model answers general questions about what's present in the images. You can also show it video if you use [Vision enhancement](#use-vision-enhancement-with-video). +The GPT-4 Turbo with Vision model answers general questions about what's present in images. You can also show it video if you use [Vision enhancement](#use-vision-enhancement-with-video). > [!TIP] > To use GPT-4 Turbo with Vision, you call the Chat Completion API on a GPT-4 Turbo with Vision model that you have deployed. If you're not familiar with the Chat Completion API, see the [GPT-4 Turbo & GPT-4 how-to guide](/azure/ai-services/openai/how-to/chatgpt?tabs=python&pivots=programming-language-chat-completions). The API response should look like the following. } ``` -Every response includes a `"finish_details"` field. It has the following possible values: +Every response includes a `"finish_reason"` field. It has the following possible values: - `stop`: API returned complete model output. - `length`: Incomplete model output due to the `max_tokens` input parameter or model's token limit. - `content_filter`: Omitted content due to a flag from our content filters. -## Detail parameter settings in image processing: Low, High, Auto +### Detail parameter settings in image processing: Low, High, Auto The _detail_ parameter in the model offers three choices: `low`, `high`, or `auto`, to adjust the way the model interprets and processes images. The default setting is auto, where the model decides between low or high based on the size of the image input. - `low` setting: the model does not activate the "high res" mode, instead processes a lower resolution 512x512 version, resulting in quicker responses and reduced token consumption for scenarios where fine detail isn't crucial. 
The chat responses you receive from the model should now include enhanced inform "choices": [ {- "finish_details": { + "finish_reason": { "type": "stop", "stop": "<|fim_suffix|>" }, The chat responses you receive from the model should now include enhanced inform } ``` -Every response includes a `"finish_details"` field. It has the following possible values: +Every response includes a `"finish_reason"` field. It has the following possible values: - `stop`: API returned complete model output. - `length`: Incomplete model output due to the `max_tokens` input parameter or model's token limit. - `content_filter`: Omitted content due to a flag from our content filters. The chat responses you receive from the model should include information about t } ``` -Every response includes a `"finish_details"` field. It has the following possible values: +Every response includes a `"finish_reason"` field. It has the following possible values: - `stop`: API returned complete model output. - `length`: Incomplete model output due to the `max_tokens` input parameter or model's token limit. - `content_filter`: Omitted content due to a flag from our content filters. |
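The diff above corrects the response field name from `finish_details` to `finish_reason`. A minimal sketch of how a client might branch on the three documented values (`stop`, `length`, `content_filter`); the function name is illustrative, not from the article.

```python
def handle_finish_reason(finish_reason: str) -> str:
    """Map the documented finish_reason values to a short description of the outcome."""
    outcomes = {
        "stop": "API returned complete model output.",
        "length": "Incomplete output: hit max_tokens or the model's token limit.",
        "content_filter": "Content omitted due to a content-filter flag.",
    }
    # Fall back gracefully for values this sketch doesn't know about.
    return outcomes.get(finish_reason, f"Unrecognized finish_reason: {finish_reason}")

print(handle_finish_reason("length"))
```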
ai-services | Provisioned Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-get-started.md | Learn more about the purchase model and how to purchase a reservation: * [Azure OpenAI provisioned onboarding guide](./provisioned-throughput-onboarding.md) * [Guide for Azure OpenAI provisioned reservations](../concepts/provisioned-throughput.md) +## Optionally purchase a reservation ++Following the creation of your deployment, you might want to purchase a term discount via an Azure Reservation. An Azure Reservation can provide a substantial discount on the hourly rate for users intending to use the deployment beyond a few days. ++For more information on purchasing a reservation, see [Save costs with Microsoft Azure OpenAI service Provisioned Reservations](../../../cost-management-billing/reservations/azure-openai.md). + ## Make your first inferencing calls The inferencing code for provisioned deployments is the same as for a standard deployment type. The following code snippet shows a chat completions call to a GPT-4 model. For your first time using these models programmatically, we recommend starting with our [quickstart guide](../quickstart.md). Our recommendation is to use the OpenAI library with version 1.0 or greater, since this includes retry logic within the library. |
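The row above notes that the inferencing code for a provisioned deployment is the same as for a standard deployment. A minimal sketch of the request body such a chat completions call sends; the deployment name and message content are placeholders, not values from the source.

```python
import json

# Sketch of a chat-completions request body. For Azure OpenAI, the "model"
# value is your deployment name; "gpt-4" here is a placeholder.
def build_chat_request(deployment: str, user_message: str, max_tokens: int = 256) -> dict:
    return {
        "model": deployment,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
    }

body = build_chat_request("gpt-4", "Hello!")
print(json.dumps(body, indent=2))
```

The same body works whether the target deployment is provisioned or standard; only the deployment name differs.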
ai-services | Provisioned Throughput Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md | To assist customers with purchasing the correct reservation amounts. The total n :::image type="content" source="../media/provisioned/available-quota.png" alt-text="A screenshot showing available PTU quota." lightbox="../media/provisioned/available-quota.png"::: +## Managing Azure Reservations ++After a reservation is created, it is a best practice to monitor it to ensure it is receiving the usage you are expecting. This may be done via the Azure Reservation Portal or Azure Monitor. Details on these topics and others can be found here: ++* [View Azure reservation utilization](../../../cost-management-billing/reservations/reservation-utilization.md) +* [View Azure Reservation purchase and refund transactions](../../../cost-management-billing/reservations/view-purchase-refunds.md) +* [View amortized benefit costs](../../../cost-management-billing/reservations/view-amortized-costs.md) +* [Charge back Azure Reservation costs](../../../cost-management-billing/reservations/charge-back-usage.md) +* [Automatically renew Azure reservations](../../../cost-management-billing/reservations/reservation-renew.md) ## Next steps |
ai-services | Use Your Data Securely | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-your-data-securely.md | The [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains) ### Enable managed identity -To allow your Azure AI Search and Storage Account to recognize your Azure OpenAI service via Microsoft Entra ID authentication, you need to assign a managed identity for your Azure OpenAI service. The easiest way is to toggle on system assigned managed identity on Azure portal. +To allow your Azure AI Search and Storage Account to recognize your Azure OpenAI Service via Microsoft Entra ID authentication, you need to assign a managed identity for your Azure OpenAI Service. The easiest way is to toggle on system assigned managed identity on Azure portal. :::image type="content" source="../media/use-your-data/openai-managed-identity.png" alt-text="A screenshot showing the system assigned managed identity option in the Azure portal." lightbox="../media/use-your-data/openai-managed-identity.png"::: To set the managed identities via the management API, see [the management API reference documentation](/rest/api/aiservices/accountmanagement/accounts/update#identity). This step can be skipped only if you have a [shared private link](#create-shared You can disable public network access of your Azure OpenAI resource in the Azure portal. -To allow access to your Azure OpenAI service from your client machines, like using Azure OpenAI Studio, you need to create [private endpoint connections](/azure/ai-services/cognitive-services-virtual-networks?tabs=portal#use-private-endpoints) that connect to your Azure OpenAI resource. +To allow access to your Azure OpenAI Service from your client machines, like using Azure OpenAI Studio, you need to create [private endpoint connections](/azure/ai-services/cognitive-services-virtual-networks?tabs=portal#use-private-endpoints) that connect to your Azure OpenAI resource. 
## Configure Azure AI Search To enable the developers to use these resources to build applications, the admin ## Configure gateway and client -To access the Azure OpenAI service from your on-premises client machines, one of the approaches is to configure Azure VPN Gateway and Azure VPN Client. +To access the Azure OpenAI Service from your on-premises client machines, one of the approaches is to configure Azure VPN Gateway and Azure VPN Client. Follow [this guideline](/azure/vpn-gateway/tutorial-create-gateway-portal#VNetGateway) to create virtual network gateway for your virtual network. |
ai-services | Speech Services Quotas And Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md | The following quotas are adjustable for Standard (S0) resources. The Free (F0) r - Text to speech [maximum number of transactions per time period](#text-to-speech-quotas-and-limits-per-resource) for prebuilt neural voices and custom neural voices - Speech translation [concurrent request limit](#real-time-speech-to-text-and-speech-translation) -Before requesting a quota increase (where applicable), ensure that it's necessary. Speech service uses autoscaling technologies to bring the required computational resources in on-demand mode. At the same time, Speech service tries to keep your costs low by not maintaining an excessive amount of hardware capacity. +Before requesting a quota increase (where applicable), check your current TPS (transactions per second) and ensure that it's necessary to increase the quota. Speech service uses autoscaling technologies to bring the required computational resources in on-demand mode. At the same time, Speech service tries to keep your costs low by not maintaining an excessive amount of hardware capacity. -Let's look at an example. Suppose that your application receives response code 429, which indicates that there are too many requests. Your application receives this response even though your workload is within the limits defined by the [Quotas and limits reference](#quotas-and-limits-reference). The most likely explanation is that Speech service is scaling up to your demand and didn't reach the required scale yet. Therefore the service doesn't immediately have enough resources to serve the request. In most cases, this throttled state is transient. +Let's look at an example. Suppose that your application receives response code 429, which indicates that there are too many requests. 
Your application receives this response even though your workload is within the limits defined by the [Quotas and limits reference](#quotas-and-limits-reference). The most likely explanation is that Speech service is scaling up to your demand and didn't reach the required scale yet. Therefore the service doesn't immediately have enough resources to serve the request. In such cases, increasing the quota won't help. In most cases, the Speech service will scale up soon, and the issue causing response code 429 will be resolved. ### General best practices to mitigate throttling during autoscaling |
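Because the 429 throttling described above is usually transient while the service scales up, retrying with exponential backoff typically resolves it without a quota increase. A sketch under stated assumptions: the request function is a stand-in returning an HTTP status code, not a real Speech SDK call.

```python
import time

def call_with_backoff(request_fn, max_attempts: int = 5, base_delay: float = 0.01):
    """Retry a request that can return HTTP 429 while the service scales up.

    request_fn is a stand-in that returns an int status code; a 429
    triggers exponential backoff before the next attempt.
    """
    for attempt in range(max_attempts):
        status = request_fn()
        if status != 429:
            return status
        time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
    raise RuntimeError("Still throttled after retries; review your quota.")

# Simulated service: throttled twice while scaling, then succeeds.
responses = iter([429, 429, 200])
print(call_with_backoff(lambda: next(responses)))  # 200
```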
ai-services | Speech Synthesis Markup Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-voice.md | Any audio included in the SSML document must meet these requirements: * The audio must not contain any customer-specific or other sensitive information. > [!NOTE]-> The `audio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text to speech, use the [batch synthesis API](batch-synthesis.md) (Preview) instead. +> The `audio` element is not supported by the [Long Audio API](migrate-to-batch-synthesis.md#text-inputs). For long-form text to speech, use the [batch synthesis API](batch-synthesis.md) instead. The following table describes the usage of the `audio` element's attributes: |
ai-services | Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech.md | When you use the personal voice feature, you're billed for both profile storage When using the text-to-speech avatar feature, charges will be incurred based on the length of video output and will be billed per second. However, for the real-time avatar, charges are based on the time when the avatar is active, regardless of whether it is speaking or remaining silent, and will also be billed per second. To optimize costs for real-time avatar usage, refer to the tips provided in the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser/avatar#chat-sample) (search "Use Local Video for Idle"). Avatar hosting is billed per second per endpoint. You can suspend your endpoint to save costs. If you want to suspend your endpoint, you can delete it directly. To use it again, simply redeploy the endpoint. +## Monitor Azure text to speech metrics ++Monitoring key metrics associated with text to speech services is crucial for managing resource usage and controlling costs. This section will guide you on how to find usage information in the Azure portal and provide detailed definitions of the key metrics. For more details on Azure Monitor metrics, refer to [Azure Monitor Metrics overview](/azure/azure-monitor/essentials/data-platform-metrics). ++### How to find usage information in the Azure portal ++To effectively manage your Azure resources, it's essential to access and review usage information regularly. Here's how to find the usage information: ++1. Go to the [Azure portal](https://ms.portal.azure.com/) and sign in with your Azure account. ++1. Navigate to **Resources** and select the resource you wish to monitor. ++1. Select **Metrics** under **Monitoring** from the left-hand menu. 
++ :::image type="content" source="media/text-to-speech/monitoring-metrics.png" alt-text="Screenshot of selecting metrics option under monitoring."::: ++1. Customize metric views. ++ You can filter data by resource type, metric type, time range, and other parameters to create custom views that align with your monitoring needs. Additionally, you can save the metric view to dashboards by selecting **Save to dashboard** for easy access to frequently used metrics. ++1. Set up alerts. ++ To manage usage more effectively, set up alerts by navigating to the **Alerts** tab under **Monitoring** from the left-hand menu. Alerts can notify you when your usage reaches specific thresholds, helping to prevent unexpected costs. ++### Definition of metrics ++Below is a table summarizing the key metrics for Azure text to speech services. ++| **Metric name** | **Description** | +|-|--| +| **Synthesized Characters** | Tracks the number of characters converted into speech, including prebuilt neural voice and custom neural voice. For details on billable characters, see [Billable characters](#billable-characters). | +| **Video Seconds Synthesized** | Measures the total duration of video synthesized, including batch avatar synthesis, real-time avatar synthesis, and custom avatar synthesis. | +| **Avatar Model Hosting Seconds** | Tracks the total time in seconds that your custom avatar model is hosted. | +| **Voice Model Hosting Hours** | Tracks the total time in hours that your custom neural voice model is hosted. | +| **Voice Model Training Minutes** | Measures the total time in minutes for training your custom neural voice model. | + ## Reference docs * [Speech SDK](speech-sdk.md) |
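As a rough illustration of how the metrics in the table above map to billing, this sketch multiplies metric totals by unit rates. All rates here are made-up placeholders; actual prices are on the Azure pricing page.

```python
# Placeholder unit rates only; real prices are on the Azure pricing page.
RATES = {
    "synthesized_characters": 16.0 / 1_000_000,  # hypothetical $ per character
    "video_seconds_synthesized": 0.01,           # hypothetical $ per second of avatar video
    "voice_model_hosting_hours": 0.50,           # hypothetical $ per hosted hour
}

def estimate_cost(usage: dict) -> float:
    """Sum each metric total multiplied by its (hypothetical) unit rate."""
    return sum(RATES[metric] * quantity for metric, quantity in usage.items())

print(round(estimate_cost({
    "synthesized_characters": 2_000_000,
    "voice_model_hosting_hours": 24,
}), 2))  # 44.0 under the placeholder rates (32.0 + 12.0)
```

Pairing an estimate like this with the portal alerts described above helps catch runaway usage before it becomes an unexpected bill.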
ai-services | Get Documents Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-documents-status.md | Request headers are: |Headers|Description|Condition| | | ||-|**Ocp-Apim-Subscription-Key**|Your Translator service API key from the Azure portal.|Required| -|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |• ***Required*** when using a regional (geographic) resource like **West US**.</br>&bullet.| -|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|• **Required**| +|**Ocp-Apim-Subscription-Key**|Your Translator service API key from the Azure portal.|***Required***| +|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |***Required*** when using a regional (geographic) resource like **West US**| +|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.| ***Required***| ## Response status codes |
ai-services | Get Supported Glossary Formats | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-supported-glossary-formats.md | Request headers are: |Headers|Description|Condition| | | ||-|**Ocp-Apim-Subscription-Key**|Your Translator service API key from the Azure portal.|Required| -|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |• ***Required*** when using a regional (geographic) resource like **West US**.</br>&bullet.| -|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|• **Required**| +|**Ocp-Apim-Subscription-Key**|Your Translator service API key from the Azure portal.|***Required***| +|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |***Required*** when using a regional (geographic) resource like **West US**.| +|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.| ***Required***| |
ai-services | Get Translation Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-translation-status.md | Request headers are: |Headers|Description|Condition| | | ||-|**Ocp-Apim-Subscription-Key**|Your Translator service API key from the Azure portal.|Required| -|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |• ***Required*** when using a regional (geographic) resource like **West US**.</br>&bullet.| -|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|• **Required**| +|**Ocp-Apim-Subscription-Key**|Your Translator service API key from the Azure portal.|***Required***| +|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |***Required*** when using a regional (geographic) resource like **West US**.| +|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|***Required***| ## Response status codes |
ai-services | Get Translations Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/get-translations-status.md | Request headers are: |Headers|Description|Condition| | | ||-|**Ocp-Apim-Subscription-Key**|Your Translator service API key from the Azure portal.|Required| -|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |• ***Required*** when using a regional (geographic) resource like **West US**.</br>&bullet.| -|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|• **Required**| +|**Ocp-Apim-Subscription-Key**|Your Translator service API key from the Azure portal.|***Required***| +|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |***Required*** when using a regional (geographic) resource like **West US**.| +|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|***Required***| ## Response status codes |
ai-services | Start Batch Translation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/document-translation/reference/start-batch-translation.md | Request headers are: |Headers|Description|Condition| | | ||-|**Ocp-Apim-Subscription-Key**|Your Translator service API key from the Azure portal.|Required| -|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |• ***Required*** when using a regional (geographic) resource like **West US**.</br>&bullet.| -|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|• **Required**| +|**Ocp-Apim-Subscription-Key**|Your Translator service API key from the Azure portal.|***Required***| +|**Ocp-Apim-Subscription-Region**|The region where your resource was created. |***Required*** when using a regional (geographic) resource like **West US**.| +|**Content-Type**|The content type of the payload. The accepted value is **application/json** or **charset=UTF-8**.|***Required***| ## BatchRequest (body) |
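The header tables in the Document Translation rows above all describe the same three request headers. A minimal sketch assembling them for a request; the key and region values are placeholders, and the helper name is illustrative.

```python
def build_translator_headers(api_key, region=None):
    """Assemble the Document Translation request headers from the tables above.

    Ocp-Apim-Subscription-Region is required only when using a regional
    (geographic) resource like West US.
    """
    headers = {
        "Ocp-Apim-Subscription-Key": api_key,
        "Content-Type": "application/json",
    }
    if region:
        headers["Ocp-Apim-Subscription-Region"] = region
    return headers

print(build_translator_headers("<your-key>", "westus"))
```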
ai-studio | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/ai-services/get-started.md | Azure AI Studio supports the following AI | Service | Description | | | |-| ![Azure OpenAI Service icon](~/reusable-content/ce-skilling/azure/medi). | +| ![Azure OpenAI Service icon](~/reusable-content/ce-skilling/azure/medi). | | ![Content Safety icon](~/reusable-content/ce-skilling/azure/media/ai-services/content-safety.svg) [Content Safety](../../ai-services/content-safety/index.yml) | An AI service that detects unwanted contents.<br/><br/>Go to **Home** > **AI Services** > **Content Safety**. | | ![Document Intelligence icon](~/reusable-content/ce-skilling/azure/media/ai-services/document-intelligence.svg) [Document Intelligence](../../ai-services/document-intelligence/index.yml) | Turn documents into intelligent data-driven solutions.<br/><br/>Go to **Home** > **AI Services** > **Vision + Document**. | | ![Face icon](~/reusable-content/ce-skilling/azure/medi) | Detect and identify people and emotions in images.<br/><br/>Go to **Home** > **AI Services** > **Vision + Document**. | |
ai-studio | Evaluation Metrics Built In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/evaluation-metrics-built-in.md | |
ai-studio | Deploy Models Llama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md | The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a The following models are available: * [Meta-Llama-3.1-405B-Instruct](https://aka.ms/azureai/landing/Meta-Llama-3.1-405B-Instruct)-* [Meta-Llama-3.1-70B-Instruct](https://aka.ms/azureai/landing/Meta-Llama-3.1-70B-Instruct) -* [Meta-Llama-3.1-8B-Instruct](https://aka.ms/azureai/landing/Meta-Llama-3.1-8B-Instruct) +* [Meta-Llama-3.1-70B-Instruct](https://ai.azure.com/explore/models/Meta-Llama-3.1-70B-Instruct/version/1/registry/azureml-meta) +* [Meta-Llama-3.1-8B-Instruct](https://ai.azure.com/explore/models/Meta-Llama-3.1-8B-Instruct/version/1/registry/azureml-meta) # [Meta Llama-3](#tab/meta-llama-3) Meta developed and released the Meta Llama 3 family of large language models (LL The following models are available: -* [Meta-Llama-3-70B-Instruct](https://aka.ms/azureai/landing/Meta-Llama-3-70B-Instruct) -* [Meta-Llama-3-8B-Instruct](https://aka.ms/azureai/landing/Meta-Llama-3-8B-Instruct) +* [Meta-Llama-3-70B-Instruct](https://ai.azure.com/explore/models/Meta-Llama-3-70B-Instruct/version/6/registry/azureml-meta) +* [Meta-Llama-3-8B-Instruct](https://ai.azure.com/explore/models/Meta-Llama-3-8B-Instruct/version/6/registry/azureml-meta) # [Meta Llama-2](#tab/meta-llama-2) Meta has developed and publicly released the Llama 2 family of large language mo The following models are available: -* [Llama-2-70b-chat](https://aka.ms/azureai/landing/Llama-2-70b-chat) -* [Llama-2-13b-chat](https://aka.ms/azureai/landing/Llama-2-13b-chat) -* [Llama-2-7b-chat](https://aka.ms/azureai/landing/Llama-2-7b-chat) +* [Llama-2-70b-chat](https://ai.azure.com/explore/models/Llama-2-70b-chat/version/20/registry/azureml-meta) +* [Llama-2-13b-chat](https://ai.azure.com/explore/models/Llama-2-13b-chat/version/20/registry/azureml-meta) +* 
[Llama-2-7b-chat](https://ai.azure.com/explore/models/Llama-2-7b-chat/version/24/registry/azureml-meta) The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a The following models are available: * [Meta-Llama-3.1-405B-Instruct](https://aka.ms/azureai/landing/Meta-Llama-3.1-405B-Instruct)-* [Meta-Llama-3.1-70B-Instruct](https://aka.ms/azureai/landing/Meta-Llama-3.1-70B-Instruct) -* [Meta-Llama-3.1-8B-Instruct](https://aka.ms/azureai/landing/Meta-Llama-3.1-8B-Instruct) +* [Meta-Llama-3.1-70B-Instruct](https://ai.azure.com/explore/models/Meta-Llama-3.1-70B-Instruct/version/1/registry/azureml-meta) +* [Meta-Llama-3.1-8B-Instruct](https://ai.azure.com/explore/models/Meta-Llama-3.1-8B-Instruct/version/1/registry/azureml-meta) # [Meta Llama-3](#tab/meta-llama-3) Meta developed and released the Meta Llama 3 family of large language models (LL The following models are available: -* [Meta-Llama-3-70B-Instruct](https://aka.ms/azureai/landing/Meta-Llama-3-70B-Instruct) -* [Meta-Llama-3-8B-Instruct](https://aka.ms/azureai/landing/Meta-Llama-3-8B-Instruct) +* [Meta-Llama-3-70B-Instruct](https://ai.azure.com/explore/models/Meta-Llama-3-70B-Instruct/version/6/registry/azureml-meta) +* [Meta-Llama-3-8B-Instruct](https://ai.azure.com/explore/models/Meta-Llama-3-8B-Instruct/version/6/registry/azureml-meta) # [Meta Llama-2](#tab/meta-llama-2) Meta has developed and publicly released the Llama 2 family of large language mo The following models are available: -* [Llama-2-70b-chat](https://aka.ms/azureai/landing/Llama-2-70b-chat) -* [Llama-2-13b-chat](https://aka.ms/azureai/landing/Llama-2-13b-chat) -* [Llama-2-7b-chat](https://aka.ms/azureai/landing/Llama-2-7b-chat) +* [Llama-2-70b-chat](https://ai.azure.com/explore/models/Llama-2-70b-chat/version/20/registry/azureml-meta) +* [Llama-2-13b-chat](https://ai.azure.com/explore/models/Llama-2-13b-chat/version/20/registry/azureml-meta) +* 
[Llama-2-7b-chat](https://ai.azure.com/explore/models/Llama-2-7b-chat/version/24/registry/azureml-meta)

For more examples of how to use Meta Llama, see the following examples and tutorials:

| Description | Language | Sample |
|-------------|----------|--------|
| CURL request | Bash | [Link](https://aka.ms/meta-llama-3.1-405B-instruct-webrequests) |
-| Azure AI Inference package for JavaScript | JavaScript | [Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples) |
+| Azure AI Inference package for JavaScript | JavaScript | [Link](https://github.com/Azure/azureml-examples/blob/main/sdk/typescript/README.md) |
| Azure AI Inference package for Python | Python | [Link](https://aka.ms/azsdk/azure-ai-inference/python/samples) |
| Python web requests | Python | [Link](https://aka.ms/meta-llama-3.1-405B-instruct-webrequests) |
| OpenAI SDK (experimental) | Python | [Link](https://aka.ms/meta-llama-3.1-405B-instruct-openai) |

It is a good practice to start with a low number of instances and scale up as needed.

* [Deploy models as serverless APIs](deploy-models-serverless.md)
* [Consume serverless API endpoints from a different Azure AI Studio project or hub](deploy-models-serverless-connect.md)
* [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
-* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace)
+* [Plan and manage costs (marketplace)](costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace) |
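The "Python web requests" sample linked above amounts to posting a chat-completions payload to the serverless endpoint. A minimal sketch of building such a request with only the standard library; the endpoint URL, API key, and `Bearer` auth scheme are placeholders/assumptions here, not taken from the linked sample, and the request is deliberately not sent:

```python
import json
import urllib.request

# Placeholders: substitute your own serverless endpoint URL and API key.
endpoint = "https://<your-deployment>.<region>.models.ai.azure.com/chat/completions"
api_key = "<your-api-key>"

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is so great about Meta Llama 3.1?"},
    ],
    "max_tokens": 256,
}

# Build the HTTP request; urllib.request.urlopen(request) would send it,
# but it's omitted so this sketch runs without a live endpoint.
request = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"},
)
print(json.loads(request.data)["messages"][0]["role"])  # prints "system"
```

The serverless examples table above covers equivalent flows for the Azure AI Inference SDKs and the OpenAI SDK; the raw-request shape is what all of them produce under the hood.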
ai-studio | Deploy Models Serverless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-serverless.md | In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.

1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region.

   :::image type="content" source="../media/deploy-monitor/serverless/deployment-name.png" alt-text="A screenshot showing how to specify the name of the deployment you want to create." lightbox="../media/deploy-monitor/serverless/deployment-name.png":::

+ > [!TIP]
+ > The **Content filter (preview)** option is enabled by default. Leave the default setting for the service to detect harmful content such as hate, self-harm, sexual, and violent content. For more information about content filtering, see [Content filtering in Azure AI Studio](../concepts/content-filtering.md).

1. Select **Deploy**. Wait until the deployment is ready and you're redirected to the Deployments page. |
ai-studio | Prompt Flow Tools Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/prompt-flow-tools-overview.md | To discover more custom tools developed by the open-source community such as [Az

## Next steps

- [Create a flow](../flow-develop.md)
-- [Build your own copilot using prompt flow](../../tutorials/deploy-copilot-ai-studio.md)
+- [Get started building a chat app using the prompt flow SDK](../../quickstarts/get-started-code.md) |
ai-studio | Get Started Playground | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/get-started-playground.md | In the [Azure AI Studio](https://ai.azure.com) playground you can observe how yo

[!INCLUDE [Chat without your data](../includes/chat-without-data.md)]

-Next, you can add your data to the model to help it answer questions about your products. Try the [Deploy an enterprise chat web app](../tutorials/deploy-chat-web-app.md) and [Build and deploy a question and answer copilot with prompt flow in Azure AI Studio](../tutorials/deploy-copilot-ai-studio.md) tutorials to learn more.
+Next, you can add your data to the model to help it answer questions about your products. Try the [Deploy an enterprise chat web app](../tutorials/deploy-chat-web-app.md) tutorial to learn more.

## Related content

- [Build a custom chat app in Python using the prompt flow SDK](./get-started-code.md).
- [Deploy an enterprise chat web app](../tutorials/deploy-chat-web-app.md).
-- [Build and deploy a question and answer copilot with prompt flow in Azure AI Studio](../tutorials/deploy-copilot-ai-studio.md). |
ai-studio | Deploy Chat Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md | The steps in this tutorial are:

- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. Specifically, the `product_info_11.md` file contains product information about the TrailWalker hiking shoes that's relevant for this tutorial example. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine.
+- You need the **Microsoft.Web** resource provider registered in the selected subscription to deploy to a web app.

## Add your data and try the chat model again

In the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) (that's a prerequisite for this tutorial), you can observe how your model responds without your data. Now you add your data to the model to help it answer questions about your products.

Publishing creates an Azure App Service in your subscription. It might incur costs.

To deploy the web app:

+> [!NOTE]
+> You need the **Microsoft.Web** resource provider registered in the selected subscription to deploy to a web app.

1. Complete the steps in the previous section to [add your data](#add-your-data-and-try-the-chat-model-again) to the playground.
> [!NOTE]
> If you delete the Cosmos DB resource but keep the chat history option enabled on

## Related content

-- [Build and deploy a question and answer copilot with prompt flow in Azure AI Studio](./deploy-copilot-ai-studio.md)
+- [Get started building a chat app using the prompt flow SDK](../quickstarts/get-started-code.md)
- [Build your own copilot with the prompt flow SDK](./copilot-sdk-build-rag.md) |
ai-studio | Deploy Copilot Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md |

Title: Build and deploy a question and answer copilot with prompt flow in Azure AI Studio
description: Use this article to build and deploy a question and answer copilot with prompt flow in Azure AI Studio
build-2024
Previously updated: 5/21/2024

# Tutorial: Build and deploy a question and answer copilot with prompt flow in Azure AI Studio

In this [Azure AI Studio](https://ai.azure.com) tutorial, you use generative AI and prompt flow to build, configure, and deploy a copilot for your retail company called Contoso. Your retail company specializes in outdoor camping gear and clothing.

The copilot should answer questions about your products and services. It should also answer questions about your customers. For example, the copilot can answer questions such as "How much do the TrailWalker hiking shoes cost?" and "How many TrailWalker hiking shoes did Daniel Wilson buy?".

The steps in this tutorial are:

1. Add your data to the chat playground.
1. Create a prompt flow from the playground.
1. Customize prompt flow with multiple data sources.
1. Evaluate the flow using a question and answer evaluation dataset.
1. Deploy the flow for consumption.

## Prerequisites

- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
- An [AI Studio hub](../how-to/create-azure-ai-resource.md), [project](../how-to/create-projects.md), and [deployed Azure OpenAI](../how-to/deploy-models-openai.md) chat model. Complete the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) to create these resources if you haven't already.
- An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product and customer data.
- You need a local copy of product data.
The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. Specifically, the `product_info_11.md` file contains product information about the TrailWalker hiking shoes that's relevant for this tutorial example. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine.

## Add your data and try the chat model again

In the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) (that's a prerequisite for this tutorial), you can observe how your model responds without your data. Now you add your data to the model to help it answer questions about your products.

## Create a prompt flow from the playground

Now you might ask "How can I further customize this copilot?" You might want to add multiple data sources, compare different prompts, or compare the performance of multiple models. A [prompt flow](../how-to/prompt-flow.md) serves as an executable workflow that streamlines the development of your LLM-based AI application. It provides a comprehensive framework for managing data flow and processing within your application. You use prompt flow to optimize the messages that are sent to the copilot's chat model.

In this section, you learn how to transition to prompt flow from the playground. You export the playground chat environment including connections to the data that you added. Later in this tutorial, you [evaluate the flow](#evaluate-the-flow-using-a-question-and-answer-evaluation-dataset) and then [deploy the flow](#deploy-the-flow) for [consumption](#use-the-deployed-flow).

> [!NOTE]
> The changes made in prompt flow aren't applied backwards to update the playground environment.
You can create a prompt flow from the playground by following these steps:

1. Go to your project in [AI Studio](https://ai.azure.com).
1. Select **Playgrounds** > **Chat** from the left pane.
1. Since we're using our own data, you need to select **Add your data**. You should already have an index named *product-info* that you created previously in the chat playground. Select it from the **Select available project index** dropdown. Otherwise, [first create an index with your product data](#add-your-data-and-try-the-chat-model-again) and then return to this step.
1. Select **Prompt flow** from the menu above the chat session pane.
1. Enter a folder name for your prompt flow. Then select **Open**. AI Studio exports the playground chat environment to prompt flow. The export includes the connections to the data that you added.

   :::image type="content" source="../media/tutorials/chat/prompt-flow-from-playground.png" alt-text="Screenshot of the open in prompt flow dialog." lightbox="../media/tutorials/chat/prompt-flow-from-playground.png":::

Within a flow, nodes take center stage, representing specific tools with unique capabilities. These nodes handle data processing, task execution, and algorithmic operations, with inputs and outputs. By connecting nodes, you establish a seamless chain of operations that guides the flow of data through your application. For more information, see [prompt flow tools](../how-to/prompt-flow.md#prompt-flow-tools).

To facilitate node configuration and fine-tuning, a visual representation of the workflow structure is provided as a DAG (directed acyclic graph). This graph showcases the connectivity and dependencies between nodes, providing a clear overview of the entire workflow. The nodes in the graph shown here are representative of the playground chat experience that you exported to prompt flow.
:::image type="content" source="../media/tutorials/chat/prompt-flow-overview-graph.png" alt-text="Screenshot of the default graph exported from the playground to prompt flow." lightbox="../media/tutorials/chat/prompt-flow-overview-graph.png":::

In prompt flow, you should also see:

- **Save** button: You can save your prompt flow at any time by selecting **Save** from the top menu. Be sure to save your prompt flow periodically as you make changes in this tutorial.
- **Start compute session** button: You need to start a compute session to run your prompt flow. You can start the session later in the tutorial. You incur costs for compute instances while they are running. For more information, see [how to create a compute session](../how-to/create-manage-compute-session.md).

You can return to the prompt flow anytime by selecting **Prompt flow** from **Tools** in the left menu. Then select the prompt flow folder that you created previously.

## Customize prompt flow with multiple data sources

Previously in the [AI Studio](https://ai.azure.com) chat playground, you [added your data](#add-your-data-and-try-the-chat-model-again) to create one search index that contained product data for the Contoso copilot. So far, users can only inquire about products with questions such as "How much do the TrailWalker hiking shoes cost?". But they can't get answers to questions such as "How many TrailWalker hiking shoes did Daniel Wilson buy?" To enable this scenario, we add another index with customer information to the flow.

### Create the customer info index

To proceed, you need a local copy of example customer information. For more information and links to example data, see the [prerequisites](#prerequisites).

Follow these instructions on how to create a new index. You'll return to your prompt flow later in this tutorial to add the customer info to the flow. You can open a new tab in your browser to follow these instructions and then return to your prompt flow.

1.
Go to your project in [AI Studio](https://ai.azure.com).
1. Select **Index** from the left menu. Notice that you already have an index named *product-info* that you created previously in the chat playground.

   :::image type="content" source="../media/tutorials/chat/add-index-new.png" alt-text="Screenshot of the indexes page with the button to create a new index." lightbox="../media/tutorials/chat/add-index-new.png":::

1. Select **+ New index**. You're taken to the **Create an index** wizard.
1. On the **Source data** page, select **Upload files** from the **Data source** dropdown. Then select **Upload** > **Upload files** to browse your local files.
1. Select the customer info files that you downloaded or created previously. See the [prerequisites](#prerequisites). Then select **Next**.

   :::image type="content" source="../media/tutorials/chat/add-index-dataset-upload-folder.png" alt-text="Screenshot of the customer data source selection options." lightbox="../media/tutorials/chat/add-index-dataset-upload-folder.png":::

1. Select the same [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) (*contosooutdooraisearch*) that you used for your product info index. Then select **Next**.
1. Enter **customer-info** for the index name.

   :::image type="content" source="../media/tutorials/chat/add-index-settings.png" alt-text="Screenshot of the Azure AI Search service and index name." lightbox="../media/tutorials/chat/add-index-settings.png":::

1. Select a virtual machine to run indexing jobs. The default option is **Auto select**. Then select **Next**.
1. On the **Search settings** page under **Vector settings**, deselect the **Add vector search to this search resource** checkbox. This setting helps determine how the model responds to requests. Then select **Next**.

   > [!NOTE]
   > If you add vector search, more options would be available here for an additional cost.

1.
Review the details you entered, and select **Create**.

   :::image type="content" source="../media/tutorials/chat/add-index-review.png" alt-text="Screenshot of the review and finish index creation page." lightbox="../media/tutorials/chat/add-index-review.png":::

   > [!NOTE]
   > You use the *customer-info* index and the *contosooutdooraisearch* connection to your Azure AI Search service in prompt flow later in this tutorial. If the names you enter differ from what's specified here, make sure to use the names you entered in the rest of the tutorial.

1. You're taken to the index details page where you can see the status of your index creation.

   :::image type="content" source="../media/tutorials/chat/add-index-created-details.png" alt-text="Screenshot of the customer info index details." lightbox="../media/tutorials/chat/add-index-created-details.png":::

For more information on how to create an index, see [Create an index](../how-to/index-add.md).

### Create a compute session that's needed for prompt flow

After you're done creating your index, return to your prompt flow and start the compute session. Prompt flow requires a compute session to run.

1. Go to your project.
1. Select **Prompt flow** from **Tools** in the left menu. Then select the prompt flow folder that you created previously.
1. Select **Start compute session** from the top menu.

To create a compute instance and a compute session, you can also follow the steps in [how to create a compute session](../how-to/create-manage-compute-session.md).

To complete the rest of the tutorial, make sure that your compute session is running.

> [!IMPORTANT]
> You're charged for compute instances while they are running. To avoid incurring unnecessary Azure costs, pause the compute instance when you're not actively working in prompt flow. For more information, see [how to start and stop compute](../how-to/create-manage-compute.md#start-or-stop-a-compute-instance).
### Add customer information to the flow

After you're done creating your index, return to your prompt flow and follow these steps to add the customer info to the flow:

1. Make sure you have a compute session running. If you don't have one, see [create a compute session](#create-a-compute-session-thats-needed-for-prompt-flow) in the previous section.
1. Select **+ More tools** from the top menu and then select **Index Lookup** from the list of tools.

   :::image type="content" source="../media/tutorials/chat/add-tool-index-lookup.png" alt-text="Screenshot of selecting the index lookup tool in prompt flow." lightbox="../media/tutorials/chat/add-tool-index-lookup.png":::

1. Name the new node **queryCustomerIndex** and select **Add**.
1. Select the **mlindex_content** textbox in the **queryCustomerIndex** node.

   :::image type="content" source="../media/tutorials/chat/index-lookup-mlindex-content.png" alt-text="Screenshot of the mlindex_content textbox in the index lookup node." lightbox="../media/tutorials/chat/index-lookup-mlindex-content.png":::

   The **Generate** dialog opens. You use this dialog to configure the **queryCustomerIndex** node to connect to your *customer-info* index.

1. For the **index_type** value, select **Azure AI Search**.
1. Select or enter the following values:

   | Name | Value |
   |------|-------|
   | **acs_index_connection** | The name of your Azure AI Search service connection (such as *contosooutdooraisearch*) |
   | **acs_index_name** | *customer-info* |
   | **acs_content_field** | *content* |
   | **acs_metadata_field** | *meta_json_string* |
   | **semantic_configuration** | *azuremldefault* |
   | **embedding_type** | *None* |

1. Select **Save** to save your settings.
1.
Select or enter the following values for the **queryCustomerIndex** node:

   | Name | Value |
   |------|-------|
   | **queries** | *${extractSearchIntent.output}* |
   | **query_type** | *Keyword* |
   | **topK** | *5* |

   You can see the **queryCustomerIndex** node is connected to the **extractSearchIntent** node in the graph.

   :::image type="content" source="../media/tutorials/chat/connect-to-search-intent.png" alt-text="Screenshot of the prompt flow node for retrieving product info." lightbox="../media/tutorials/chat/connect-to-search-intent.png":::

1. Select **Save** from the top menu to save your changes. Remember to save your prompt flow periodically as you make changes.

### Connect the customer info to the flow

In the next section, you aggregate the product and customer info to output it in a format that the large language model can use. But first, you need to connect the customer info to the flow.

1. Replace all instances of **querySearchResource** with **queryProductIndex** in the graph. We're renaming the node to better reflect that it retrieves product info and contrasts with the **queryCustomerIndex** node that you added to the flow.
1. Rename and replace all instances of **chunkDocuments** with **chunkProductDocuments** in the graph.
1. Rename and replace all instances of **selectChunks** with **selectProductChunks** in the graph.
1. Copy and paste the **chunkProductDocuments** and **selectProductChunks** nodes to create similar nodes for the customer info. Rename the new nodes **chunkCustomerDocuments** and **selectCustomerChunks** respectively.
1. Within the **chunkCustomerDocuments** node, replace the `${queryProductIndex.output}` input with `${queryCustomerIndex.output}`.
1. Within the **selectCustomerChunks** node, replace the `${chunkProductDocuments.output}` input with `${chunkCustomerDocuments.output}`.
1. Select **Save** from the top menu to save your changes.
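Both **selectProductChunks** and **selectCustomerChunks** run the same `filterChunks.py` script, configured with `min_score` and `top_k` inputs. The tutorial doesn't show that script's source, so the following is a hypothetical sketch of what such a filter step plausibly does: drop chunks whose retrieval score is below the threshold, then keep the best `top_k` by score.

```python
def filter_chunks(results, min_score=0.3, top_k=5):
    """Hypothetical chunk filter: threshold by score, then keep the top_k best."""
    kept = [chunk for chunk in results if chunk["score"] >= min_score]
    kept.sort(key=lambda chunk: chunk["score"], reverse=True)
    return kept[:top_k]

# Illustrative retrieval results (made-up scores and content).
chunks = [
    {"content": "TrailWalker price info", "score": 0.9},
    {"content": "unrelated text", "score": 0.1},
    {"content": "TrailWalker care info", "score": 0.5},
]
print([c["score"] for c in filter_chunks(chunks)])  # [0.9, 0.5]
```

The defaults mirror the `min_score: 0.3` and `top_k: 5` values configured on the select nodes.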
By now, the `flow.dag.yaml` file should include nodes (among others) that look similar to the following example:

```yaml
- name: chunkProductDocuments
  type: python
  source:
    type: code
    path: chunkProductDocuments.py
  inputs:
    data_source: Azure AI Search
    max_tokens: 1050
    queries: ${extractSearchIntent.output}
    query_type: Keyword
    results: ${queryProductIndex.output}
    top_k: 5
  use_variants: false
- name: selectProductChunks
  type: python
  source:
    type: code
    path: filterChunks.py
  inputs:
    min_score: 0.3
    results: ${chunkProductDocuments.output}
    top_k: 5
  use_variants: false
- name: chunkCustomerDocuments
  type: python
  source:
    type: code
    path: chunkCustomerDocuments.py
  inputs:
    data_source: Azure AI Search
    max_tokens: 1050
    queries: ${extractSearchIntent.output}
    query_type: Keyword
    results: ${queryCustomerIndex.output}
    top_k: 5
  use_variants: false
- name: selectCustomerChunks
  type: python
  source:
    type: code
    path: filterChunks.py
  inputs:
    min_score: 0.3
    results: ${chunkCustomerDocuments.output}
    top_k: 5
  use_variants: false
```

### Aggregate product and customer info

At this point, the prompt flow only uses the product information.

- **extractSearchIntent** extracts the search intent from the user's question.
- **queryProductIndex** retrieves the product info from the *product-info* index.
- The **LLM** tool (for large language models) receives a formatted reply via the **chunkProductDocuments** > **selectProductChunks** > **formatGeneratedReplyInputs** nodes.

You need to connect and aggregate the product and customer info to output it in a format that the **LLM** tool can use. Follow these steps to aggregate the product and customer info:

1. Select **Python** from the list of tools.
1. Name the tool **aggregateChunks** and select **Add**.
1. Copy and paste the following Python code to replace all contents in the **aggregateChunks** code block.
```python
from promptflow import tool
from typing import List

@tool
def aggregate_chunks(input1: List, input2: List) -> List:
    interleaved_list = []
    for i in range(max(len(input1), len(input2))):
        if i < len(input1):
            interleaved_list.append(input1[i])
        if i < len(input2):
            interleaved_list.append(input2[i])
    return interleaved_list
```

1. Select the **Validate and parse input** button to validate the inputs for the **aggregateChunks** node. If the inputs are valid, prompt flow parses the inputs and creates the necessary variables for you to use in your code.

   :::image type="content" source="../media/tutorials/chat/aggregate-chunks-validate.png" alt-text="Screenshot of the prompt flow node for aggregating product and customer information." lightbox="../media/tutorials/chat/aggregate-chunks-validate.png":::

1. Edit the **aggregateChunks** node to connect the product and customer info. Set the **inputs** to the following values:

   | Name | Type | Value |
   |------|------|-------|
   | **input1** | list | *${selectProductChunks.output}* |
   | **input2** | list | *${selectCustomerChunks.output}* |

   :::image type="content" source="../media/tutorials/chat/aggregate-chunks-inputs.png" alt-text="Screenshot of the inputs to edit in the aggregate chunks node." lightbox="../media/tutorials/chat/aggregate-chunks-inputs.png":::

1. Select the **shouldGenerateReply** node from the graph. Select or enter `${aggregateChunks.output}` for the **chunks** input.
1. Select the **formatGenerateReplyInputs** node from the graph. Select or enter `${aggregateChunks.output}` for the **chunks** input.
1. Select the **outputs** node from the graph. Select or enter `${aggregateChunks.output}` for the **chunks** input.
1. Select **Save** from the top menu to save your changes. Remember to save your prompt flow periodically as you make changes.

Now you can see the **aggregateChunks** node in the graph.
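The interleaving in **aggregateChunks** alternates product and customer chunks so that neither source dominates the front of the combined list. A quick standalone check of that behavior (the `@tool` decorator and typing are dropped so it runs outside prompt flow; the chunk values are illustrative):

```python
def aggregate_chunks(input1, input2):
    # Same interleaving logic as the aggregateChunks node above.
    interleaved_list = []
    for i in range(max(len(input1), len(input2))):
        if i < len(input1):
            interleaved_list.append(input1[i])
        if i < len(input2):
            interleaved_list.append(input2[i])
    return interleaved_list

product_chunks = ["p1", "p2", "p3"]
customer_chunks = ["c1"]
print(aggregate_chunks(product_chunks, customer_chunks))  # ['p1', 'c1', 'p2', 'p3']
```

Once one list runs out, the remainder of the longer list is appended in order, so unbalanced retrieval results are still fully preserved.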
The node connects the product and customer info to output it in a format that the **LLM** tool can use.

### Chat in prompt flow with product and customer info

By now you have both the product and customer info in prompt flow. You can chat with the model in prompt flow and get answers to questions such as "How many TrailWalker hiking shoes did Daniel Wilson buy?" Before proceeding to a more formal evaluation, you can optionally chat with the model to see how it responds to your questions.

1. Continue from the previous section with the **outputs** node selected. Make sure that the **reply** output has the **Chat output** radio button selected. Otherwise, the full set of documents is returned in response to the question in chat.
1. Select **Chat** from the top menu in prompt flow to try chat.
1. Enter "How many TrailWalker hiking shoes did Daniel Wilson buy?" and then select the right arrow icon to send.

   > [!NOTE]
   > It might take a few seconds for the model to respond. You can expect the response time to be faster when you use a deployed flow.

1. The response is what you expect. The model uses the customer info to answer the question.

   :::image type="content" source="../media/tutorials/chat/chat-with-data-customer.png" alt-text="Screenshot of the assistant's reply with product and customer grounding data." lightbox="../media/tutorials/chat/chat-with-data-customer.png":::

## Evaluate the flow using a question and answer evaluation dataset

In [AI Studio](https://ai.azure.com), you want to evaluate the flow before you [deploy the flow](#deploy-the-flow) for [consumption](#use-the-deployed-flow).

In this section, you use the built-in evaluation to evaluate your flow with a question and answer evaluation dataset. The built-in evaluation uses AI-assisted metrics to evaluate your flow: groundedness, relevance, and retrieval score. For more information, see [built-in evaluation metrics](../concepts/evaluation-metrics-built-in.md).
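The evaluation dataset introduced next is a JSONL file: one JSON object per line, each carrying `question`, `truth`, and `chat_history` fields. A small sketch of writing and validating such a file, using two illustrative rows from the tutorial's dataset:

```python
import json

rows = [
    {"question": "What color is the CozyNights Sleeping Bag?", "truth": "Red", "chat_history": []},
    {"question": "Does TrailMaster Tent come with a warranty?", "truth": "2 years", "chat_history": []},
]

# JSONL: one compact JSON object per line, newline-delimited.
with open("qa-evaluation.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Validate: every line must parse as JSON and carry the required keys.
required = {"question", "truth", "chat_history"}
with open("qa-evaluation.jsonl") as f:
    parsed = [json.loads(line) for line in f]
assert all(required <= row.keys() for row in parsed)
print(len(parsed))  # 2
```

A check like this catches the most common JSONL mistakes (trailing commas, multi-line objects, missing fields) before the file is uploaded to the evaluation wizard.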
### Create an evaluation

You need a question and answer evaluation dataset that contains questions and answers that are relevant to your scenario. Create a new file locally named **qa-evaluation.jsonl**. Copy and paste the following questions and answers (`"truth"`) into the file.

```json
{"question": "What color is the CozyNights Sleeping Bag?", "truth": "Red", "chat_history": []}
{"question": "When did Daniel Wilson order the BaseCamp Folding Table?", "truth": "May 7th, 2023", "chat_history": []}
{"question": "How much does TrailWalker Hiking Shoes cost?", "truth": "$110", "chat_history": []}
{"question": "What kind of tent did Sarah Lee buy?", "truth": "SkyView 2 person tent", "chat_history": []}
{"question": "What is Melissa Davis's phone number?", "truth": "555-333-4444", "chat_history": []}
{"question": "What is the proper care for trailwalker hiking shoes?", "truth": "After each use, remove any dirt or debris by brushing or wiping the shoes with a damp cloth.", "chat_history": []}
{"question": "Does TrailMaster Tent come with a warranty?", "truth": "2 years", "chat_history": []}
{"question": "How much did David Kim spend on the TrailLite Daypack?", "truth": "$240", "chat_history": []}
{"question": "What items did Amanda Perez purchase?", "truth": "TrailMaster X4 Tent, TrekReady Hiking Boots (quantity 3), CozyNights Sleeping Bag, TrailBlaze Hiking Pants, RainGuard Hiking Jacket, and CompactCook Camping Stove", "chat_history": []}
{"question": "What is the Brand for TrekReady Hiking Boots", "truth": "TrekReady", "chat_history": []}
{"question": "How many items did Karen Williams buy?", "truth": "three items of the Summit Breeze Jacket", "chat_history": []}
{"question": "France is in Europe", "truth": "Sorry, I can only answer questions related to outdoor/camping gear and equipment", "chat_history": []}
```

Now that you have your evaluation dataset, you can evaluate your flow by following these steps:

1.
Select **Evaluate** > **Built-in evaluation** from the top menu in prompt flow.

   :::image type="content" source="../media/tutorials/chat/evaluate-built-in-evaluation.png" alt-text="Screenshot of the option to create a built-in evaluation from prompt flow." lightbox="../media/tutorials/chat/evaluate-built-in-evaluation.png":::

   You're taken to the **Create a new evaluation** wizard.

1. Enter a name for your evaluation and select a compute session.
1. Select **Question and answer without context** from the scenario options.
1. Select the flow to evaluate. In this example, select *Contoso outdoor flow* or whatever you named your flow. Then select **Next**.

   :::image type="content" source="../media/tutorials/chat/evaluate-basic-scenario.png" alt-text="Screenshot of selecting an evaluation scenario." lightbox="../media/tutorials/chat/evaluate-basic-scenario.png":::

1. Select **Add your dataset** on the **Configure test data** page.

   :::image type="content" source="../media/tutorials/chat/evaluate-add-dataset.png" alt-text="Screenshot of the option to use a new or existing dataset." lightbox="../media/tutorials/chat/evaluate-add-dataset.png":::

1. Select **Upload file**, browse files, and select the **qa-evaluation.jsonl** file that you created previously.

1. After the file is uploaded, you need to configure your data columns to match the required inputs for prompt flow to execute a batch run that generates output for evaluation. Enter or select the following values for each data set mapping for prompt flow.

   :::image type="content" source="../media/tutorials/chat/evaluate-map-data-source.png" alt-text="Screenshot of the prompt flow evaluation dataset mapping." lightbox="../media/tutorials/chat/evaluate-map-data-source.png":::

   | Name | Description | Type | Data source |
   |------|-------------|------|-------------|
   | **chat_history** | The chat history | list | *${data.chat_history}* |
   | **query** | The query | string | *${data.question}* |

1. Select **Next**.

1.
Select the metrics you want to use to evaluate your flow. In this example, select Coherence, Fluency, GPT similarity, and F1 score. --1. Select a connection and model to use for evaluation. In this example, select **gpt-35-turbo-16k**. Then select **Next**. -- :::image type="content" source="../media/tutorials/chat/evaluate-metrics.png" alt-text="Screenshot of selecting evaluation metrics." lightbox="../media/tutorials/chat/evaluate-metrics.png"::: -- > [!NOTE] - > Evaluation with AI-assisted metrics needs to call another GPT model to do the calculation. For best performance, use a model that supports at least 16k tokens, such as gpt-4-32k or gpt-35-turbo-16k. If you didn't previously deploy such a model, you can deploy another model by following the steps in [the AI Studio chat playground quickstart](../quickstarts/get-started-playground.md#deploy-a-chat-model). Then return to this step and select the model you deployed. --1. You need to configure your data columns to match the required inputs to generate evaluation metrics. Enter the following values to map the dataset to the evaluation properties: -- | Name | Description | Type | Data source | - |-|-|--|--| - | **question** | A query seeking specific information. | string | *${data.question}* | - | **answer** | The response to the question, generated by the model. | string | *${run.outputs.reply}* | - | **documents** | String with context from retrieved documents. | string | *${run.outputs.documents}* | --1. Select **Next**. --1. Review the evaluation details and then select **Submit**. You're taken to the **Metric evaluations** page. --### View the evaluation status and results --Now you can view the evaluation status and results by following these steps: --1. After you [create an evaluation](#create-an-evaluation), if you aren't there already, go to the **Evaluation** page. On the **Metric evaluations** page, you can see the evaluation status and the metrics that you selected. 
You might need to select **Refresh** after a couple of minutes to see the **Completed** status. -- :::image type="content" source="../media/tutorials/chat/evaluate-status-completed.png" alt-text="Screenshot of the metric evaluations page." lightbox="../media/tutorials/chat/evaluate-status-completed.png"::: --1. Stop your compute session in prompt flow. Go to your prompt flow and select **Compute session running** > **Stop compute session** from the top menu. -- :::image type="content" source="../media/tutorials/chat/compute-session-stop.png" alt-text="Screenshot of the button to stop a compute session in prompt flow." lightbox="../media/tutorials/chat/compute-session-stop.png"::: -- > [!TIP] - > Once the evaluation is in **Completed** status, you don't need a compute session to complete the rest of this tutorial. You can stop your compute instance to avoid incurring unnecessary Azure costs. For more information, see [how to start and stop compute](../how-to/create-manage-compute.md#start-or-stop-a-compute-instance). --1. Select the name of the evaluation (such as *evaluation_evaluate_from_flow_variant_0*) to see the evaluation metrics. -- :::image type="content" source="../media/tutorials/chat/evaluate-view-results-detailed.png" alt-text="Screenshot of the detailed metrics results page." lightbox="../media/tutorials/chat/evaluate-view-results-detailed.png"::: --For more information, see [view evaluation results](../how-to/evaluate-flow-results.md). --## Deploy the flow --Now that you [built a flow](#create-a-prompt-flow-from-the-playground) and completed a metrics-based [evaluation](#evaluate-the-flow-using-a-question-and-answer-evaluation-dataset), it's time to create your online endpoint for real-time inference. That means you can use the deployed flow to answer questions in real time. --Follow these steps to deploy a prompt flow as an online endpoint from [AI Studio](https://ai.azure.com). --1. Have a prompt flow ready for deployment. 
If you don't have one, see the previous sections or [how to build a prompt flow](../how-to/flow-develop.md). -1. Optional: Select **Chat** to test if the flow is working correctly. Testing your flow before deployment is a recommended best practice. --1. Select **Deploy** on the flow editor. -- :::image type="content" source="../media/tutorials/chat/deploy-from-flow.png" alt-text="Screenshot of the deploy button from a prompt flow editor." lightbox = "../media/tutorials/chat/deploy-from-flow.png"::: --1. Provide the requested information on the **Basic Settings** page in the deployment wizard. Select **Next** to proceed to the advanced settings pages. -- :::image type="content" source="../media/tutorials/chat/deploy-basic-settings.png" alt-text="Screenshot of the basic settings page in the deployment wizard." lightbox = "../media/tutorials/chat/deploy-basic-settings.png"::: --1. On the **Advanced settings - Endpoint** page, leave the default settings and select **Next**. -1. On the **Advanced settings - Deployment** page, leave the default settings and select **Next**. -1. On the **Advanced settings - Outputs & connections** page, make sure all outputs are selected under **Included in endpoint response**. -- :::image type="content" source="../media/tutorials/chat/deploy-advanced-outputs-connections.png" alt-text="Screenshot of the advanced settings page in the deployment wizard." lightbox = "../media/tutorials/chat/deploy-advanced-outputs-connections.png"::: --1. Select **Review + Create** to review the settings and create the deployment. -1. Select **Create** to deploy the prompt flow. -- :::image type="content" source="../media/tutorials/chat/deploy-review-create.png" alt-text="Screenshot of the review prompt flow deployment settings page." lightbox = "../media/tutorials/chat/deploy-review-create.png"::: --For more information, see [how to deploy a flow](../how-to/flow-deploy.md). 
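Once the deployment succeeds, a copilot application can call the endpoint over REST. The following Python sketch is illustrative only: the endpoint URL, key, and the `question`/`chat_history` input names are assumptions based on this tutorial's flow. Copy the real scoring URL, key, and request schema from your deployment's **Consume** tab.

```python
import json
from urllib import request

# Hypothetical values: replace with the scoring URL and key shown on
# your deployment's Consume tab in AI Studio.
ENDPOINT_URL = "https://<your-endpoint>.inference.ml.azure.com/score"
API_KEY = "<your-endpoint-key>"


def build_payload(question, chat_history=None):
    """Serialize the inputs this flow expects: a question plus chat history."""
    return json.dumps(
        {"question": question, "chat_history": chat_history or []}
    ).encode("utf-8")


def ask(question):
    """POST a question to the deployed flow and return the parsed JSON reply."""
    req = request.Request(
        ENDPOINT_URL,
        data=build_payload(question),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```

The samples on the **Consume** tab are authoritative for your endpoint; verify the exact header the endpoint expects for key-based authentication before relying on the `Authorization` header shown here.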
--## Use the deployed flow --Your copilot application can use the deployed prompt flow to answer questions in real time. You can call the deployed flow through its REST endpoint or the SDK. --1. To view the status of your deployment in [AI Studio](https://ai.azure.com), select **Deployments** from the left navigation. -- :::image type="content" source="../media/tutorials/chat/deployments-state-updating.png" alt-text="Screenshot of the prompt flow deployment state in progress." lightbox = "../media/tutorials/chat/deployments-state-updating.png"::: -- Once the deployment is created successfully, you can select the deployment to view the details. -- > [!NOTE] - > If you see a message that says "Currently this endpoint has no deployments" or the **State** is still *Updating*, you might need to select **Refresh** after a couple of minutes to see the deployment. --1. Optionally, on the details page, you can change the authentication type or enable monitoring. -- :::image type="content" source="../media/tutorials/chat/deploy-authentication-monitoring.png" alt-text="Screenshot of the prompt flow deployment details page." lightbox = "../media/tutorials/chat/deploy-authentication-monitoring.png"::: --1. Select the **Consume** tab. You can see code samples and the REST endpoint for your copilot application to use the deployed flow. -- :::image type="content" source="../media/tutorials/chat/deployments-score-url-samples.png" alt-text="Screenshot of the prompt flow deployment endpoint and code samples." lightbox = "../media/tutorials/chat/deployments-score-url-samples.png"::: --## Clean up resources --To avoid incurring unnecessary Azure costs, you should delete the resources you created in this tutorial if they're no longer needed. To manage resources, you can use the [Azure portal](https://portal.azure.com?azure-portal=true). 
--You can also [stop or delete your compute instance](../how-to/create-manage-compute.md#start-or-stop-a-compute-instance) in [AI Studio](https://ai.azure.com) as needed. ---## Next steps --* Learn more about [prompt flow](../how-to/prompt-flow.md). -* [Deploy an enterprise chat web app](./deploy-chat-web-app.md). |
api-management | Howto Use Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-use-analytics.md | If you need to configure one, the following are brief steps to send gateway logs 1. Enter a descriptive name for the diagnostic setting. 1. In **Logs**, select **Logs related to ApiManagement Gateway**. 1. In **Destination details**, select **Send to Log Analytics** and select a Log Analytics workspace in the same or a different subscription. If you need to create a workspace, see [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).-1. Accept defaults for other settings, or customize as needed. Select **Save**. +1. Make sure **Resource specific** is selected as the destination table. +1. Select **Save**. ### Access the dashboard Available operations return report records by API, geography, API operations, pr * For an introduction to Azure Monitor features in API Management, see [Tutorial: Monitor published APIs](api-management-howto-use-azure-monitor.md) * For detailed HTTP logging and monitoring, see [Monitor your APIs with Azure API Management, Event Hubs, and Moesif](api-management-log-to-eventhub-sample.md). * Learn about integrating [Azure API Management with Azure Application Insights](api-management-howto-app-insights.md).-* Learn about [Built-in API analytics dashboard retirement (March 2027)](breaking-changes/analytics-dashboard-retirement-march-2027.md) +* Learn about [Built-in API analytics dashboard retirement (March 2027)](breaking-changes/analytics-dashboard-retirement-march-2027.md) |
app-service | Tutorial Connect Msi Key Vault Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-python.md | + + Title: 'Tutorial: Python connect to Azure services securely with Key Vault' +description: Learn how to secure connectivity to back-end Azure services that don't support managed identity natively from a Python web app +ms.devlang: python +# ms.devlang: python, azurecli + Last updated : 08/23/2024++++++++# Tutorial: Secure Cognitive Service connection from Python App Service using Key Vault +++## Configure Python app ++Clone the sample repository locally and deploy the sample application to App Service. Replace *\<app-name>* with a unique name. ++```azurecli-interactive +# Clone and prepare sample application +git clone https://github.com/Azure-Samples/app-service-language-detector.git +cd app-service-language-detector/python +zip -r default.zip . ++# Save app name as variable for convenience +appName=<app-name> ++az appservice plan create --resource-group $groupName --name $appName --sku FREE --location $region --is-linux +az webapp create --resource-group $groupName --plan $appName --name $appName --runtime "python:3.11" +az webapp config appsettings set --resource-group $groupName --name $appName --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true +az webapp deploy --resource-group $groupName --name $appName --src-path ./default.zip +``` ++The preceding commands: ++* Create a Linux App Service plan +* Create a web app for Python 3.11 +* Configure the web app to install the Python packages on deployment +* Upload the zip file and install the Python packages ++## Configure secrets as app settings + |
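Once a secret is exposed to the app as an app setting (covered in the section above's heading), the Python code reads it like any other environment variable, because App Service resolves a Key Vault reference before injecting the value into the process. A minimal sketch, assuming a hypothetical setting name `CS_ACCOUNT_KEY`; use whatever name you actually configure:

```python
import os


def get_secret_setting(name="CS_ACCOUNT_KEY"):
    """Read an app setting; in App Service, a Key Vault reference such as
    @Microsoft.KeyVault(SecretUri=...) is resolved to the secret's value
    before it reaches the process environment."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"App setting {name} is not configured")
    return value
```

Failing fast with a clear error when the setting is missing makes a misconfigured Key Vault reference easier to diagnose than an empty-string key silently reaching the Cognitive Services call.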
application-gateway | Tutorial Ingress Controller Add On Existing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md | You can use Azure CLI or portal to enable the [application gateway ingress contr In this tutorial, you learn how to: > [!div class="checklist"]-> * Create a resource group -> * Create a new AKS cluster -> * Create a new application gateway -> * Enable the AGIC add-on in the existing AKS cluster through Azure CLI -> * Enable the AGIC add-on in the existing AKS cluster through Azure portal -> * Peer the application gateway virtual network with the AKS cluster virtual network -> * Deploy a sample application using AGIC for ingress on the AKS cluster -> * Check that the application is reachable through application gateway +> +> * Create a resource group. +> * Create a new AKS cluster. +> * Create a new application gateway. +> * Enable the AGIC add-on in the existing AKS cluster through Azure CLI. +> * Enable the AGIC add-on in the existing AKS cluster through Azure portal. +> * Peer the application gateway virtual network with the AKS cluster virtual network. +> * Deploy a sample application using AGIC for ingress on the AKS cluster. +> * Check that the application is reachable through application gateway. 
[!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] If you'd like to continue using Azure CLI, you can continue to enable the AGIC a appgwId=$(az network application-gateway show --name myApplicationGateway --resource-group myResourceGroup -o tsv --query "id") az aks enable-addons --name myCluster --resource-group myResourceGroup --addon ingress-appgw --appgw-id $appgwId ```-## Enable the AGIC add-on in existing AKS cluster through Azure portal -If you'd like to use Azure portal to enable AGIC add-on, go to [(https://aka.ms/azure/portal/aks/agic)](https://aka.ms/azure/portal/aks/agic) and navigate to your AKS cluster through the portal link. Select the **Networking** menu item under **Settings**. From there, go to the **Virtual network integration** tab within your AKS cluster. You'll see an **Application gateway ingress controller** section, which allows you to enable and disable the ingress controller add-on. Select the **Manage** button, then the checkbox next to **Enable ingress controller**. Select the application gateway you created, **myApplicationGateway** and then select **Save**. +## Enable the AGIC add-on in existing AKS cluster through Azure portal +1. From the [Azure portal home page](https://portal.azure.com/), navigate to your AKS cluster resource. +2. In the service menu, under **Settings**, select **Networking** > **Virtual network integration**. +3. Under **Application Gateway ingress controller**, select **Manage**. +4. On the **Application Gateway ingress controller** page, select the **checkbox** to enable the ingress controller, and then select your existing application gateway from the dropdown list. +5. Select **Save**. 
> [!IMPORTANT] > If you use an application gateway in a different resource group than the AKS cluster resource group, the managed identity **_ingressapplicationgateway-{AKSNAME}_** that is created must have **Network Contributor** and **Reader** roles set in the application gateway resource group. |
automation | Runbook Input Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runbook-input-parameters.md | In the label beneath the input box, you can see the properties that have been se #### Start a runbook using an SDK and assign parameters -* **Azure Resource Manager method:** You can start a runbook using the SDK of a programming language. Below is a C# code snippet for starting a runbook in your Automation account. You can view all the code at our [GitHub repository](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/automation/Microsoft.Azure.Management.Automation/tests/TestSupport/AutomationTestBase.cs). +* **Azure Resource Manager method:** You can start a runbook using the SDK of a programming language. Below is a C# code snippet for starting a runbook in your Automation account. You can view all the code at our [GitHub repository](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/automation/Azure.ResourceManager.Automation/tests/AutomationManagementTestBase.cs). ```csharp public Job StartRunbook(string runbookName, IDictionary<string, string> parameters = null) } ``` -* **Azure classic deployment model method:** You can start a runbook by using the SDK of a programming language. Below is a C# code snippet for starting a runbook in your Automation account. You can view all the code at our [GitHub repository](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/automation/Microsoft.Azure.Management.Automation/tests/TestSupport/AutomationTestBase.cs). +* **Azure classic deployment model method:** You can start a runbook by using the SDK of a programming language. Below is a C# code snippet for starting a runbook in your Automation account. 
You can view all the code at our [GitHub repository](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/automation/Azure.ResourceManager.Automation/tests/AutomationManagementTestBase.cs). ```csharp public Job StartRunbook(string runbookName, IDictionary<string, string> parameters = null) |
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md | +## August 13, 2024 ++**Image tag**: `v1.32.0_2024-08-13` ++For complete release version information, review [Version log](version-log.md#august-13-2024). + ## July 9, 2024 **Image tag**: `v1.31.0_2024-07-09` |
azure-arc | Version Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md | -## July 9 2024 +## August 13, 2024 ++|Component|Value| +|--|--| +|Container images tag |`v1.32.0_2024-08-13`| +|**CRD names and version:**| | +|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| +|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| +|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2| +|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| +|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4| +|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3| +|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6| +|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1| +|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13| +|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2| +|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1| +|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1| +|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5| +|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5| +|Azure Resource Manager (ARM) API version|2023-11-01-preview| +|`arcdata` Azure CLI extension version|1.5.17 ([Download](https://aka.ms/az-cli-arcdata-ext))| +|Arc-enabled Kubernetes helm chart extension version|1.32.0| +|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))| +|SQL Database version | 972 | ++## July 9, 2024 |Component|Value| |--|--| |
azure-functions | Functions Core Tools Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md | description: Reference documentation that supports the Azure Functions Core Tool - ignite-2023 Previously updated : 08/20/2023 Last updated : 08/22/2024 # Azure Functions Core Tools reference Gets settings from a specific function app. func azure functionapp fetch-app-settings <APP_NAME> ``` +`func azure functionapp fetch-app-settings` supports these optional arguments: ++| Option | Description | +| | -- | +| **`--access-token`** | Lets you use a specific access token when performing authenticated `azure` actions. | +| **`--access-token-stdin`** | Reads a specific access token from standard input. Use this when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token). | +| **`--management-url`** | Sets the management URL for your cloud. Use this when running in a sovereign cloud. | +| **`--slot`** | Optional name of a specific slot to which to publish. | +| **`--subscription`** | Sets the default subscription to use. | + For more information, see [Download application settings](functions-run-local.md#download-application-settings). Settings are downloaded into the local.settings.json file for the project. On-screen values are masked for security. You can protect settings in the local.settings.json file by [enabling local encryption](functions-run-local.md#encrypt-the-local-settings-file). Returns a list of the functions in the specified function app. ```command func azure functionapp list-functions <APP_NAME> ```++`func azure functionapp list-functions` supports these optional arguments: ++| Option | Description | +| | -- | +| **`--access-token`** | Lets you use a specific access token when performing authenticated `azure` actions. 
| +| **`--access-token-stdin`** | Reads a specific access token from standard input. Use this when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token). | +| **`--management-url`** | Sets the management URL for your cloud. Use this when running in a sovereign cloud. | +| **`--show-keys`** | Shows HTTP function endpoint URLs that include their default access keys. These URLs can be used to access function endpoints with `function` level [HTTP authentication](functions-bindings-http-webhook-trigger.md#http-auth). | +| **`--slot`** | Optional name of a specific slot to which to publish. | +| **`--subscription`** | Sets the default subscription to use. | + ## func azure functionapp logstream Connects the local command prompt to streaming logs for the function app in Azure. func azure functionapp logstream <APP_NAME> The default timeout for the connection is 2 hours. You can change the timeout by adding an app setting named [SCM_LOGSTREAM_TIMEOUT](functions-app-settings.md#scm_logstream_timeout), with a timeout value in seconds. Not yet supported for Linux apps in the Consumption plan. For these apps, use the `--browser` option to view logs in the portal. -The `deploy` action supports the following options: +The `func azure functionapp logstream` command supports these optional arguments: | Option | Description | | | -- |+| **`--access-token`** | Lets you use a specific access token when performing authenticated `azure` actions. | +| **`--access-token-stdin`** | Reads a specific access token from standard input. Use this when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token). | | **`--browser`** | Open Azure Application Insights Live Stream for the function app in the default browser. |+| **`--management-url`** | Sets the management URL for your cloud. 
Use this when running in a sovereign cloud. | +| **`--slot`** | Optional name of a specific slot to which to publish. | +| **`--subscription`** | Sets the default subscription to use. | For more information, see [Enable streaming execution logs in Azure Functions](streaming-logs.md). The following publish options apply, based on version: | **`--overwrite-settings -y`** | Suppress the prompt to overwrite app settings when `--publish-local-settings -i` is used.| | **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using a [local storage emulator](functions-develop-local.md#local-storage-emulator), first change the app setting to an [actual storage connection](#func-azure-storage-fetch-connection-string). | | **`--publish-settings-only`**, **`-o`** | Only publish settings and skip the content. Default is prompt. |+| **`--show-keys`** | Shows HTTP function endpoint URLs that include their default access keys. These URLs can be used to access function endpoints with `function` level [HTTP authentication](functions-bindings-http-webhook-trigger.md#http-auth). | | **`--slot`** | Optional name of a specific slot to which to publish. | | **`--subscription`** | Sets the default subscription to use. | |
azure-functions | Functions Run Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md | If you don't have these tools installed, you need to instead [get a valid access ## <a name="project-file-deployment"></a>Deploy project files ::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-typescript"-To publish your local code to a function app in Azure, use the [`func azure functionapp publish publish`](./functions-core-tools-reference.md#func-azure-functionapp-publish) command, as in the following example: +To publish your local code to a function app in Azure, use the [`func azure functionapp publish`](./functions-core-tools-reference.md#func-azure-functionapp-publish) command, as in the following example: ``` func azure functionapp publish <FunctionAppName> |
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | -We strongly recommended to always update to the latest version, or opt in to the [Automatic Extension Update](/azure/virtual-machines/automatic-extension-upgrade) feature. +We strongly recommend that you always update to the latest version, or opt in to the [Automatic Extension Update](/azure/virtual-machines/automatic-extension-upgrade) feature. A version is not automatically rolled out until it meets a high quality bar, which can take as long as 5 weeks after the initial release. [//]: # "DON'T change the format (column schema, etc.) of the table without consulting glinuxagent alias. The [Azure Monitor Linux Agent Troubleshooting Tool](https://github.com/Azure/azure-linux-extensions/blob/master/AzureMonitorAgent/ama_tst/AMA-Troubleshooting-Tool.md) parses the table at runtime to determine the latest version of AMA; altering the format could degrade some of the functions of the tool." ## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|-| August 2024 | **Windows**<ul><li>Added columns to the SecurityEvent table: Keywords, Opcode, Correlation, ProcessId, ThreadId, EventRecordId.</li><li>AMA: Support AMA Client Installer for selected partners.</li></ul>**Linux Features**<ul><li>Enable Dynamic Linking of OpenSSL 1.1 in all regions</li><li>Add Computer field to Custom Logs</li><li>Add EventHub upload support for Custom Logs </li><li>Reliability improvement for upload task scheduling</li><li>Added support for SUSE15 SP5, and AWS 3 distributions</li></ul>**Linux Fixes**<ul><li>Fix Direct upload to storage for perf counters when no other destination is configured. You don't see perf counters If storage was the only configured destination for perf counters, they wouldn't see perf counters in their blob or table.</li><li>Fluent-Bit updated to version 3.0.7. 
This fixes the issue with Fluent-Bit creating junk files in the root directory on process shutdown.</li><li>Fix proxy for system-wide proxy using http(s)_proxy env var </li><li>Support for syslog hostnames that are up to 255characters</li><li>Stop sending rows longer than 1MB. This exceeds ingestion limits and destabilizes the agent. Now the row is gracefully dropped and a diagnostic message is written.</li><li>Set max disk space used for rsyslog spooling to 1GB. There was no limit before which could lead to high memory usage.</li><li>Use random available TCP port when there is a port conflict with AMA port 28230 and 28330 . This resolved issues where port 28230 and 28330 were already in uses by the customer which prevented data upload to Azure.</li></ul>| 1.29 | 1.32.6 | +| August 2024 | **Windows**<ul><li>Added columns to the SecurityEvent table: Keywords, Opcode, Correlation, ProcessId, ThreadId, EventRecordId.</li><li>AMA: Support AMA Client Installer for W365 Azure Virtual Desktop (AVD) tenants/partners.</li></ul>**Linux Features**<ul><li>Enable Dynamic Linking of OpenSSL 1.1 in all regions</li><li>Add Computer field to Custom Logs</li><li>Add EventHub upload support for Custom Logs</li><li>Reliability improvement for upload task scheduling</li><li>Added support for SUSE15 SP5, and AWS 3 distributions</li></ul>**Linux Fixes**<ul><li>Fix Direct upload to storage for perf counters when no other destination is configured. If storage was the only configured destination for perf counters, they weren't uploaded to the blob or table.</li><li>Fluent-Bit updated to version 3.0.7. This fixes the issue with Fluent-Bit creating junk files in the root directory on process shutdown.</li><li>Fix proxy for system-wide proxy using http(s)_proxy env var</li><li>Support for syslog hostnames that are up to 255 characters</li><li>Stop sending rows longer than 1MB. This exceeds ingestion limits and destabilizes the agent. 
Now the row is gracefully dropped and a diagnostic message is written.</li><li>Set max disk space used for rsyslog spooling to 1GB. There was no limit before, which could lead to high memory usage.</li><li>Use random available TCP port when there is a port conflict with AMA ports 28230 and 28330. This resolved issues where ports 28230 and 28330 were already in use by the customer, which prevented data upload to Azure.</li></ul>| 1.29 | 1.32.6 | | June 2024 |**Windows**<ul><li>Fix encoding issues with Resource ID field.</li><li>AMA: Support new ingestion endpoint for GovSG environment.</li><li>Upgrade AzureSecurityPack version to 4.33.0.1.</li><li>Upgrade Metrics Extension version to 2.2024.517.533.</li><li>Upgrade Health Extension version to 2024.528.1.</li></ul>**Linux**<ul><li>Coming Soon</li></ul>| 1.28.2 | | | May 2024 |**Windows**<ul><li>Upgraded Fluent-bit version to 3.0.5. This fix resolves a security issue in fluent-bit (NVD - CVE-2024-4323 (nist.gov)</li><li>Disabled Fluent-bit logging that caused disk exhaustion issues for some customers. Example error is Fluentbit log with "[C:\projects\fluent-bit-2e87g\src\flb_scheduler.c:72 errno=0] No error" fills up the entire disk of the server.</li><li>Fixed AMA extension getting stuck in deletion state on some VMs that are using Arc. This fix improves reliability.</li><li>Fixed AMA not using system proxy; this issue is a bug introduced in 1.26.0. The issue was caused by a new feature that uses the Arc agent's proxy settings. When the system proxy was set as None, the proxy was broken in 1.26.</li><li>Fixed Windows Firewall Logs log file rollover issues</li></ul>| 1.27.0 | | | April 2024 |**Windows**<ul><li>In preparation for the May 17 public preview of Firewall Logs, the agent completed the addition of a profile filter for Domain, Public, and Private Logs. 
</li><li>AMA running on an Arc enabled server will default to using the Arc proxy settings if available.</li><li>The AMA VM extension proxy settings override the Arc defaults.</li><li>Bug fix in MSI installer: Symptom - If there are spaces in the fluent-bit config path, AMA wasn't recognizing the path properly. AMA now adds quotes to the configuration path in fluent-bit.</li><li>Bug fix for Container Insights: Symptom - custom resource IDs weren't being honored.</li><li>Security issue fix: skip the deletion of files and directories whose paths contain a redirection (via Junction point, Hard links, Mount point, OB Symlinks etc.).</li><li>Updating MetricExtension package to 2.2024.328.1744.</li></ul>**Linux**<ul><li>AMA 1.30 now available in Arc.</li><li>New distribution support: Debian 12, RHEL CIS L2.</li><li>Fix for mdsd version 1.30.3 in persistence mode, which converted positive integers to float/double values ("3.0", "4.0") to type ulong, which broke Azure Stream Analytics.</li></ul>| 1.26.0 | 1.31.1 | |
azure-monitor | Azure Monitor Agent Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md | N/A ## Update > [!NOTE]-> The recommendation is to enable [Automatic Extension Upgrade](/azure/virtual-machines/automatic-extension-upgrade) to update installed extensions to the released (latest) version across all regions. Upgrades are issued in batches, so you may see some of your virtual machines, scale-sets or Arc-enabled servers get upgraded before others. If you need to upgrade an extension immediately, you may use the manual instructions below. +> The recommendation is to enable [Automatic Extension Upgrade](/azure/virtual-machines/automatic-extension-upgrade) to update installed extensions to the stable version across all regions. A version is not automatically rolled out until it meets a high quality bar, which can take as long as 5 weeks after the initial release. Upgrades are issued in batches, so you may see some of your virtual machines, scale sets, or Arc-enabled servers get upgraded before others. If you need to upgrade an extension immediately, you may use the manual instructions below. + #### [Portal](#tab/azure-portal) To perform a one-time update of the agent, you must first uninstall the existing agent version. Then install the new version as described. |
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | Last updated 08/13/2024 Migration is a complex task. Start planning your migration to Azure Monitor Agent using the information in this article as a guide. > [!IMPORTANT]-> The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). You can expect the following when you use the MMA or OMS agent after this date. +> The Log Analytics agent will be [retired on **August 31, 2024**](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). The deprecation doesn't apply to MMA agents connected exclusively to an on-premises SCOM installation. +> +> You can expect the following when you use the MMA or OMS agent after August 31, 2024. > - **Data upload:** Cloud ingestion services will gradually reduce support for MMA agents, which may result in decreased support and potential compatibility issues for MMA agents over time. Ingestion for MMA will be unchanged until February 1, 2025. > - **Installation:** The ability to install the legacy agents will be removed from the Azure portal and installation policies for legacy agents will be removed. You can still install the MMA agent extension as well as perform offline installations. > - **Customer Support:** You will not be able to get support for legacy agent issues. |
azure-monitor | Azure Monitor Agent Mma Removal Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-mma-removal-tool.md | The utility works in two steps: 2. *Removal*: The utility removes the legacy agent from machines listed in the CSV file. You should edit the list of machines in the CSV file to ensure that only machines you want the agent removed from are present. +> [!NOTE] +> The removal doesn't work on MMA agents that were installed using the MSI installer. It only works on the VM extensions. +> + ## Prerequisites Do all the setup steps on an internet-connected machine. You need: |
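Trimming the discovery CSV down to only the machines you intend to touch, as the *Removal* step above requires, can be scripted. A sketch assuming hypothetical `Name` and `ResourceGroup` columns (the real column names come from the utility's discovery output):

```python
import csv
import io

def filter_machines(csv_text: str, keep: set) -> str:
    """Keep only rows whose Name column is in the allow-list."""
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row["Name"] in keep:
            writer.writerow(row)
    return out.getvalue()

discovered = "Name,ResourceGroup\nvm-old-1,rg-a\nvm-keep,rg-b\nvm-old-2,rg-a\n"
# Only the two machines we want the agent removed from survive the filter.
print(filter_machines(discovered, {"vm-old-1", "vm-old-2"}))
```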
azure-monitor | Change Analysis Track Outages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-track-outages.md | Visit the web app URL to view the following error: In the Azure portal, navigate to the Change Analysis overview page. Since you triggered a web app outage, you can see a change entry for `AzureStorageConnection`: - Since the connection string is a secret value, we hide it on the overview page for security purposes. With sufficient permission to read the web app, you can select the change to view details about the old and new values: :::image type="content" source="./media/change-analysis/view-change-details.png" alt-text="Screenshot of viewing change details for troubleshooting."::: Knowing what changed in your application's networking resources is critical due The sample application includes a virtual network to make sure the application remains secure. Via the Azure portal, you can view and assess the network changes captured by Change Analysis. -- ## Next steps Learn more about [Change Analysis](./change-analysis.md). |
azure-monitor | Kubernetes Monitoring Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md | After the policy is assigned to the subscription, whenever you create a new clus ---- ## Enable full monitoring with Azure portal-Using the Azure portal, you can enable both Managed Prometheus and Container insights at the same time. --> [!NOTE] -> If you want to enable Managed Prometheus without Container insights, then [enable it from the Azure Monitor workspace](./kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) as described below. --### New AKS cluster (Prometheus and Container insights) --When you create a new AKS cluster in the Azure portal, you can enable Prometheus, Container insights, and Grafana from the **Integrations** tab. In the Azure Monitor section, select either **Default configuration** or **Custom configuration** if you want to specify which workspaces to use. You can perform additional configuration once the cluster is created. ---### Existing cluster (Prometheus and Container insights) --This option enables Container insights and optionally Prometheus and Grafana on an existing AKS cluster. -1. Either select **Insights** from the cluster's menu OR select **Containers** from the **Monitor** menu, **Unmonitored clusters** tab, and click **Enable** next to a cluster. - 1. If Container insights isn't enabled for the cluster, then you're presented with a screen identifying which of the features have been enabled. Click **Configure monitoring**. 
+### New AKS cluster (Prometheus, Container insights, and Grafana) - :::image type="content" source="media/aks-onboard/configure-monitoring-screen.png" lightbox="media/aks-onboard/configure-monitoring-screen.png" alt-text="Screenshot that shows the configuration screen for a cluster."::: +When you create a new AKS cluster in the Azure portal, you can enable Prometheus, Container insights, and Grafana from the **Monitoring** tab. Make sure that you check the **Enable Container Logs**, **Enable Prometheus metrics**, and **Enable Grafana** checkboxes. - 2. If Container insights has already been enabled on the cluster, select the **Monitoring Settings** button to modify the configuration. - :::image type="content" source="media/aks-onboard/monitor-settings-button.png" lightbox="media/aks-onboard/monitor-settings-button.png" alt-text="Screenshot that shows the monitoring settings button for a cluster."::: +### Existing cluster (Prometheus, Container insights, and Grafana) -2. **Container insights** will be enabled. **Select** the checkboxes for **Enable Prometheus metrics** and **Enable Grafana** if you also want to enable them for the cluster. If you have existing Azure Monitor workspace and Grafana workspace, then they're selected for you. -- :::image type="content" source="media/prometheus-metrics-enable/configure-container-insights.png" lightbox="media/prometheus-metrics-enable/configure-container-insights.png" alt-text="Screenshot that shows the dialog box to configure Container insights with Prometheus and Grafana."::: --3. Click **Advanced settings** to select alternate workspaces or create new ones. The **Cost presets** setting allows you to modify the default collection details to reduce your monitoring costs. See [Enable cost optimization settings in Container insights](./container-insights-cost-config.md) for details. 
-- :::image type="content" source="media/aks-onboard/advanced-settings.png" lightbox="media/aks-onboard/advanced-settings.png" alt-text="Screenshot that shows the advanced settings dialog box."::: --4. Click **Configure** to save the configuration. +1. Navigate to your AKS cluster in the Azure portal. +2. In the service menu, under **Monitoring**, select **Insights** > **Configure monitoring**. +3. Container insights is already enabled. Select the **Enable Prometheus metrics** and **Enable Grafana** checkboxes. If you have an existing Azure Monitor workspace and Grafana workspace, they're selected for you. +4. Select **Advanced settings** if you want to select alternate workspaces or create new ones. The **Cost presets** setting allows you to modify the default collection details to reduce your monitoring costs. See [Enable cost optimization settings in Container insights](./container-insights-cost-config.md) for details. +5. Select **Configure**. ### Existing cluster (Prometheus only) -This option enables Prometheus metrics on a cluster without enabling Container insights. --1. Open the **Azure Monitor workspaces** menu in the Azure portal and select your workspace. -1. Select **Monitored clusters** in the **Managed Prometheus** section to display a list of AKS clusters. -1. Select **Configure** next to the cluster you want to enable. -- :::image type="content" source="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" lightbox="media/prometheus-metrics-enable/azure-monitor-workspace-configure-prometheus.png" alt-text="Screenshot that shows an Azure Monitor workspace with a Prometheus configuration."::: --### Existing cluster (Add Prometheus) ---1. Select **Containers** from the **Monitor** menu, **Monitored clusters** tab, and click **Configure** next to a cluster in the **Managed Prometheus** column. -+1. Navigate to your AKS cluster in the Azure portal. +2. 
In the service menu, under **Monitoring**, select **Insights** > **Configure monitoring**. +3. Select the **Enable Prometheus metrics** checkbox. +4. Select **Advanced settings** if you want to select alternate workspaces or create new ones. The **Cost presets** setting allows you to modify the default collection details to reduce your monitoring costs. +5. Select **Configure**. ## Enable Windows metrics collection (preview) |
azure-monitor | Profiler Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-settings.md | Title: Configure Application Insights Profiler | Microsoft Docs description: Use the Application Insights Profiler settings pane to see Profiler status and start profiling sessions ms.contributor: Charles.Weininger Previously updated : 07/11/2024 Last updated : 08/23/2024 # Configure Application Insights Profiler Memory % | Percentage of memory used while Profiler was running. [enable-app-insights]: ./media/profiler-settings/enable-app-insights-blade-01.png [update-site-extension]: ./media/profiler-settings/update-site-extension-01.png [change-and-save-appinsights]: ./media/profiler-settings/change-and-save-app-insights-01.png-[app-settings-for-profiler]: ./media/profiler-settings/app-settings-for-profiler-01.png [check-for-extension-update]: ./media/profiler-settings/check-extension-update-01.png [profiler-timeout]: ./media/profiler-settings/profiler-time-out.png |
azure-monitor | Create Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/scom-manage-instance/create-key-vault.md | Azure Key Vault is a cloud service that provides a secure store for keys, secret 1. In the Azure portal, search for and select **Key vaults**. - :::image type="Key vaults in portal" source="media/create-key-vault/azure-portal-key-vaults-inline.png" alt-text="Screenshot that shows the icon for key vaults in the Azure portal." lightbox="media/create-key-vault/azure-portal-key-vaults-expanded.png"::: + :::image type="Key vaults in portal" source="media/create-key-vault/azure-portal-key-vaults.png" alt-text="Screenshot that shows the icon for key vaults in the Azure portal."::: The **Key vaults** page opens. 1. Select **Create**. - :::image type="Key vault" source="media/create-key-vault/key-vaults-inline.png" alt-text="Screenshot that shows the Create button for creating a key vault." lightbox="media/create-key-vault/key-vaults-expanded.png"::: - 1. For **Basics**, do the following: - **Project details**: - **Subscription**: Select the subscription. Azure Key Vault is a cloud service that provides a secure store for keys, secret - **Purge protection**: We recommend enabling this feature to have a mandatory retention period. :::image type="Create a key vault" source="media/create-key-vault/create-a-key-vault.png" alt-text="Screenshot that shows basic information for creating a key vault.":::+ 1. Select **Next**. For now, no change is required in access configuration. Access configuration is done in [step 5](create-user-assigned-identity.md). 1. For **Networking**, do the following: Azure Key Vault is a cloud service that provides a secure store for keys, secret - Under **Public Access**, for **Allow access from**, select **All networks**. 
:::image type="Networking tab" source="media/create-key-vault/networking-inline.png" alt-text="Screenshot that shows selections for enabling public access on the Networking tab." lightbox="media/create-key-vault/networking-expanded.png":::+ 1. Select **Next**. 1. For **Tags**, select the tags if required and select **Next**. 1. For **Review + create**, review the selections and select **Create** to create the key vault. - :::image type="Tab for reviewing selections before creating a key vault" source="media/create-key-vault/review.png" alt-text="Screenshot that shows the tab for reviewing selections before you create a key vault."::: - ## Next steps - [Create a user-assigned identity](create-user-assigned-identity.md) |
azure-monitor | Snapshot Debugger App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger-app-service.md | After you've deployed your .NET App Services web app: 1. Snapshot Debugger is now enabled. - :::image type="content" source="./media/snapshot-debugger/snapshot-debugger-app-setting.png" alt-text="Screenshot showing App Setting for Snapshot Debugger."::: - ## Disable Snapshot Debugger To disable Snapshot Debugger for your App Services resource: |
azure-resource-manager | Resource Name Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md | In the following tables, the term alphanumeric refers to: > [!div class="mx-tableFixed"] > | Entity | Scope | Length | Valid Characters | > | | | | |-> | certificateOrders | resource group | 3-30 | Alphanumerics. | +> | certificateOrders | resource group | 3-50 | Alphanumerics. | ## Microsoft.CognitiveServices |
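A length-and-characters rule like the updated `certificateOrders` entry above (3-50 alphanumerics) is easy to pre-validate before submitting a deployment. A sketch based on our reading of the table, not an official SDK check:

```python
import re

# 3 to 50 alphanumeric characters, per the certificateOrders row above.
CERT_ORDER_NAME = re.compile(r"^[A-Za-z0-9]{3,50}$")

def is_valid_certificate_order_name(name: str) -> bool:
    """Return True if the name satisfies the documented length/character rule."""
    return CERT_ORDER_NAME.fullmatch(name) is not None

print(is_valid_certificate_order_name("myCertOrder01"))  # valid
print(is_valid_certificate_order_name("ab"))             # too short
print(is_valid_certificate_order_name("bad-name"))       # hyphen not allowed
```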
azure-web-pubsub | Concept Service Internals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-service-internals.md | Azure Web PubSub Service provides an easy way to publish/subscribe messages usin - Clients can be written in any language that has WebSocket support. - Both text and binary messages are supported within one connection.-- There's a simple protocol for clients to do direct client-to-client message publishing.+- A simple protocol allows clients to publish messages directly to each other. - The service manages the WebSocket connections for you. ## Terms var client2 = new WebSocket( ); ``` -A simple WebSocket client follows a client<->server architecture, as the below sequence diagram shows: +A simple WebSocket client follows a client<->server architecture, as the following sequence diagram shows: ![Diagram showing the sequence for a client connection.](./media/concept-service-internals/simple-client-sequence.png) 1. When the client starts a WebSocket handshake, the service tries to invoke the `connect` event handler for WebSocket handshake. Developers can use this handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups. 2. When the client is successfully connected, the service invokes a `connected` event handler. It works as a notification and doesn't block the client from sending messages. Developers can use this handler to do data storage and can respond with messages to the client. The service also pushes a `connected` event to all concerning event listeners, if any.-3. When the client sends messages, the service triggers a `message` event to the event handler to handle the messages sent. This event is a general event containing the messages sent in a WebSocket frame. Your code needs to dispatch the messages inside this event handler. 
If the event handler returns non-successful response code for, the service drops the client connection. The service also pushes a `message` event to all concerning event listeners, if any. If the service can't find any registered servers to receive the messages, the service also drops the connection. +3. When the client sends messages, the service triggers a `message` event to the event handler. This event contains the messages sent in a WebSocket frame. Your code needs to dispatch the messages inside this event handler. If the event handler returns a nonsuccessful response code, the service drops the client connection. The service also pushes a `message` event to all concerned event listeners, if any. If the service can't find any registered servers to receive the messages, the service also drops the client connection. 4. When the client disconnects, the service tries to trigger the `disconnected` event to the event handler once it detects the disconnect. The service also pushes a `disconnected` event to all concerning event listeners, if any. #### Scenarios A PubSub WebSocket client can: [PubSub WebSocket Subprotocol](./reference-json-webpubsub-subprotocol.md) contains the details of the `json.webpubsub.azure.v1` subprotocol. -You may have noticed that for a [simple WebSocket client](#the-simple-websocket-client), the _server_ is a **must have** role to receive the `message` events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to a group directly. It can also route messages to different event handlers / event listeners by customizing the _event_ the message belongs. 
+Notice that for a [simple WebSocket client](#the-simple-websocket-client), the _server_ is a required role to receive the `message` events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to a group directly. It can also route messages to different event handlers / event listeners by customizing the _event_ the message belongs to. #### Scenarios Client events fall into two categories: Synchronous events block the client workflow. - `connect`: This event is for the event handler only. When the client starts a WebSocket handshake, the event is triggered and developers can use the `connect` event handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups. - `message`: This event is triggered when a client sends a message.+ - Asynchronous events (non-blocking)- Asynchronous events don't block the client workflow, it acts as some notification to server. When such an event trigger fails, the service logs the error detail. + Asynchronous events don't block the client workflow. Instead, they send a notification to the server. When such an event trigger fails, the service logs the error detail. - `connected`: This event is triggered when a client connects to the service successfully. - `disconnected`: This event is triggered when a client disconnects from the service. The following graph describes the workflow. ![Diagram showing the client authentication workflow.](./media/concept-service-internals/client-connect-workflow.png) -As you may have noticed when we describe the PubSub WebSocket clients, that a client can publish to other clients only when it's _authorized_ to. 
The `role`s of the client determines the _initial_ permissions the client have: +A client can publish to other clients only when it's _authorized_ to. The `role`s of the client determine the _initial_ permissions the client has: | Role | Permission | | - | | For now, we don't support [WebHook-Request-Rate](https://github.com/cloudevents/ #### Authentication/Authorization between service and webhook +To establish secure authentication and authorization between your service and webhook, consider the following options and steps: + - Anonymous mode - Simple authentication that `code` is provided through the configured Webhook URL. - Use Microsoft Entra authorization. For more information, see [how to use managed identity](howto-use-managed-identity.md).- - Step1: Enable Identity for the Web PubSub service - - Step2: Select from existing Microsoft Entra application that stands for your webhook web app ++1. Enable Identity for the Web PubSub service. +1. Select an existing Microsoft Entra application that represents your webhook web app. ### Connection manager Currently we support [**Event Hubs**](https://azure.microsoft.com/products/event You need to register event listeners beforehand, so that when a client event is triggered, the service can push the event to the corresponding event listeners. See [this doc](./howto-develop-event-listener.md#configure-an-event-listener) for how to configure an event listener with an event hub endpoint. -You can configure multiple event listeners. The order of the event listeners doesn't matter. If an event matches with multiple event listeners, it will be sent to all the listeners it matches. See the following diagram for an example. Let's say you configure four event listeners at the same time. Then a client event that matches with three of those listeners will be sent to three listeners, leaving the rest one untouched. +You can configure multiple event listeners. 
The order in which you configure them doesn't affect their functionality. If an event matches multiple listeners, the event is dispatched to all matching listeners. See the following diagram for an example: if you configure four event listeners and a client event matches three of them, the event is sent to those three listeners and the remaining listener is ignored. :::image type="content" source="media/concept-service-internals/event-listener-data-flow.svg" alt-text="Event listener data flow diagram sample"::: -You can combine an [event handler](#event-handler) and event listeners for the same event. In this case, both event handler and event listeners will receive the event. +You can combine an [event handler](#event-handler) and event listeners for the same event. In this case, both the event handler and the event listeners receive the event. Web PubSub service delivers client events to event listeners using [CloudEvents AMQP extension for Azure Web PubSub](reference-cloud-events-amqp.md). ### Summary -You may have noticed that the _event handler role_ handles communication from the service to the server while _the manager role_ handles communication from the server to the service. After combining the two roles, the data flow between service and server looks similar to the following diagram using HTTP protocol. +The _event handler role_ handles communication from the service to the server while _the manager role_ handles communication from the server to the service. Once you combine the two roles, the data flow between service and server looks similar to the following diagram using the HTTP protocol. ![Diagram showing the Web PubSub service bi-directional workflow.](./media/concept-service-internals/http-service-server.png) |
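The `json.webpubsub.azure.v1` subprotocol discussed in this article carries group operations as plain JSON frames. A sketch of building `joinGroup` and `sendToGroup` payloads in Python (field names follow the subprotocol reference; the helper functions themselves are illustrative):

```python
import json

def join_group(group, ack_id=None):
    """Build a joinGroup frame; ackId is optional and requests an ack message."""
    msg = {"type": "joinGroup", "group": group}
    if ack_id is not None:
        msg["ackId"] = ack_id
    return json.dumps(msg)

def send_to_group(group, data):
    """Build a sendToGroup frame carrying a text payload."""
    return json.dumps({"type": "sendToGroup", "group": group,
                       "data": data, "dataType": "text"})

# These strings would be sent over a WebSocket opened with the
# json.webpubsub.azure.v1 subprotocol.
print(join_group("chat", ack_id=1))
print(send_to_group("chat", "hello"))
```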
azure-web-pubsub | Howto Authorize From Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-application.md | This sample shows how to assign a `Web PubSub Service Owner` role to a service p ![Screenshot of the response token when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-response.png) +7. For the oauth2/v2.0/token endpoint, pass `scope` instead of `resource`: ++ ``` + client_id: *your client id* + client_secret: *your client secret* + grant_type: client_credentials + scope: https://webpubsub.azure.com/.default + ``` + ## Sample codes using Microsoft Entra authorization We officially support four programming languages: |
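The v2.0 token request in step 7 can be assembled as follows. A sketch with placeholder credentials that builds the request without sending it; note the `scope` field where the v1.0 endpoint used `resource`:

```python
from urllib.parse import urlencode

def build_v2_token_request(tenant_id, client_id, client_secret):
    """Return (url, form-encoded body) for a client-credentials token request."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        # v2.0 endpoint takes `scope`; the v1.0 endpoint took `resource`.
        "scope": "https://webpubsub.azure.com/.default",
    })
    return url, body

url, body = build_v2_token_request("<tenant-id>", "<client-id>", "<secret>")
print(url)
print(body)
```

POSTing this body with a `Content-Type: application/x-www-form-urlencoded` header returns the access token.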
cloud-shell | Faq Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/faq-troubleshooting.md | description: This article answers common questions and explains how to troubleshoot Cloud Shell issues. ms.contributor: jahelmic Previously updated : 08/14/2024 Last updated : 08/22/2024 tags: azure-resource-manager command that requires elevated permissions. - `*.console.azure.com` - `*.servicebus.windows.net` +### Accessing Cloud Shell from VNET Isolation with a Private DNS Zone - Failed to request a terminal ++- **Details**: Cloud Shell uses Azure Relay for terminal connections. Cloud Shell can fail to + request a terminal due to DNS resolution problems. This failure can be caused when you launch a + nonisolated Cloud Shell session from within a VNet-isolated environment that includes a private + DNS Zone for the servicebus domain. ++- **Resolution**: There are two ways to resolve this problem. You can follow the instructions in + [Deploy Cloud Shell in a virtual network][01]. Or, you can add a DNS record for the Azure Relay + instance that Cloud Shell uses. ++ The following steps show you how to identify the DNS name of the Cloud Shell instance and how to + create a DNS record for that name. ++ 1. Try to start Cloud Shell using your web browser. Use the browser's Developer Tools to find the + Azure Relay instance name. In Microsoft Edge or Google Chrome, press the <kbd>F12</kbd> key to + open the Developer Tools. Select the **Network** tab. Find the **Search** box in the top right + corner. Search for `terminals?` to find the request for a Cloud Shell terminal. Select one + of the request entries found by the search. In the **Headers** tab, find the hostname in the + **Request URL**. The name is similar to + `ccon-prod-<region-name>-aci-XX.servicebus.windows.net`. ++ The following screenshot shows the Developer Tools in Microsoft Edge for a successful request + for a terminal. 
The hostname is `ccon-prod-southcentralus-aci-02.servicebus.windows.net`. In + your case, the request should be unsuccessful, but you can find the hostname you need to + resolve. ++ [![Screenshot of the browser developer tools.](media/faq-troubleshooting/devtools-small.png)](media/faq-troubleshooting/devtools-large.png#lightbox) ++ 1. From a host outside of your private network, run the `nslookup` command to find the IP address + of the hostname as found in the previous step. ++ ```bash + nslookup ccon-prod-southcentralus-aci-02.servicebus.windows.net + ``` ++ The results should look similar to the following example: ++ ```Output + Server: 168.63.129.16 + Address: 168.63.129.16#53 ++ Non-authoritative answer: + ccon-prod-southcentralus-aci-02.servicebus.windows.net canonical name = ns-sb2-prod-sn3-012.cloudapp.net. + Name: ns-sb2-prod-sn3-012.cloudapp.net + Address: 40.84.152.91 + ``` ++ 1. Add an A record for the public IP in the Private DNS Zone of the VNET isolated setup. For this + example, the DNS record would have the following properties: ++ - Name: ccon-prod-southcentralus-aci-02 + - Type: A + - TTL: 1 hour + - IP Address: 40.84.152.91 ++ For more information about creating DNS records in a private DNS zone, see + [Manage DNS record sets and records with Azure DNS][02]. + ## Managing Cloud Shell ### Manage personal data Use the following steps to delete your user settings. entry point is `ux.console.azure.us`; there's no corresponding `shell.azure.us`. - **Resolution**: Restrict access to `ux.console.azure.com` or `ux.console.azure.us` from your network. The Cloud Shell icon still exists in the Azure portal, but you can't connect to the- service. + service. ++<!-- link references --> +[01]: /azure/cloud-shell/vnet/overview +[02]: /azure/dns/dns-operations-recordsets-portal |
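Step 1 of the resolution above amounts to extracting the Azure Relay hostname from the `terminals?` request URL found in the browser's Developer Tools. A small sketch (the URL path and query are illustrative; only the hostname matters):

```python
from urllib.parse import urlparse

def relay_hostname(request_url):
    """Extract the Azure Relay hostname from a Cloud Shell terminals request URL."""
    return urlparse(request_url).hostname

url = ("https://ccon-prod-southcentralus-aci-02.servicebus.windows.net"
       "/some/path/terminals?rows=30")
print(relay_hostname(url))
# → ccon-prod-southcentralus-aci-02.servicebus.windows.net
```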
communication-services | Message Analysis Transparency Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advanced-messaging/message-analysis/message-analysis-transparency-faq.md | + + Title: Message Analysis transparency FAQ ++description: Learn about the Message Analysis transparency FAQ ++++ Last updated : 07/15/2024+++++# Message Analysis: Responsible AI transparency FAQ +++## What is Message Analysis? ++Message Analysis is an AI feature that analyzes incoming customer messages to extract insights that help developers enhance customer interactions. It detects the language, determines the intent (like a service question or complaint), and identifies key topics. Message Analysis can help businesses understand how well their communication strategies are working and improve their interactions with customers. ++## What can Message Analysis do? ++Message Analysis leverages advanced AI capabilities with Azure OpenAI to offer multifaceted functionality for customer interaction. It uses Azure OpenAI services to process messages received through platforms like WhatsApp. Here's what it does: ++* Language Detection: Identifies the language of the message, provides confidence scores, and translates the message into English if the original message isn't in English. +* Intent Recognition: Analyzes the message to determine the customer's purpose, such as seeking help or providing feedback. +* Key Phrase Extraction: Extracts important terms and names from the message, which can be crucial for context. ++This combination of features enables businesses to tailor their responses and better manage customer interactions. ++## What are Message Analysis's intended uses? ++* Providing Message Analysis for agents or departments helps businesses resolve issues efficiently and provide a seamless end-user experience. ++* Providing immediate feedback to customers by recognizing their needs. 
++* Enhancing the efficiency of customer service teams by prioritizing messages based on urgency or emotion. ++* Improving the quality of customer interactions by understanding the context and nuances of their queries or comments. ++## How was Message Analysis evaluated? What metrics are used to measure performance? ++* Pre-Deployment Testing: ++ * Unit Testing: Develop and run unit tests for each component of the system to ensure they function correctly in isolation. ++ * Integration Testing: Test the integration of different system components, such as the interaction between the webhook receiver, Azure OpenAI API, and Event Grid. Testing helps to identify issues where components interact. ++* Validation and Verification: ++ * Manual Verification: Conduct manual testing sessions where team members simulate real-world use cases to see how well the system processes and analyzes messages. ++ * Bug Bashing: Organize bug bashing events where team members and stakeholders work together to find as many issues as possible within a short time. These events can help uncover unexpected bugs or usability problems. ++* Feedback in Production: ++ * User Feedback: Collect and analyze feedback from end-users. This direct input can provide insights into how well the feature meets user needs and expectations. ++ * User Surveys and Interviews: Conduct surveys and interviews with users to gather qualitative data on the system's performance and the user experience. ++## What are the limitations of Message Analysis? How can users minimize the impact of Message Analysis's limitations when using the system? ++* False positives: ++ * The system may occasionally generate false positive analyses, particularly when dealing with ambiguous, conflicting, or sarcastic content, and culturally specific phrases and idioms from customer messages that it cannot accurately interpret. 
++* Unsupported Languages / Translation Issues: ++ * If the model doesn't support the language, it can't be detected correctly or translated properly. There may also be misleading translations in supported languages, which you might need to correct, or you might need to build your own translation models. ++ ++## Which operational factors and settings enable effective and responsible use of Message Analysis? ++* Explicit Meta-Prompt Components: Enhance the system's prompts with explicit meta-prompt components that guide the AI in understanding the context of the conversation better. This approach can improve the relevance and accuracy of the analysis by providing clearer instructions on what the system should focus on during its assessments. ++* Canned Responses for Sensitive Messages: Flag sensitive topics or questions in the analysis response. This helps ensure that replies are respectful and legally compliant, reducing the risk of errors or inappropriate responses generated by the AI. ++* Phased Release Plan: To gather feedback and ensure system stability, implement a staged rollout starting with a preview involving a limited user base before a full deployment. This phased approach enables real-time adjustments and risk management based on actual user experiences. ++* Update Incident Response Plan: Regularly update the incident response plan to include procedures that address the integration of new features or potential new threats. This strategy ensures the team is prepared to handle unexpected situations effectively and can maintain system integrity and user trust. ++* Rollback Plan: Develop a rollback strategy that enables quick reversion to a previous stable state if the new feature leads to unexpected issues. To ensure rapid response capabilities during critical situations, implement this strategy in the deployment pipelines. ++* Feedback Analysis: To gather actionable insights, regularly collect and analyze feedback from users, particularly from Contoso. 
This feedback is crucial for continuous improvement and helps the development team understand the real-world impact of the features, leading to more targeted and effective updates. ++## Next steps +- [Enable Message Analysis With Azure OpenAI Quick Start](../../../quickstarts/advanced-messaging/message-analysis/message-analysis-with-azure-openai-quickstart.md) +- [Handle Advanced Messaging events](../../../quickstarts/advanced-messaging/whatsapp/handle-advanced-messaging-events.md) +- [Send WhatsApp template messages](../whatsapp/template-messages.md) |
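The operational factors above (explicit meta-prompts and canned responses for sensitive messages) can be sketched in code. This is purely illustrative: the prompt wording, topic list, and function names are assumptions, not the actual prompts or logic used by Message Analysis.

```python
# Illustrative sketch of two operational safeguards described above.
# The prompt text and sensitive-topic list are hypothetical examples.
SENSITIVE_TOPICS = {"medical", "legal", "financial"}

def build_meta_prompt(channel: str) -> str:
    """Compose an explicit meta-prompt that scopes what the analysis should do."""
    return (
        f"You analyze customer messages from the {channel} channel. "
        "Report detected language, intent, and key phrases. "
        "Do not speculate beyond the message content."
    )

def respond(analysis_topics: set, generated_reply: str) -> str:
    """Swap in a canned response when the analysis flags a sensitive topic."""
    if analysis_topics & SENSITIVE_TOPICS:
        return "Thanks for reaching out. An agent will follow up with you shortly."
    return generated_reply

print(respond({"medical"}, "Here is some generated advice..."))
```

A canned fallback like this keeps AI-generated text out of the reply path whenever the flagged topics intersect the sensitive set.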
communication-services | Message Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advanced-messaging/message-analysis/message-analysis.md | + + Title: Connect Azure Communication Services to Azure OpenAI services for Message Analysis ++description: Provides a conceptual doc for connecting Azure Communication Services to Azure AI services for Message Analysis. +++ Last updated : 07/27/2024+++++# Connect Azure Communication Services to Azure OpenAI services for Message Analysis +++Azure Communication Services Advanced Messaging enables developers to create workflows for incoming messages within Azure Communication Services using event triggers. These triggers can initiate actions rooted in tailored business logic. Developers can analyze and gain insights from inbound messages to enhance the customer service experience. With the integration of the AI-driven event trigger, developers can use AI analysis to bolster customer support. Content analysis is streamlined using Azure OpenAI Services, which also supports various AI model preferences. +In addition, there's no need for developers and businesses to handle credentials themselves. When you link your Azure AI services, managed identities are used to access resources that you own. +You can incorporate Azure OpenAI capabilities into your app's messaging system by activating the Message Analysis feature within your Communication Service resources in the Azure portal. When you enable the feature, you configure an Azure OpenAI endpoint and select the preferred model. This approach empowers developers to meet their analytical objectives and scale effectively without investing considerable time and effort in developing and maintaining a custom AI solution for interpreting message content. ++> [!NOTE] +> This integration is supported in limited regions for Azure AI services. 
For more information about which regions are supported, see the limitations section at the bottom of this document. This integration only supports Multi-service Cognitive Service resources. If you're creating a new Azure AI services resource, we recommend that you create a Multi-service Cognitive Service resource. If you're connecting an existing resource, confirm that it's a Multi-service Cognitive Service resource. ++## Common Scenarios for Message Analysis +Developers can deliver differentiated customer experiences and modernize internal processes by easily integrating Azure OpenAI into the message flow. Some of the key use cases that you can incorporate in your applications are listed below. ++### Language Detection ++Identifies the language of the message, provides confidence scores, and translates the message into English if the original message isn't in English. ++### Intent Recognition +Analyzes the message to determine the customer's purpose, such as seeking help or providing feedback. ++### Key Phrase Extraction +Extracts important terms and names from the message, which can be crucial for context. ++### Build Automation ++As a business, you can build automation on top of incoming WhatsApp messages. 
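The analysis outputs described above (detected language, intent, key phrases) lend themselves to simple routing automation. A minimal sketch follows; the field names (`languageDetection`, `intentAnalysis`, `extractedKeyPhrases`) and queue names are illustrative assumptions — check the AdvancedMessageAnalysisCompleted event schema for the actual payload shape.

```python
# Sketch: routing an incoming message based on a Message Analysis result.
# Payload field names are illustrative, not the documented event schema.
def route_message(analysis: dict) -> str:
    """Pick a handling queue from the detected intent and language confidence."""
    language = analysis.get("languageDetection", {})
    intent = analysis.get("intentAnalysis", "").lower()
    if language.get("confidenceScore", 0.0) < 0.5:
        return "human-review"  # low confidence: let an agent triage
    if "help" in intent or "support" in intent:
        return "support-queue"
    if "feedback" in intent:
        return "feedback-queue"
    return "general-queue"

sample = {
    "languageDetection": {"language": "en", "confidenceScore": 0.98},
    "intentAnalysis": "Customer is seeking help with an order",
    "extractedKeyPhrases": ["order", "delivery"],
}
print(route_message(sample))
```

Prioritizing by intent like this is one way to realize the "enhancing the efficiency of customer service teams" scenario mentioned earlier.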
++## Azure AI services regions supported ++This integration between Azure Communication Services and Azure AI services is only supported in the following regions: +- centralus +- northcentralus +- southcentralus +- westcentralus +- eastus +- eastus2 +- westus +- westus2 +- westus3 +- canadacentral +- northeurope +- westeurope +- uksouth +- southafricanorth +- centralindia +- eastasia +- southeastasia +- australiaeast +- brazilsouth +- uaenorth ++## Next steps +- [Enable Message Analysis With Azure OpenAI Quick Start](../../../quickstarts/advanced-messaging/message-analysis/message-analysis-with-azure-openai-quickstart.md) +- [Handle Advanced Messaging events](../../../quickstarts/advanced-messaging/whatsapp/handle-advanced-messaging-events.md) +- [Send WhatsApp template messages](../whatsapp/template-messages.md) |
communication-services | Sdk Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md | Publishing locations for individual SDK packages: - Support for Android API Level 21 or Higher - Support for Java 7 or higher - Support for Android Studio 2.0-- **Android Auto (AAOS)** and **IoT devices running Android** are currently not supported++##### Android platform support ++The Android ecosystem is extensive, encompassing various versions and specialized platforms designed for diverse types of devices. The following table lists the Android platforms currently available: ++| Devices | Description | Support | +| -- | --| -- | +| Phones and tablets | Standard devices running [Android Commercial](https://developer.android.com/get-started). | Fully supported with [the video resolutions](./voice-video-calling/calling-sdk-features.md?#supported-video-resolutions). | +| TV apps or gaming | Apps running [Android TV](https://developer.android.com/tv), optimized for the TV experience, focused on streaming services and gaming. |Audio-only support | +| Smartwatches or wearable devices | Simple user interface and lower power consumption, designed to operate on small screens with limited hardware, using [Wear OS](https://wearos.google.com/). |Audio-only support | +| Automobile | Car head units running [Android Automotive OS (AAOS)](https://source.android.com/docs/automotive/start/what_automotive). |Audio-only support | +| Mirror auto applications | Apps that allow drivers to mirror their phone to a car's built-in screens, running [Android Auto](https://www.android.com/auto/). | Audio-only support | +| Custom devices | Custom devices or applications using [Android Open Source Project (AOSP)](https://source.android.com/), running custom operating systems for specialized hardware, like ruggedized devices, kiosks, or smart glasses; devices where performance, security, or customization is critical. 
|Audio-only support | ++> [!NOTE] +> We **only support video calls on phones and tablets**. For use cases involving video on non-standard devices or platforms (such as smart glasses or custom devices), we suggest [contacting us](https://github.com/Azure/communication) early in your development process to help determine the most suitable integration approach. ++If you find issues during your implementation, we encourage you to visit [the troubleshooting guide](./troubleshooting-info.md?#accessing-support-files-in-the-calling-sdk). #### iOS Calling SDK support In the future we may retire versions of the Communication Services SDKs, and we **You've integrated the v24 version of the SMS REST API into your application. Azure Communication releases v25.** -You'll get three years warning before these APIs stop working and are forced to update to v25. This update might require a code change. +You get three years' warning before these APIs stop working and you're forced to update to v25. This update might require a code change. **You've integrated the v2.02 version of the Calling SDK into your application. Azure Communication releases v2.05.** |
communication-services | Calling Sdk Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md | For example, this iframe allows both camera and microphone access: - Support for Android API Level 21 or Higher - Support for Java 7 or higher - Support for Android Studio 2.0-- **Android Auto** and **IoT devices running Android** are currently not supported++We highly recommend identifying and validating your scenario by visiting the supported [Android platforms](../sdk-options.md?#android-platform-support). ## iOS Calling SDK support |
communication-services | Message Analysis With Azure Openai Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/advanced-messaging/message-analysis/message-analysis-with-azure-openai-quickstart.md | + + Title: Message Analysis with Azure OpenAI ++description: "In this quickstart, you learn how to enable Message Analysis with Azure OpenAI" +++++ Last updated : 07/05/2024+++# Quickstart: Enable Message Analysis with Azure OpenAI +++Azure Communication Services enables you to receive Message Analysis results using your own Azure OpenAI resource. ++## Prerequisites ++- [Azure account with an active subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- [Register Event Grid Resource Provider](../../sms/handle-sms-events.md#register-an-event-grid-resource-provider). +- [Create an Azure Communication Services resource](../../create-communication-resource.md). +- [WhatsApp Channel under Azure Communication Services resource](../whatsapp/connect-whatsapp-business-account.md). +- [Azure OpenAI resource](../../../../ai-services/openai/how-to/create-resource.md). ++## Setup ++1. **Connect Azure Communication Services with Azure OpenAI:** ++ a. Open your Azure Communication Services resource and click the **Cognitive Services** tab. ++ b. If system-assigned managed identity isn't enabled, you'll need to enable it. ++ c. In the Cognitive Services tab, click **Enable Managed Identity**. + + :::image type="content" source="./media/get-started/enabled-identity.png" lightbox="./media/get-started/enabled-identity.png" alt-text="Screenshot of Enable Managed Identity button."::: ++ d. Enable system-assigned identity. This action begins the creation of the identity. A pop-up alert notifies you that the request is being processed. 
+ + :::image type="content" source="./media/get-started/enable-system-identity.png" lightbox="./media/get-started/enable-system-identity.png" alt-text="Screenshot of enable managed identity."::: ++ + e. When managed identity is enabled, the Cognitive Service tab displays a **Connect cognitive service** button to connect the two services. + + :::image type="content" source="./media/get-started/cognitive-services.png" lightbox="./media/get-started/cognitive-services.png" alt-text="Screenshot of Connect cognitive services button."::: ++ f. Click **Connect cognitive service**, then select the Subscription, Resource Group, and Resource, and click **Connect** in the context pane. + + :::image type="content" source="./media/get-started/choose-options.png" lightbox="./media/get-started/choose-options.png" alt-text="Screenshot of Subscription, Resource Group, and Resource in pane."::: ++2. **Enable Message Analysis:** ++ a. Go to the **Channels** page of the **Advanced Messaging** tab in your Azure Communication Services resource. ++ :::image type="content" source="./media/get-started/channels-page.png" lightbox="./media/get-started/channels-page.png" alt-text="Screenshot that shows the channels page."::: + ++ b. Select the channel of your choice to enable Message Analysis on. The system displays a channel details dialog. ++ :::image type="content" source="./media/get-started/channel-details-list.png" lightbox="./media/get-started/channel-details-list.png" alt-text="Screenshot that shows the channel details page."::: +++ c. Toggle **Allow Message Analysis**. Select one of the connected Azure OpenAI services and choose the desired deployment model for the Message Analysis feature. Then click **Save**. ++ :::image type="content" source="./media/get-started/enable-message-analysis.png" lightbox="./media/get-started/enable-message-analysis.png" alt-text="Screenshot that shows how to enable Message Analysis."::: +++3. 
**Set up Event Grid subscription:** ++ Subscribe to the Advanced Message Analysis Completed event by creating or modifying an event subscription. See [Subscribe to Advanced Messaging events](../whatsapp/handle-advanced-messaging-events.md#set-up-event-grid-viewer) for more details on creating event subscriptions. ++ :::image type="content" source="./media/get-started/create-event-subscription-message-analysis.png" lightbox="./media/get-started/create-event-subscription-message-analysis.png" alt-text="Screenshot that shows how to create Message Analysis event subscription properties."::: + +4. **See Message Analysis in action** ++ a. Send a message from the WhatsApp customer to the Contoso business phone number. + + b. Receive the Message Analysis event in the Event Grid Viewer that you set up in Step **3**. Details on the AdvancedMessageAnalysisCompleted event schema can be found at [Azure Communication Services - Advanced Messaging events](../../../../../articles/event-grid/communication-services-advanced-messaging-events.md#microsoftcommunicationadvancedmessageanalysiscompletedpreview-event) ++ :::image type="content" source="./media/get-started/event-grid-viewer.png" lightbox="./media/get-started/event-grid-viewer.png" alt-text="Screenshot that shows Message Analysis event being received at Event Grid Viewer."::: ++## Clean up resources ++If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../create-communication-resource.md#clean-up-resources). 
+++## Learn more about responsible AI +- [Message Analysis Transparency FAQ](../../../concepts/advanced-messaging/message-analysis/message-analysis-transparency-faq.md) +- [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai) +- [Microsoft responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources) +- [Microsoft Azure Learning courses on responsible AI](https://learn.microsoft.com/training/paths/responsible-ai-business-principles) |
communication-services | Handle Advanced Messaging Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/advanced-messaging/whatsapp/handle-advanced-messaging-events.md | The Event Grid Viewer is a sample site that allows you to view incoming events f - `Subscription` - Select the subscription that contains your Azure Communication Services resource. This specific subscription isn't required, but it will make it easier to clean up after you're done with the quickstart. - `Resource Group` - Select the resource group that contains your Azure Communication Services resource. This specific resource group isn't required, but it will make it easier to clean up after you're done with the quickstart. - `Region` - Select the region that contains your Azure Communication Services resource. This specific region isn't required, but is recommended.- - 'Site Name' - Create a name that is globally unique. This site name is used to create a domain to connect to your Event Grid Viewer. - - 'Hosting Plan Name' - Create any name to identify your hosting plan. - - 'Sku' - The sku F1 can be used for development and testing purposes. If you encounter validation errors creating your Event Grid Viewer that say there's no more capacity for the F1 plan, try selecting a different region. For more information about skus, see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/) + - `Site Name` - Create a name that is globally unique. This site name is used to create a domain to connect to your Event Grid Viewer. + - `Hosting Plan Name` - Create any name to identify your hosting plan. + - `Sku` - The sku F1 can be used for development and testing purposes. If you encounter validation errors creating your Event Grid Viewer that say there's no more capacity for the F1 plan, try selecting a different region. 
For more information about skus, see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/windows/) :::image type="content" source="./media/handle-advanced-messaging-events/custom-deployment.png" lightbox="./media/handle-advanced-messaging-events/custom-deployment.png" alt-text="Screenshot that shows Custom deployment of Events Viewer web app and properties you need to provide to successfully deploy."::: The Event Grid Viewer is a sample site that allows you to view incoming events f - Event types - Select the two Advanced messaging events from the list. :::image type="content" source="./media/handle-advanced-messaging-events/create-event-subscription.png" lightbox="./media/handle-advanced-messaging-events/create-event-subscription.png" alt-text="Screenshot that shows create event subscription properties.":::+ ++ - Optional: Select the AdvancedMessageAnalysisCompleted event, currently in public preview, to receive Message Analysis events. Instructions on how to enable Message Analysis can be found at [Enable Message Analysis with Azure OpenAI](../message-analysis/message-analysis-with-azure-openai-quickstart.md) + + [!INCLUDE [Public Preview Notice](../../../includes/public-preview-include.md)] + + :::image type="content" source="../message-analysis/media/get-started/create-event-subscription-message-analysis.png" lightbox="../message-analysis/media/get-started/create-event-subscription-message-analysis.png" alt-text="Screenshot that shows how to create Message Analysis event subscription properties."::: + - For endpoint type, select **"Webhook"** and enter the URL for the Event Grid Viewer we created in the **Setup Event Grid Viewer** step with the path `/api/updates` appended. For example: `https://{{site-name}}.azurewebsites.net/api/updates`. |
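If you later point the Event Grid subscription at your own webhook instead of the Event Grid Viewer sample, the endpoint must answer Event Grid's subscription-validation handshake before deliveries begin. A minimal sketch of that handler logic follows (the event shapes match the standard Event Grid webhook validation contract; the function name and framing are illustrative):

```python
import json

SUBSCRIPTION_VALIDATION = "Microsoft.EventGrid.SubscriptionValidationEvent"

def handle_event_grid_post(body: str) -> dict:
    """Handle a POST from Event Grid: answer the validation handshake,
    otherwise acknowledge the delivered events for processing."""
    events = json.loads(body)
    for event in events:
        if event.get("eventType") == SUBSCRIPTION_VALIDATION:
            # Echo the validation code back so Event Grid activates the subscription.
            code = event["data"]["validationCode"]
            return {"validationResponse": code}
    # Not a handshake: hand the events off to application logic.
    return {"received": len(events)}
```

Returning `{"validationResponse": <code>}` with a 200 status is what lets Event Grid mark the webhook endpoint as valid.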
container-apps | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/troubleshooting.md | The following table lists issues you might encounter while using Azure Container | Responses not as expected | The container app endpoint responds to requests, but the responses aren't as expected. | [Verify traffic is routed to the correct revision](#verify-traffic-is-routed-to-the-correct-revision)<br><br>[Verify you're using unique tags when deploying images to the container registry](/azure/container-registry/container-registry-image-tag-version) | | Missing parameters error | You receive error messages about missing parameters when you run `az containerapp` commands in the Azure CLI, or run cmdlets from the `Az.App` module in Azure PowerShell. | [Verify latest version of Azure Container Apps extension is installed](#verify-latest-version-of-azure-container-apps-extension-is-installed) | | Preview features not available | [Preview features](./whats-new.md) are not available when you run `az containerapp` commands in the Azure CLI. | [Verify Azure Container Apps extension allows preview features](#verify-azure-container-apps-extension-allows-preview-features) |+| Deleting your app or environment doesn't work | This issue is often accompanied by a message such as **provisioningState: ScheduledForDelete**. | [Manually delete the associated VNet](#manually-delete-the-vnet-being-used-by-the-azure-container-apps-environment) | ## View logs If [preview features](./whats-new.md) are not available when you run `az contain az extension add --name containerapp --upgrade --allow-preview true ``` ++## Manually delete the VNet being used by the Azure Container Apps environment ++If you receive the message **provisioningState: ScheduledForDelete**, but your environment fails to actually delete, make sure to delete your associated VNet manually. ++1. Identify the VNet being used by the environment you're trying to delete. 
Replace the \<PLACEHOLDERS\> with your values. ++ ```azurecli + az containerapp env show --resource-group <RESOURCE_GROUP> --name <ENVIRONMENT> + ``` ++ In the output, look for `infrastructureSubnetId` and note down the VNet ID. An example VNet ID is `vNet::myVNet.id`. ++2. Delete the VNet manually: ++ ```azurecli + az network vnet delete --resource-group <RESOURCE_GROUP> --name <VNET_ID> + ``` ++3. Delete the Azure Container Apps environment: ++ ```azurecli + az containerapp env delete --resource-group <RESOURCE_GROUP> --name <ENVIRONMENT> --yes + ``` + ## Next steps > [!div class="nextstepaction"] |
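Note that `infrastructureSubnetId` from step 1 is a full ARM subnet resource ID, while `az network vnet delete --name` expects the VNet name. A small helper (illustrative, assuming the standard ARM resource ID layout `/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>`) can pull the name out:

```python
# Sketch: extract the VNet name from an infrastructureSubnetId value,
# assuming the standard ARM subnet resource ID format.
def vnet_name_from_subnet_id(subnet_id: str) -> str:
    """Return the virtual network name embedded in a subnet resource ID."""
    parts = subnet_id.strip("/").split("/")
    # The VNet name is the segment following 'virtualNetworks'.
    return parts[parts.index("virtualNetworks") + 1]

subnet_id = (
    "/subscriptions/0000/resourceGroups/my-rg/providers/"
    "Microsoft.Network/virtualNetworks/my-vnet/subnets/infra-subnet"
)
print(vnet_name_from_subnet_id(subnet_id))  # my-vnet
```

You can then pass the extracted name to the `az network vnet delete` command in step 2.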
container-registry | Container Registry Tasks Base Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-base-images.md | See the following tutorials for scenarios to automate application image builds a [base-alpine]: https://hub.docker.com/_/alpine/ [base-dotnet]: https://hub.docker.com/_/microsoft-dotnet [base-node]: https://hub.docker.com/_/node/-[base-windows]: https://hub.docker.com/r/microsoft/nanoserver/ [sample-archive]: https://github.com/Azure-Samples/acr-build-helloworld-node/archive/master.zip [terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ |
cost-management-billing | Understand Cosmosdb Reservation Charges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-cosmosdb-reservation-charges.md | Title: Understand reservation discount in Azure Cosmos DB description: Learn how reservation discount is applied to provisioned throughput (RU/s) in Azure Cosmos DB. --+ Last updated 05/14/2024 |
cost-management-billing | Manage Savings Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/manage-savings-plan.md | You can't make the following types of changes after purchase: - Term length - Billing frequency +> [!NOTE] +> A billing administrator can fully manage savings plans. However, after purchase, a savings plan's directory can't be changed. + To learn more, see [Savings plan permissions](permission-view-manage.md). _Permission needed to manage a savings plan is separate from subscription permission._ ## Change the savings plan scope If you purchased/were added to a savings plan, or were assigned savings plan RBA Selectable scopes must be from Enterprise offers (MS-AZR-0017P or MS-AZR-0148P), Microsoft Customer Agreements, and Microsoft Partner Agreements. -If you aren't a billing administrator and you change from shared to single scope, you may only select a subscription where you're the owner. Only subscriptions within the same billing account/profile as the savings plan can be selected. +If you aren't a billing administrator and you change from shared to single scope, you can only select a subscription where you're the owner. Only subscriptions within the same billing account/profile as the savings plan can be selected. If all subscriptions are moved out of a management group, the scope of the savings plan is automatically changed to **Shared**. To delegate the administrator, contributor, or reader roles to a specific saving 1. Go to **Home** > **Savings plans**. 1. Select **Role assignment** from the top navigation bar. -## Cancellations, exchanges and trade-ins +## Cancellations, exchanges, and trade-ins Unlike reservations, you can't cancel or exchange savings plans. You can trade in select compute reservations for a savings plan. To learn more, visit [reservation trade-in](reservation-trade-in.md). 
## Change Billing subscription-Currently, the billing subscription used for monthly payments of a savings plan cannot be changed. +Currently, the billing subscription used for monthly payments of a savings plan can't be changed. ## Transfer a savings plan Although you can't cancel, exchange, or refund a savings plan, you can transfer it from one supported agreement to another. For more information about supported transfers, see [Azure product transfer hub](../manage/subscription-transfer.md#product-transfer-support). For Microsoft Partner Agreement partners: - Notifications are sent to the partner. ## Need help?-If you have Azure savings plan for compute questions, contact your account team or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft will only provide answers to expert support requests in English for questions about Azure savings plan for compute. +If you have Azure savings plan for compute questions, contact your account team or [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Temporarily, Microsoft only provides answers to expert support requests in English for questions about Azure savings plan for compute. ## Next steps |
cost-management-billing | Understand Azure Marketplace Charges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-azure-marketplace-charges.md | -External services are published by third-party software vendors in the Azure Marketplace. For example, SendGrid is an external service that you can purchase in Azure, but is not published by Microsoft. Some Microsoft products are sold through Azure marketplace, too. +Third-party software vendors in the Azure Marketplace publish external services. For example, SendGrid is an external service that you can purchase in Azure, but Microsoft doesn't publish it. Some Microsoft products are sold through Azure Marketplace, too. ## How external services are billed - If you have a Microsoft Customer Agreement (MCA) or Microsoft Partner Agreement (MPA), your third-party services are billed with the rest of your Azure services. [Check your billing account type](#check-billing-account-type) to see if you have access to an MCA or MPA.-- If you don't have an MCA or MPA, your external services are billed separately from your Azure services. You'll receive two invoices each billing period: one invoice for Azure services and another for Marketplaces purchases.-- Each external service has a different billing model. Some services are billed in a pay-as-you-go fashion while others have fixed monthly charges.-- You can't use monthly free credits for external services. If you're using an Azure subscription that includes [free credits](https://azure.microsoft.com/pricing/spending-limits/), they can't be applied to charges from external services. 
When you provision a new external service or resource, a warning is shown:-- :::image type="content" border="true" source="./media/understand-azure-marketplace-charges/credit-warning.png" alt-text="Screenshot showing a warning that Marketplace charges are billed separately."::: +- If you don't have an MCA or MPA, your external services are billed separately from your Azure services. You receive two invoices each billing period: one invoice for Azure services and another for Marketplaces purchases. +- Each external service has a different billing model. Some services are billed in a pay-as-you-go fashion while others have monthly charges that are fixed. +- You can't use monthly free credits for external services. If you're using an Azure subscription that includes [free credits](https://azure.microsoft.com/pricing/spending-limits/), they can't be applied to charges from external services. When you provision a new external service or resource, a warning is shown. You can choose to continue with the provisioning or cancel it. ## External spending for EA customers -EA customers can see external service spending in the [Azure portal](https://portal.azure.com). Navigate to the Usage + charges menu to view and download Azure Marketplace charges. +EA customers can see external service spending in the [Azure portal](https://portal.azure.com). To view and download Azure Marketplace charges, navigate to the **Usage + charges** menu. ## View and download invoices for external services You can view and download your Azure Marketplace invoices from the Azure portal 1. Search for **Cost Management + Billing**. 1. In the left menu, select **Invoices**. 1. In the subscription drop-down, select the subscription associated with your Marketplace services.-1. Review the **Type** column in the list of invoices. If an invoice is for a Marketplace service, the type will be **Azure Marketplace and Reservations**. +1. Review the **Type** column in the list of invoices. 
If an invoice is for a Marketplace service, the type is **Azure Marketplace and Reservations**. - :::image type="content" border="true" source="./media/understand-azure-marketplace-charges/marketplace-type-twd.png" alt-text="Screenshot showing billing invoices with Azure Marketplace and Reservations hightlighted.."::: + :::image type="content" border="true" source="./media/understand-azure-marketplace-charges/marketplace-type-twd.png" alt-text="Screenshot showing billing invoices with Azure Marketplace and Reservations highlighted."::: -1. To filter by type so that you are only looking at invoices for Azure Marketplace and Reservations, select the **Type** filter. Then select **Azure Marketplace and Reservations** in the drop-down. +1. To filter by type so that you're only looking at invoices for Azure Marketplace and Reservations, select the **Type** filter. Then select **Azure Marketplace and Reservations** in the drop-down. :::image type="content" border="true" source="./media/understand-azure-marketplace-charges/type-filter.png" alt-text="Screenshot showing the Azure Marketplace and Reservation selected in the drop-down."::: -1. Select the download icon on the right for the invoice you want to download. +1. Select the download symbol for the invoice that you want to download. :::image type="content" border="true" source="./media/understand-azure-marketplace-charges/download-icon-marketplace.png" alt-text="Screenshot showing the download symbol selected for invoice."::: If you want to cancel your external service order, delete the resource in the [A 1. Check the box next to the resource you want to delete. 1. Select **Delete** in the command bar. :::image type="content" border="true" source="./media/understand-azure-marketplace-charges/delete-button.png" alt-text="Screenshot showing the All resources page where you select Delete.":::-1. Type *'Yes'* in the confirmation window. 
:::image type="content" border="true" source="./media/understand-azure-marketplace-charges/delete-resource.PNG" alt-text="Screenshot showing the Delete resources page where you type Yes to delete the resource."::: 1. Select **Delete**. |
data-factory | Connector Deprecation Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-deprecation-plan.md | Last updated 10/11/2023 This article describes future deprecations for some connectors of Azure Data Factory. > [!NOTE]-> "Deprecated" means we intend to remove the connector from a future release. Unless they are in *Preview*, connectors remain fully supported until they are officially removed. This deprecation notification can span a few months or longer. After removal, the connector will no longer work. This notice is to allow you sufficient time to plan and update your code before the connector is removed. +> "Deprecated" means we intend to remove the connector from a future release. Unless they are in *Preview*, connectors remain fully supported until they are officially removed. This deprecation notification can span a few months or longer. After removal, the connector will no longer work. This notice is to allow you sufficient time to plan and update your code before the connector is removed. ## Legacy connectors with updated connectors or drivers available now -The following legacy connectors are deprecated, but new updated versions are available in Azure Data Factory. You can update existing data sources to use the new connectors moving forward. +The following legacy connectors or legacy driver versions will be deprecated, but new updated versions are available in Azure Data Factory. You can update existing data sources to use the new connectors moving forward. 
-- [Google Ads/Adwords](connector-google-adwords.md)-- [Google BigQuery](connector-google-bigquery-legacy.md)-- [MariaDB](connector-mariadb.md)-- [MongoDB](connector-mongodb-legacy.md)-- [MySQL](connector-mysql.md)-- [Salesforce (Service Cloud)](connector-salesforce-service-cloud-legacy.md)-- [ServiceNow](connector-servicenow.md)-- [Snowflake](connector-snowflake-legacy.md)--## Use the generic ODBC connector to replace deprecated connectors --If legacy connectors are deprecated with no updated connectors available, you can still use the generic [ODBC Connector](connector-odbc.md), which enables you to continue using these data sources with their native ODBC drivers. This can enable you to continue using them indefinitely into the future. +- [Google Ads/Adwords](connector-google-adwords.md#upgrade-the-google-ads-driver-version) +- [Google BigQuery](connector-google-bigquery.md#upgrade-the-google-bigquery-linked-service) +- [MariaDB](connector-mariadb.md#upgrade-the-mariadb-driver-version) +- [MongoDB](connector-mongodb.md#upgrade-the-mongodb-linked-service) +- [MySQL](connector-mysql.md#upgrade-the-mysql-driver-version) +- [Salesforce](connector-salesforce.md#upgrade-the-salesforce-linked-service) +- [Salesforce Service Cloud](connector-salesforce-service-cloud.md#upgrade-the-salesforce-service-cloud-linked-service) +- [ServiceNow](connector-servicenow.md#upgrade-your-servicenow-linked-service) +- [Snowflake](connector-snowflake.md#upgrade-the-snowflake-linked-service) ## Connectors to be deprecated on December 31, 2024 -The following connectors are scheduled for deprecation at the end of December 2024 and have no updated replacement connectors. You should plan to migrate to alternative solutions for linked services that use these connectors before the deprecation date. +The following connectors are scheduled for deprecation on December 31, 2024. 
You should plan to migrate to alternative solutions for linked services that use these connectors before the deprecation date. -- [Amazon Marketplace Web Service (MWS)](connector-amazon-marketplace-web-service.md) - [Azure Database for MariaDB](connector-azure-database-for-mariadb.md) - [Concur (Preview)](connector-concur.md) - [Hbase](connector-hbase.md) - [Magento (Preview)](connector-magento.md) - [Marketo (Preview)](connector-marketo.md) - [Paypal (Preview)](connector-paypal.md)-- [Phoenix (Preview)](connector-phoenix.md)-- [Zoho (Preview)](connector-zoho.md)+- [Phoenix](connector-phoenix.md) +++## Connectors deprecated ++The following connector was deprecated. ++- [Amazon Marketplace Web Service (MWS)](connector-amazon-marketplace-web-service.md) ++## Options to replace deprecated connectors ++If legacy connectors are deprecated with no updated connectors available, you can still use the +[ODBC Connector](connector-odbc.md) which enables you to continue using these data sources with their native ODBC drivers, or other alternatives. This can enable you to continue using them indefinitely into the future. ## Related content |
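As a rough illustration of the ODBC fallback described above, an Azure Data Factory linked service for the generic ODBC connector might look like the sketch below. This is a hedged example only: the names, DSN, and integration runtime reference are placeholders, not values from this changelog, and the exact properties should be checked against the ODBC connector article.

```json
{
  "name": "LegacySourceViaOdbc",
  "properties": {
    "type": "Odbc",
    "typeProperties": {
      "connectionString": "DSN=MyLegacySourceDsn;",
      "authenticationType": "Anonymous"
    },
    "connectVia": {
      "referenceName": "MySelfHostedIR",
      "type": "IntegrationRuntimeReference"
    }
  }
}
```

The native ODBC driver for the legacy source must be installed on the self-hosted integration runtime machine for a connection like this to work.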
data-factory | Connector Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md | Request 100: `Header(id->100)`<br/> *Step 1*: Input `{id}` in **Additional headers**. -*Step 2*: Set **Pagination rules** as **"Headers.{id}" : "RARNGE:0:100:10"**. +*Step 2*: Set **Pagination rules** as **"Headers.{id}" : "RANGE:0:100:10"**. :::image type="content" source="media/connector-rest/pagination-rule-example-3.png" alt-text="Screenshot showing the pagination rule to send multiple requests whose variables are in Headers."::: |
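The corrected `RANGE:0:100:10` rule above generates one request per value in the inclusive range, substituting each value into the `{id}` header. A small shell sketch of the header values it would produce (illustrative only; the header name `id` comes from the example in the article):

```shell
# Expand the pagination rule RANGE:0:100:10 into the header values
# it produces: start 0, end 100 (inclusive), step 10.
ids=$(seq 0 10 100)
for id in $ids; do
  echo "Header(id->$id)"   # one request is sent per value
done
first=$(echo "$ids" | head -n 1)
last=$(echo "$ids" | tail -n 1)
count=$(echo "$ids" | wc -l | tr -d ' ')
```

This expands to 11 requests, from `Header(id->0)` through `Header(id->100)`, matching the request sequence shown in the article.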
digital-twins | Concepts Data Explorer Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-explorer-plugin.md | For instance, if you want to represent a property with three fields for roll, pi * View the plugin documentation for the Kusto Query Language in Azure Data Explorer: [azure_digital_twins_query_request plugin](/azure/data-explorer/kusto/query/azure-digital-twins-query-request-plugin) * View sample queries using the plugin, including a walkthrough that runs the queries in an example scenario: [Azure Digital Twins query plugin for Azure Data Explorer: Sample queries and walkthrough](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/adt-adx-queries) --* Read about another strategy for analyzing historical data in Azure Digital Twins: [Integrate with Azure Time Series Insights](how-to-integrate-time-series-insights.md) |
digital-twins | Concepts Data Ingress Egress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-ingress-egress.md | Azure Digital Twins is typically used together with other services to create fle Azure Digital Twins can receive data from upstream services such as [IoT Hub](../iot-hub/about-iot-hub.md) or [Logic Apps](../logic-apps/logic-apps-overview.md), which are used to deliver telemetry and notifications. -Azure Digital Twins can also use [event routes](concepts-route-events.md) to send data to downstream services, such as [Azure Maps](../azure-maps/about-azure-maps.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md), for storage, workflow integration, analytics, and more. +Azure Digital Twins can also use [event routes](concepts-route-events.md) to send data to downstream services, such as [Azure Maps](../azure-maps/about-azure-maps.md), for storage, workflow integration, analytics, and more. ## Data ingress There are two main egress options in Azure Digital Twins. Digital twin data can ### Endpoints -To send Azure Digital Twins data to most Azure services, such as [Azure Maps](../azure-maps/about-azure-maps.md), [Time Series Insights](../time-series-insights/overview-what-is-tsi.md), or [Azure Storage](../storage/common/storage-introduction.md), start by attaching the destination service to an *endpoint*. +To send Azure Digital Twins data to most Azure services, such as [Azure Maps](../azure-maps/about-azure-maps.md) or [Azure Storage](../storage/common/storage-introduction.md), start by attaching the destination service to an *endpoint*. Endpoints can be instances of any of these Azure * [Event Hubs](../event-hubs/event-hubs-about.md) Endpoints can be instances of any of these Azure The endpoint is attached to an Azure Digital Twins instance using management APIs or the Azure portal, and can carry data along from the instance to other listening services. 
For more information about Azure Digital Twins endpoints, see [Endpoints and event routes](concepts-route-events.md). -For detailed instructions on how to send Azure Digital Twins data to Azure Maps, see [Use Azure Digital Twins to update an Azure Maps indoor map](how-to-integrate-maps.md). For detailed instructions on how to send Azure Digital Twins data to Time Series Insights, see [Integrate with Time Series Insights](how-to-integrate-time-series-insights.md). - ### Data history To send twin data to [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), set up a [data history connection](concepts-data-history.md) that automatically historizes graph updates from your Azure Digital Twins instance to an Azure Data Explorer cluster. The data history connection requires an [event hub](../event-hubs/event-hubs-about.md), but doesn't require an explicit endpoint. |
digital-twins | Concepts Route Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-route-events.md | The following diagram illustrates the flow of event data through a larger IoT so :::image type="content" source="media/concepts-route-events/routing-workflow.png" alt-text="Diagram of Azure Digital Twins routing data through endpoints to several downstream services." border="false"::: -For egress of data outside Azure Digital Twins, typical downstream targets for event routes are Time Series Insights, Azure Maps, storage, and analytics solutions. Azure Digital Twins implements *at least once* delivery for data emitted to egress services. +For egress of data outside Azure Digital Twins, typical downstream targets for event routes are Azure Maps, storage, and analytics solutions. Azure Digital Twins implements *at least once* delivery for data emitted to egress services. For routing of internal digital twin events within the same Azure Digital Twins solution, continue to the next section. |
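An event route like the ones described above is created with a type filter on the endpoint. This dry-run sketch only composes (and prints) the CLI command, mirroring the `az dt route create` call shown in the removed how-to article elsewhere in this changelog; the instance, endpoint, and route names are hypothetical placeholders:

```shell
# Compose (without running) the CLI call that routes twin-update events
# to an endpoint. Instance, endpoint, and route names are placeholders.
dt_name="my-adt-instance"
endpoint_name="my-twins-endpoint"
route_name="twin-update-route"
filter="type = 'Microsoft.DigitalTwins.Twin.Update'"
cmd="az dt route create --dt-name $dt_name --endpoint-name $endpoint_name --route-name $route_name --filter \"$filter\""
echo "$cmd"
```

The filter restricts the route to twin update events, so only those messages reach the downstream endpoint.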
digital-twins | How To Integrate Time Series Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-time-series-insights.md | - Title: Integrate with Azure Time Series Insights- -description: Learn how to set up event routes from Azure Digital Twins to Azure Time Series Insights. -- Previously updated : 01/10/2023-----# Optional fields. Don't forget to remove # if you need a field. -# -# ---# Integrate Azure Digital Twins with Azure Time Series Insights --In this article, you'll learn how to integrate Azure Digital Twins with [Azure Time Series Insights (TSI)](../time-series-insights/overview-what-is-tsi.md). --The solution described in this article uses Time Series Insights to collect and analyze historical data about your IoT solution. Azure Digital Twins is a good fit for feeding data into Time Series Insights, as it allows you to correlate multiple data streams and standardize your information before sending it to Time Series Insights. -->[!TIP] ->The simplest way to analyze historical twin data over time is to use the [data history](concepts-data-history.md) feature to connect an Azure Digital Twins instance to an Azure Data Explorer cluster, so that graph updates are automatically historized to Azure Data Explorer. You can then query this data in Azure Data Explorer using the [Azure Digital Twins query plugin for Azure Data Explorer](concepts-data-explorer-plugin.md). If you don't need to use Time Series Insights specifically, you might consider this alternative for a simpler integration experience. --## Prerequisites --Before you can set up a relationship with Time Series Insights, you'll need to set up the following resources: -* An Azure Digital Twins instance. For instructions, see [Set up an Azure Digital Twins instance and authentication](./how-to-set-up-instance-portal.md). -* A model and a twin in the Azure Digital Twins instance. 
You'll need to update twin's information a few times to see that data tracked in Time Series Insights. For instructions, see [Add a model and twin](how-to-ingest-iot-hub-data.md#add-a-model-and-twin). --> [!TIP] -> In this article, the changing digital twin values that are viewed in Time Series Insights are updated manually for simplicity. However, if you want to complete this article with live simulated data, you can set up an Azure function that updates digital twins based on IoT telemetry events from a simulated device. For instructions, follow [Ingest IoT Hub data](how-to-ingest-iot-hub-data.md), including the final steps to run the device simulator and validate that the data flow works. -> -> Later, look for another TIP to show you where to start running the device simulator and have your Azure functions update the twins automatically, instead of sending manual digital twin update commands. ---## Solution architecture --You'll be attaching Time Series Insights to Azure Digital Twins through the following path. ---## Create Event Hubs namespace --Before creating the event hubs, you'll first create an Event Hubs namespace that will receive events from your Azure Digital Twins instance. You can either use the Azure CLI instructions below, or use the Azure portal by following [Create an event hub using Azure portal](../event-hubs/event-hubs-create.md). To see what regions support Event Hubs, visit [Azure products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-hubs). --```azurecli-interactive -az eventhubs namespace create --name <name-for-your-Event-Hubs-namespace> --resource-group <your-resource-group> --location <region> -``` --> [!TIP] -> If you get an error stating `BadRequest: The specified service namespace is invalid.`, make sure the name you've chosen for your namespace meets the naming requirements described in this reference document: [Create Namespace](/rest/api/servicebus/create-namespace). 
--You'll be using this Event Hubs namespace to hold the two event hubs that are needed for this article: -- 1. *Twins hub* - Event hub to receive twin change events - 2. *Time series hub* - Event hub to stream events to Time Series Insights --The next sections will walk you through creating and configuring these hubs within your event hub namespace. --## Create twins hub --The first event hub you'll create in this article is the *twins hub*. This event hub will receive twin change events from Azure Digital Twins. To set up the twins hub, you'll complete the following steps in this section: --1. Create the twins hub -2. Create an authorization rule to control permissions to the hub -3. Create an endpoint in Azure Digital Twins that uses the authorization rule to access the hub -4. Create a route in Azure Digital Twins that sends twin updates event to the endpoint and connected twins hub -5. Get the twins hub connection string --Create the twins hub with the following CLI command. Specify a name for your twins hub. --```azurecli-interactive -az eventhubs eventhub create --name <name-for-your-twins-hub> --resource-group <your-resource-group> --namespace-name <your-Event-Hubs-namespace-from-earlier> -``` --### Create twins hub authorization rule --Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule#az-eventhubs-eventhub-authorization-rule-create) with send and receive permissions. Specify a name for the rule. --```azurecli-interactive -az eventhubs eventhub authorization-rule create --rights Listen Send --name <name-for-your-twins-hub-auth-rule> --resource-group <your-resource-group> --namespace-name <your-Event-Hubs-namespace-from-earlier> --eventhub-name <your-twins-hub-from-earlier> -``` --### Create twins hub endpoint --Create an Azure Digital Twins [endpoint](concepts-route-events.md#creating-endpoints) that links your event hub to your Azure Digital Twins instance. Specify a name for your twins hub endpoint. 
--```azurecli-interactive -az dt endpoint create eventhub --dt-name <your-Azure-Digital-Twins-instance-name> --eventhub-resource-group <your-resource-group> --eventhub-namespace <your-Event-Hubs-namespace-from-earlier> --eventhub <your-twins-hub-name-from-earlier> --eventhub-policy <your-twins-hub-auth-rule-from-earlier> --endpoint-name <name-for-your-twins-hub-endpoint> -``` --### Create twins hub event route --Azure Digital Twins instances can emit [twin update events](./concepts-event-notifications.md) whenever a twin's state is updated. In this section, you'll create an Azure Digital Twins event route that will direct these update events to the twins hub for further processing. --Create a [route](concepts-route-events.md#creating-event-routes) in Azure Digital Twins to send twin update events to your endpoint from above. The filter in this route will only allow twin update messages to be passed to your endpoint. Specify a name for the twins hub event route. For the Azure Digital Twins instance name placeholder in this command, you can use the friendly name or the host name for a boost in performance. --```azurecli-interactive -az dt route create --dt-name <your-Azure-Digital-Twins-instance-hostname-or-name> --endpoint-name <your-twins-hub-endpoint-from-earlier> --route-name <name-for-your-twins-hub-event-route> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'" -``` --### Get twins hub connection string --Get the [twins event hub connection string](../event-hubs/event-hubs-get-connection-string.md), using the authorization rules you created above for the twins hub. 
--```azurecli-interactive -az eventhubs eventhub authorization-rule keys list --resource-group <your-resource-group> --namespace-name <your-Event-Hubs-namespace-from-earlier> --eventhub-name <your-twins-hub-from-earlier> --name <your-twins-hub-auth-rule-from-earlier> -``` -Take note of the **primaryConnectionString** value from the result to configure the twins hub app setting later in this article. --## Create time series hub --The second event hub you'll create in this article is the *time series hub*. This event hub is the one that will stream the Azure Digital Twins events to Time Series Insights. To set up the time series hub, you'll complete these steps: --1. Create the time series hub -2. Create an authorization rule to control permissions to the hub -3. Get the time series hub connection string --Later, when you create the Time Series Insights instance, you'll connect this time series hub as the event source for the Time Series Insights instance. --Create the time series hub using the following command. Specify a name for the time series hub. --```azurecli-interactive - az eventhubs eventhub create --name <name-for-your-time-series-hub> --resource-group <your-resource-group> --namespace-name <your-Event-Hub-namespace-from-earlier> -``` --### Create time series hub authorization rule --Create an [authorization rule](/cli/azure/eventhubs/eventhub/authorization-rule#az-eventhubs-eventhub-authorization-rule-create) with send and receive permissions. Specify a name for the time series hub auth rule. 
--```azurecli-interactive -az eventhubs eventhub authorization-rule create --rights Listen Send --name <name-for-your-time-series-hub-auth-rule> --resource-group <your-resource-group> --namespace-name <your-Event-Hub-namespace-from-earlier> --eventhub-name <your-time-series-hub-name-from-earlier> -``` --### Get time series hub connection string --Get the [time series hub connection string](../event-hubs/event-hubs-get-connection-string.md), using the authorization rules you created above for the time series hub: --```azurecli-interactive -az eventhubs eventhub authorization-rule keys list --resource-group <your-resource-group> --namespace-name <your-Event-Hub-namespace-from-earlier> --eventhub-name <your-time-series-hub-name-from-earlier> --name <your-time-series-hub-auth-rule-from-earlier> -``` -Take note of the **primaryConnectionString** value from the result to configure the time series hub app setting later in this article. --Also, take note of the following values to use them later to create a Time Series Insights instance. -* Event hub namespace -* Time series hub name -* Time series hub auth rule --## Create a function --In this section, you'll create an Azure function that will convert twin update events from their original form as JSON Patch documents to JSON objects that only contain updated and added values from your twins. --1. First, create a new function app project. 
-- You can do this using **Visual Studio** (for instructions, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#create-an-azure-functions-project)), **Visual Studio Code** (for instructions, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#create-an-azure-functions-project)), or the **Azure CLI** (for instructions, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#create-a-local-function-project)). --2. Create a new Azure function called *ProcessDTUpdatetoTSI.cs* to update device telemetry events to the Time Series Insights. The function type will be **Event Hub trigger**. -- :::image type="content" source="media/how-to-integrate-time-series-insights/create-event-hub-trigger-function.png" alt-text="Screenshot of Visual Studio to create a new Azure function of type event hub trigger." lightbox="media/how-to-integrate-time-series-insights/create-event-hub-trigger-function.png"::: --3. Add the following packages to your project (you can use the Visual Studio NuGet package manager, or the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in a command-line tool). - * [Microsoft.Azure.WebJobs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs/) - * [Microsoft.Azure.WebJobs.Extensions.EventHubs](https://www.nuget.org/packages/Microsoft.Azure.WebJobs/) - * [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) --4. Replace the code in the *ProcessDTUpdatetoTSI.cs* file with the following code: -- :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/updateTSI.cs"::: -- Save your function code. --5. Publish the project with the *ProcessDTUpdatetoTSI.cs* function to a function app in Azure. 
-- For instructions on how to publish the function using **Visual Studio**, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure). For instructions on how to publish the function using **Visual Studio Code**, see [Create a C# function in Azure using Visual Studio Code](../azure-functions/create-first-function-vs-code-csharp.md?tabs=in-process#publish-the-project-to-azure). For instructions on how to publish the function using the **Azure CLI**, see [Create a C# function in Azure from the command line](../azure-functions/create-first-function-cli-csharp.md?tabs=azure-cli%2Cin-process#deploy-the-function-project-to-azure). ---Save the function app name to use later to configure app settings for the two event hubs. --### Configure the function app --Next, assign an access role for the function and configure the application settings so that it can access your resources. ---Next, add environment variables in the function app's settings that allow it to access the twins hub and time series hub. 
--Use the twins hub primaryConnectionString value that you saved earlier to create an app setting in your function app that contains the twins hub connection string: --```azurecli-interactive -az functionapp config appsettings set --settings "EventHubAppSetting-Twins=<your-twins-hub-primaryConnectionString>" --resource-group <your-resource-group> --name <your-function-app-name> -``` --Use the time series hub primaryConnectionString value that you saved earlier to create an app setting in your function app that contains the time series hub connection string: --```azurecli-interactive -az functionapp config appsettings set --settings "EventHubAppSetting-TSI=<your-time-series-hub-primaryConnectionString>" --resource-group <your-resource-group> --name <your-function-app-name> -``` --## Create and connect a Time Series Insights instance --In this section, you'll set up Time Series Insights instance to receive data from your time series hub. For more information about this process, see [Set up an Azure Time Series Insights Gen2 PAYG environment](../time-series-insights/tutorial-set-up-environment.md). Follow the steps below to create a Time Series Insights environment. --1. In the [Azure portal](https://portal.azure.com), search for *Time Series Insights environments*, and select the **Create** button. Choose the following options to create the time series environment. -- * **Subscription** - Choose your subscription. - - **Resource group** - Choose your resource group. - * **Environment name** - Specify a name for your time series environment. - * **Location** - Choose a location. - * **Tier** - Choose the **Gen2(L1)** pricing tier. - * **Property name** - Enter **$dtId** (Read more about selecting an ID value in [Best practices for choosing a Time Series ID](../time-series-insights/how-to-select-tsid.md)). - * **Storage account name** - Specify a storage account name. - * **Enable warm store** - Leave this field set to **Yes**. 
-- You can leave default values for other properties on this page. Select the **Next : Event Source >** button. -- :::image type="content" source="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-1.png" alt-text="Screenshot of the Azure portal showing how to create Time Series Insights environment (part 1/3)." lightbox="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-1.png"::: - - :::image type="content" source="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-2.png" alt-text="Screenshot of the Azure portal showing how to create Time Series Insights environment (part 2/3)." lightbox="media/how-to-integrate-time-series-insights/create-time-series-insights-environment-2.png"::: --2. In the **Event Source** tab, choose the following fields: -- * **Create an event source?** - Choose **Yes**. - * **Source type** - Choose **Event Hub**. - * **Name** - Specify a name for your event source. - * **Subscription** - Choose your Azure subscription. - * **Event Hub namespace** - Choose the namespace that you created earlier in this article. - * **Event Hub name** - Choose the time series hub name that you created earlier in this article. - * **Event Hub access policy name** - Choose the time series hub auth rule that you created earlier in this article. - * **Event Hub consumer group** - Select **New** and specify a name for your event hub consumer group. Then, select **Add**. - * **Property name** - Leave this field blank. - - Choose the **Review + Create** button to review all the details. Then, select the **Review + Create** button again to create the time series environment. -- :::image type="content" source="media/how-to-integrate-time-series-insights/create-tsi-environment-event-source.png" alt-text="Screenshot of the Azure portal showing how to create Time Series Insights environment (part 3/3)." 
lightbox="media/how-to-integrate-time-series-insights/create-tsi-environment-event-source.png"::: --## Send IoT data to Azure Digital Twins --To begin sending data to Time Series Insights, you'll need to start updating the digital twin properties in Azure Digital Twins with changing data values. --Use the [az dt twin update](/cli/azure/dt/twin#az-dt-twin-update) CLI command to update a property on the twin you added in the [Prerequisites](#prerequisites) section. If you used the twin creation instructions from [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md)), you can use the following command in the local CLI or the Cloud Shell bash terminal to update the temperature property on the thermostat67 twin. There's one placeholder for the Azure Digital Twins instance's host name (you can also use the instance's friendly name with a slight decrease in performance). --```azurecli-interactive -az dt twin update --dt-name <your-Azure-Digital-Twins-instance-hostname-or-name> --twin-id thermostat67 --json-patch '{"op":"replace", "path":"/Temperature", "value": 20.5}' -``` --Repeat the command at least 4 more times with different property values to create several data points that can be observed later in the Time Series Insights environment. --> [!TIP] -> If you want to complete this article with live simulated data instead of manually updating the digital twin values, first make sure you've completed the TIP from the [Prerequisites](#prerequisites) section to set up an Azure function that updates twins from a simulated device. -After that, you can run the device now to start sending simulated data and updating your digital twin through that data flow. --## Visualize your data in Time Series Insights --Now, data should be flowing into your Time Series Insights instance, ready to be analyzed. Follow the steps below to explore the data coming in. --1. 
In the [Azure portal](https://portal.azure.com), search for your time series environment name that you created earlier. In the menu options on the left, select **Overview** to see the **Time Series Insights Explorer URL**. Select the URL to view the temperature changes reflected in the Time Series Insights environment. -- :::image type="content" source="media/how-to-integrate-time-series-insights/view-environment.png" alt-text="Screenshot of the Azure portal showing the Time Series Insights explorer URL in the overview tab of the Time Series Insights environment." lightbox="media/how-to-integrate-time-series-insights/view-environment.png"::: --2. In the explorer, you'll see the twins in the Azure Digital Twins instance shown on the left. Select the twin you've edited properties for, choose the property you've changed, and select **Add**. -- :::image type="content" source="media/how-to-integrate-time-series-insights/add-data.png" alt-text="Screenshot of the Time Series Insights explorer with the steps to select thermostat67, select the property temperature, and select add highlighted." lightbox="media/how-to-integrate-time-series-insights/add-data.png"::: --3. You should now see the property changes you made reflected in the graph, as shown below. -- :::image type="content" source="media/how-to-integrate-time-series-insights/initial-data.png" alt-text="Screenshot of the Time Series Insights explorer with the initial temperature data, showing a line of random values between 68 and 85." lightbox="media/how-to-integrate-time-series-insights/initial-data.png"::: --If you allow a simulation to run for much longer, your visualization will look something like this: ---## Next steps --After establishing a data pipeline to send time series data from Azure Digital Twins to Time Series Insights, you might want to think about how to translate asset models designed for Azure Digital Twins into asset models for Time Series Insights. 
For a tutorial on this next step in the integration process, see [Model synchronization between Azure Digital Twins and Time Series Insights Gen2](../time-series-insights/tutorials-model-sync.md). |
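The manual repetition of `az dt twin update` described in the removed article ("Repeat the command at least 4 more times with different property values") could be scripted. This sketch only prints the commands and their JSON Patch payloads rather than running them; the temperature values are made up, and the `thermostat67` twin ID comes from the removed article:

```shell
# Print a series of twin-update commands for the thermostat67 twin.
# Temperature values are illustrative; nothing is sent to Azure here.
count=0
last_patch=""
for temp in 20.5 21.0 22.3 19.8 23.1; do
  last_patch="{\"op\":\"replace\", \"path\":\"/Temperature\", \"value\": $temp}"
  echo "az dt twin update --twin-id thermostat67 --json-patch '$last_patch'"
  count=$((count + 1))
done
```

Each payload is a single JSON Patch `replace` operation on the `/Temperature` property, matching the patch format used in the removed article.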
digital-twins | How To Move Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-move-regions.md | Here are some questions to consider: - Azure Functions - Azure Logic Apps - Azure Data Explorer- - Azure Time Series Insights - Azure Maps - Azure IoT Hub Device Provisioning Service * What other personal or company apps do I have that connect to my instance? The exact resources you need to edit depends on your scenario, but here are some * Event Grid, Event Hubs, or Service Bus. * Logic Apps. * Azure Data Explorer.-* Time Series Insights. * Azure Maps. * IoT Hub Device Provisioning Service. * Personal or company apps outside of Azure, such as the client app created in [Code a client app](tutorial-code.md), that connect to the instance and call Azure Digital Twins APIs. |
digital-twins | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md | In Azure Digital Twins, you define the digital entities that represent the peopl You can think of these model definitions as a specialized vocabulary to describe your business. For a building management solution, for example, you might define a model that defines a Building type, a Floor type, and an Elevator type. Models are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v3/DTDL.v3.md). In ADT, DTDL models describe types of entities according to their state properties, commands, and relationships. You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry. >[!TIP]->Version 2 of DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem. +>Version 2 of DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot/overview-iot-plug-and-play.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem. Once you've defined your data models, use them to create [digital twins](concepts-twins-graph.md) that represent each specific entity in your environment. For example, you might use the Building model definition to create several Building-type twins (Building 1, Building 2, and so on). You can also use the relationships in the model definitions to connect twins to each other, forming a conceptual graph. 
To send digital twin data to [Azure Data Explorer](/azure/data-explorer/data-exp To send digital twin data to other Azure services or ultimately outside of Azure, you can create *event routes*, which utilize [Event Hubs](../event-hubs/event-hubs-about.md), [Event Grid](../event-grid/overview.md), and [Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) to send data through custom flows. Here are some things you can do with event routes in Azure Digital Twins:-* [Connect Azure Digital Twins to Time Series Insights](how-to-integrate-time-series-insights.md) to track time series history of each twin * Store Azure Digital Twins data in [Azure Data Lake](../storage/blobs/data-lake-storage-introduction.md) * Analyze Azure Digital Twins data with [Azure Synapse Analytics](../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md), or other Microsoft data analytics tools * Integrate larger workflows with [Logic Apps](../logic-apps/logic-apps-overview.md) A possible architecture of a complete solution using Azure Digital Twins may con * One or more client apps that drive the Azure Digital Twins instance by configuring models, creating topology, and extracting insights from the twin graph. * One or more external compute resources to process events generated by Azure Digital Twins, or connected data sources such as devices. One common way to provide compute resources is via [Azure Functions](../azure-functions/functions-overview.md). * An IoT hub to provide device management and IoT data stream capabilities.-* Downstream services to provide things like workflow integration (like Logic Apps), cold storage (like Azure Data Lake), or analytics (like Azure Data Explorer or Time Series Insights). +* Downstream services to provide things like workflow integration (like Logic Apps), cold storage (like Azure Data Lake), or analytics (like Azure Data Explorer). 
The following diagram shows where Azure Digital Twins might lie in the context of a larger sample Azure IoT solution. |
education-hub | About Education Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/about-education-hub.md | Title: What is the Azure Education Hub? + Title: What is Azure Classroom? description: Learn about the purpose of the Azure Education Hub, including prerequisites. Previously updated : 12/09/2020 Last updated : 08/22/2024 -# What is the Azure Education Hub? +# What is Azure Classroom? -The Microsoft Azure [Education Hub](https://portal.azure.com/#blade/Microsoft_Azure_Education/EducationMenuBlade/quickstart) is a tool that helps academic users provision and manage cloud credit across Azure subscriptions. +Microsoft [Azure Classroom](https://portal.azure.com/#blade/Microsoft_Azure_Education/EducationMenuBlade/quickstart) is a tool that helps academic users provision and manage cloud credit across Azure subscriptions. -This tool is useful when you need to manage many cloud-based student projects. The Education Hub is also useful for research purposes when you don't yet know your Azure credit needs. +This tool is useful when you need to manage many cloud-based student projects. Azure Classroom is also useful for research purposes when you don't yet know your Azure credit needs. You can easily adjust the amount of allocated credit within each subscription or in bulk. You can also suspend subscriptions to maximize available credits. For example, you might suspend subscriptions when a semester or quarter project is complete. ## Prerequisites -To access the Education Hub, you must first receive an email notification that confirms your approval for an academic sponsorship and contains your approved credit amount. +To access Azure Classroom, you must first receive an email notification that confirms your approval for an academic sponsorship and contains your approved credit amount. After you have signed in, you can navigate to the Education Hub in the Azure portal. ## Related content |
education-hub | Get Started Education Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/get-started-education-hub.md | Title: Get started with the Azure Education Hub -description: Learn how to quickly get started with using the Azure Education Hub. + Title: Get started with Azure Classroom +description: Learn how to quickly get started with using Azure Classroom. Previously updated : 06/30/2020 Last updated : 08/22/2024 -# Getting started with the Azure Education Hub +# Getting started with Azure Classroom -Before you access the Azure Education Hub you must complete signup by clicking the **signup here** link in the invitation email. After you complete the signup you can navigate to the Azure Education Hub to begin allocating this credit to students via labs. +Before you access Azure Classroom, you must complete signup by clicking the **signup here** link in the invitation email. After you complete the signup, you can navigate to the Azure Education Hub to begin allocating this credit to students via labs. :::image type="content" source="media/get-started-education-hub/get-started-page.png" alt-text="Screenshot that shows an email message with a Get Started link to the Azure Education Hub." border="false"::: |
energy-data-services | How To Upload Large Files Using File Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-upload-large-files-using-file-service.md | Consider an Azure Data Manager for Energy resource named "medstest" with a data } ``` -The SignedURL key in the response object can be then used to upload files into Azure Blob Storage +The SignedURL key in the response object can then be used to upload files into Azure Blob Storage. The expiry time of the SignedURL for File service and Dataset service is 1 hour as per [security enhancements from OSDU](https://community.opengroup.org/osdu/platform/system/file/-/issues/78). ## Upload files with size less than 5 GB To upload files smaller than 5 GB, you can directly use a [PUT blob API](https://azure.github.io/Storage/docs/application-and-user-data/basics/azure-blob-storage-upload-apis/#put-blob) call to upload your files into Azure Blob Storage |
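The Put Blob call with the SignedURL can be sketched as follows. This is an illustrative Python sketch, not part of the service docs: the `build_put_blob_request` helper and the example file name are hypothetical, and the SignedURL is assumed to come from the File service response shown above.

```python
import urllib.request

def build_put_blob_request(signed_url: str, data: bytes) -> urllib.request.Request:
    """Build a Put Blob request; the x-ms-blob-type header is required by the API."""
    return urllib.request.Request(
        signed_url,
        data=data,
        method="PUT",
        headers={"x-ms-blob-type": "BlockBlob"},
    )

# Example (not executed here): upload a local file using the SignedURL
# returned by the File service.
# req = build_put_blob_request(signed_url, open("data.bin", "rb").read())
# urllib.request.urlopen(req)  # a 201 Created response indicates success
```

Files larger than 5 GB require the block-based upload APIs instead, as the article notes.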
event-grid | Communication Services Advanced Messaging Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-advanced-messaging-events.md | Azure Communication Services emits the following Advanced Messaging event types: |--|-| | [Microsoft.Communication.AdvancedMessageReceived](#microsoftcommunicationadvancedmessagereceived-event) | Published when Communication Services Advanced Messaging receives a message. | | [Microsoft.Communication.AdvancedMessageDeliveryStatusUpdated](#microsoftcommunicationadvancedmessagedeliverystatusupdated-event) | Published when Communication Services Advanced Messaging receives a status update for a previously sent message notification. |+| [Microsoft.Communication.AdvancedMessageAnalysisCompleted(Preview)](#microsoftcommunicationadvancedmessageanalysiscompletedpreview-event) | Published when Communication Services completes an AI analysis of a customer message. | + ## Event responses Details for the attributes specific to `Microsoft.Communication.AdvancedMessageR ``` +### Microsoft.Communication.AdvancedMessageAnalysisCompleted(Preview) event ++Published when Communication Services completes an AI analysis of a customer message. ++Example scenario: A WhatsApp user sends a WhatsApp message to a WhatsApp Business Number that is connected to an active Advanced Messaging channel in a Communication Services resource that has opted in to the Message Analysis feature. As a result, a Microsoft.Communication.AdvancedMessageAnalysisCompleted event with the analysis of the user's WhatsApp message is published. ++#### Attribute list ++Details for the attributes specific to `Microsoft.Communication.AdvancedMessageAnalysisCompleted` events. ++| Attribute | Type | Nullable | Description | +|:|:--:|:--:|-| +| channelType | `string` | ✔️ | Channel type of the channel that the message was sent on. | +| from | `string` | ✔️ | The channel ID that sent the message, formatted as a GUID. 
| +| to | `string` | ✔️ | Recipient ID that the message was sent to. | +| receivedTimestamp | `DateTimeOffset` | ✔️ | Timestamp of the message. | +| originalMessage | `string` | ✔️ | The original user message. | +| intentAnalysis | `string` | ✔️ | The intent analysis of the received user message. | +| languageDetection | [`LanguageDetection`](#languagedetection) | ✔️ | Contains the language detection of the received user message. | +| extractedKeyPhrases | `List<string>` | ✔️ | Contains the key phrases of the received user message. | ++##### LanguageDetection ++| Attribute | Type | Nullable | Description | +|:|:--:|:--:|| +| language | `string` | ✔️ | The language detected. | +| confidenceScore | `float` | ✔️ | The confidence score of the language detected. | +| translation | `string` | ✔️ | The message translation. | ++#### Examples ++##### Message Analysis Completed +```json +[{ + "id": "df1c2d92-6155-4ad7-a865-cb8497106c52", + "topic": "/subscriptions/{subscription-id}/resourcegroups/{resourcegroup-name}/providers/microsoft.communication/communicationservices/acsxplatmsg-test", + "subject": "advancedMessage/sender/{sender@id}/recipient/00000000-0000-0000-0000-000000000000", + "data": { + "originalMessage": "Hello, could u help me order some flowers for Mother’s Day?", + "channelType": "whatsapp", + "languageDetection": { + "language": "English", + "confidenceScore": 0.99 + }, + "intentAnalysis": "Order request: The customer is contacting customer service to request assistance with ordering flowers for Mother's Day.", + "extractedKeyPhrases": [ + "order", + "flowers", + "Mother's Day" + ], + "from": "{sender@id}", + "to": "00000000-0000-0000-0000-000000000000", + "receivedTimestamp": "2024-07-05T19:10:35.28+00:00" + }, + "eventType": "Microsoft.Communication.AdvancedMessageAnalysisCompleted", + "dataVersion": "1.0", + "metadataVersion": "1", + "eventTime": "2024-07-05T19:10:35.2806524Z" +}] +``` + ## Quickstart For a quickstart that shows how to subscribe for 
Advanced Messaging events using web hooks, see [Quickstart: Handle Advanced Messaging events](../communication-services/quickstarts/advanced-messaging/whatsapp/handle-advanced-messaging-events.md). |
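A handler for this event can pick the analysis fields out of the payload like any other Event Grid event. Below is a minimal Python sketch: the `summarize_analysis` helper is hypothetical, the field names follow the attribute tables above, and the sample is a trimmed copy of the example payload.

```python
import json

# Trimmed copy of the sample AdvancedMessageAnalysisCompleted payload above.
SAMPLE = """[{
  "eventType": "Microsoft.Communication.AdvancedMessageAnalysisCompleted",
  "data": {
    "originalMessage": "Hello, could u help me order some flowers for Mother's Day?",
    "channelType": "whatsapp",
    "languageDetection": {"language": "English", "confidenceScore": 0.99},
    "intentAnalysis": "Order request: the customer wants help ordering flowers.",
    "extractedKeyPhrases": ["order", "flowers", "Mother's Day"]
  }
}]"""

def summarize_analysis(event: dict) -> dict:
    """Flatten the analysis fields of one event into a plain dict."""
    data = event["data"]
    return {
        "language": data["languageDetection"]["language"],
        "confidence": data["languageDetection"]["confidenceScore"],
        "intent": data["intentAnalysis"],
        "key_phrases": data["extractedKeyPhrases"],
    }

summaries = [
    summarize_analysis(e)
    for e in json.loads(SAMPLE)
    if e["eventType"] == "Microsoft.Communication.AdvancedMessageAnalysisCompleted"
]
```

Filtering on `eventType` first matters because an Event Grid delivery batch can mix event types from the same topic.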
event-grid | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md | Event Grid offers a rich mixture of features. These features include: - **Publish-subscribe messaging model** - Communicate efficiently using one-to-many, many-to-one, and one-to-one messaging patterns. - **[Built-in cloud integration](mqtt-routing.md)** - Route your MQTT messages to Azure services or custom webhooks for further processing. - **Flexible and fine-grained [access control model](mqtt-access-control.md)** - Group clients and topics to simplify access control management, and use the variable support in topic templates for a fine-grained access control.-- **MQTT broker authentication methods** - [X.509 certificate authentication](mqtt-client-authentication.md) is the industry authentication standard in IoT devices ,[Microsoft Entra IDauthentication](mqtt-client-microsoft-entra-token-and-rbac.md) is Azure's authentication standard for applications and [OAuth 2.0 (JSON Web Token) authentication](oauth-json-web-token-authentication.md) provides a lightweight, secure, and flexible option for MQTT clients that are not provisioned in Azure.+- **MQTT broker authentication methods** - [X.509 certificate authentication](mqtt-client-authentication.md) is the industry authentication standard in IoT devices, [Microsoft Entra ID authentication](mqtt-client-microsoft-entra-token-and-rbac.md) is Azure's authentication standard for applications, and [OAuth 2.0 (JSON Web Token) authentication](oauth-json-web-token-authentication.md) provides a lightweight, secure, and flexible option for MQTT clients that are not provisioned in Azure. - **TLS 1.2 and TLS 1.3 support** - Secure your client communication using robust encryption protocols. - **Multi-session support** - Connect your applications with multiple active sessions to ensure reliability and scalability. - **MQTT over WebSockets** - Enable connectivity for clients in firewall-restricted environments. 
Event Grid offers a rich mixture of features. These features include: - **High throughput** - Build high-volume integrated solutions with Event Grid. - **Custom domain names** - Allows users to assign their own domain names to Event Grid namespace's HTTP endpoints, enhancing security and simplifying client configuration. +> [!NOTE] +> **Regarding TLS 1.0 / 1.1 deprecation**: For system topics, you need to take action only for the event delivery to webhook destinations. If the destination supports TLS 1.2, the event delivery happens using 1.2. If the destination doesn't support TLS 1.2, the event delivery automatically falls back to 1.0 and 1.1. Post Oct 31st 2024, event delivery using 1.0 and 1.1 won't be supported. Ensure that your webhook destinations support TLS 1.2. One easy way to check for TLS 1.2 support is to use [Qualys SSL Labs](https://www.ssllabs.com/ssltest/). If the report shows that TLS 1.2 is supported, no action is required. For more information, see the following blog post: [Retirement: Upcoming TLS changes for Azure Event Grid](https://azure.microsoft.com/updates/v2/TLS-changes-for-Azure-Event-Grid) + ## Use cases Event Grid supports the following use cases: |
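As an illustration of the TLS note above: a webhook destination you control must accept TLS 1.2 handshakes. The sketch below (Python, not Azure-specific, and only one of several ways to enforce this) pins a server-side TLS context to 1.2 or later so that older handshakes are rejected.

```python
import ssl

# Server-side context for a webhook endpoint: refuse TLS 1.0/1.1 handshakes
# so that deliveries negotiate TLS 1.2 or later.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

The context would then be passed to whatever server wraps the webhook socket; the Qualys SSL Labs test mentioned above verifies the result from the outside.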
event-grid | Sdk Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/sdk-overview.md | Title: Azure Event Grid SDKs -description: Describes the SDKs for Azure Event Grid. These SDKs provide management, publishing and consumption. +description: Describes the SDKs for Azure Event Grid. These SDKs provide management, publishing, and consumption of events in Azure Event Grid. Previously updated : 07/06/2023 Last updated : 08/22/2024 ms.devlang: csharp # ms.devlang: csharp, golang, java, javascript, python ms.devlang: csharp Event Grid provides SDKs that enable you to programmatically manage your resources and post events. +> [!NOTE] +> **Regarding TLS 1.0 / 1.1 deprecation**: For system topics, you need to take action only for the event delivery to webhook destinations. If the destination supports TLS 1.2, the event delivery happens using 1.2. If the destination doesn't support TLS 1.2, the event delivery automatically falls back to 1.0 and 1.1. Post Oct 31st 2024, event delivery using 1.0 and 1.1 won't be supported. Ensure that your webhook destinations support TLS 1.2. One easy way to check for TLS 1.2 support is to use [Qualys SSL Labs](https://www.ssllabs.com/ssltest/). If the report shows that TLS 1.2 is supported, no action is required. For more information, see the following blog post: [Retirement: Upcoming TLS changes for Azure Event Grid](https://azure.microsoft.com/updates/v2/TLS-changes-for-Azure-Event-Grid) + ## Management SDKs The management SDKs enable you to create, update, and delete Event Grid topics and subscriptions. Currently, the following SDKs are available: The management SDKs enable you to create, update, and delete Event Grid topics a | SDK | Package | Reference documentation | Samples | | -- | - | -- | - | | REST API | | [REST reference](/rest/api/eventgrid/controlplane-preview/ca-certificates) | |-| .NET | [Azure.ResourceManager.EventGrid](https://www.nuget.org/packages/Azure.ResourceManager.EventGrid/). 
The beta package has the latest `Namespaces` API. | .NET reference: [Preview](/dotnet/api/overview/azure/resourcemanager.eventgrid-readme?view=azure-dotnet-preview&preserve-view=true), [GA](/dotnet/api/overview/azure/event-grid) | [.NET samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.ResourceManager.EventGrid/samples) | -| Java | [azure-resourcemanager-eventgrid](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-eventgrid/). The beta package has the latest `Namespaces` API. | Java reference: [Preview](/java/api/overview/azure/resourcemanager-eventgrid-readme?view=azure-java-preview&preserve-view=true), [GA](/java/api/overview/azure/event-grid) | [Java samples](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/eventgrid/azure-resourcemanager-eventgrid/src/samples) | -| JavaScript | [@azure/arm-eventgrid](https://www.npmjs.com/package/@azure/arm-eventgrid). The beta package has the latest `Namespaces` API. | JavaScript reference: [Preview](/javascript/api/overview/azure/arm-eventgrid-readme?view=azure-node-preview&preserve-view=true), [GA](/javascript/api/overview/azure/event-grid) | [JavaScript and TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/arm-eventgrid) | -| Python | [azure-mgmt-eventgrid](https://pypi.org/project/azure-mgmt-eventgrid/). The beta package has the latest `Namespaces` API. | Python reference: [Preview](/python/api/azure-mgmt-eventgrid/?view=azure-python-preview&preserve-view=true), [GA](/python/api/overview/azure/event-grid) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-mgmt-eventgrid/generated_samples) +| .NET | [`Azure.ResourceManager.EventGrid`](https://www.nuget.org/packages/Azure.ResourceManager.EventGrid/). The beta package has the latest `Namespaces` API. 
| .NET reference: [Preview](/dotnet/api/overview/azure/resourcemanager.eventgrid-readme?view=azure-dotnet-preview&preserve-view=true), [GA](/dotnet/api/overview/azure/event-grid) | [.NET samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.ResourceManager.EventGrid/samples) | +| Java | [`azure-resourcemanager-eventgrid`](https://central.sonatype.com/artifact/com.azure.resourcemanager/azure-resourcemanager-eventgrid/). The beta package has the latest `Namespaces` API. | Java reference: [Preview](/java/api/overview/azure/resourcemanager-eventgrid-readme?view=azure-java-preview&preserve-view=true), [GA](/java/api/overview/azure/event-grid) | [Java samples](https://github.com/azure/azure-sdk-for-java/tree/main/sdk/eventgrid/azure-resourcemanager-eventgrid/src/samples) | +| JavaScript | [`@azure/arm-eventgrid`](https://www.npmjs.com/package/@azure/arm-eventgrid). The beta package has the latest `Namespaces` API. | JavaScript reference: [Preview](/javascript/api/overview/azure/arm-eventgrid-readme?view=azure-node-preview&preserve-view=true), [GA](/javascript/api/overview/azure/event-grid) | [JavaScript and TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/arm-eventgrid) | +| Python | [`azure-mgmt-eventgrid`](https://pypi.org/project/azure-mgmt-eventgrid/). The beta package has the latest `Namespaces` API. 
| Python reference: [Preview](/python/api/azure-mgmt-eventgrid/?view=azure-python-preview&preserve-view=true), [GA](/python/api/overview/azure/event-grid) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-mgmt-eventgrid/generated_samples) | Go | [Azure SDK for Go](https://github.com/Azure/azure-sdk-for-go) | | [Go samples](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/main/sdk/resourcemanager/eventgrid) | The data plane SDKs enable you to post events to topics by taking care of authen | Programming language | Package | Reference documentation | Samples | | -- | - | - | -- | | REST API | | [REST reference](/rest/api/eventgrid/dataplane-preview/publish-cloud-events) |-| .NET | [Azure.Messaging.EventGrid](https://www.nuget.org/packages/Azure.Messaging.EventGrid/). The beta package has the latest `Namespaces` API. | [.NET reference](/dotnet/api/overview/azure/messaging.eventgrid-readme?view=azure-dotnet-preview&preserve-view=true) | [.NET samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.Messaging.EventGrid/samples) | -|Java | [azure-messaging-eventgrid](https://central.sonatype.com/artifact/com.azure/azure-messaging-eventgrid/). The beta package has the latest `Namespaces` API. | [Java reference](/java/api/overview/azure/messaging-eventgrid-readme?view=azure-java-preview&preserve-view=true) | [Java samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/eventgrid/azure-messaging-eventgrid/src/samples/java) | -| JavaScript | [@azure/eventgrid](https://www.npmjs.com/package/@azure/eventgrid). The beta package has the latest `Namespaces` API. | [JavaScript reference](/javascript/api/overview/azure/eventgrid-readme?view=azure-node-preview&preserve-view=true) | [JavaScript and TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/eventgrid) | -| Python | [azure-eventgrid](https://pypi.org/project/azure-eventgrid/). 
The beta package has the latest `Namespaces` API. | [Python reference](/python/api/overview/azure/eventgrid-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-eventgrid/samples) | +| .NET | [`Azure.Messaging.EventGrid`](https://www.nuget.org/packages/Azure.Messaging.EventGrid/). The beta package has the latest `Namespaces` API. | [.NET reference](/dotnet/api/overview/azure/messaging.eventgrid-readme?view=azure-dotnet-preview&preserve-view=true) | [.NET samples](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.Messaging.EventGrid/samples) | +|Java | [`azure-messaging-eventgrid`](https://central.sonatype.com/artifact/com.azure/azure-messaging-eventgrid/). The beta package has the latest `Namespaces` API. | [Java reference](/java/api/overview/azure/messaging-eventgrid-readme?view=azure-java-preview&preserve-view=true) | [Java samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/eventgrid/azure-messaging-eventgrid/src/samples/java) | +| JavaScript | [`@azure/eventgrid`](https://www.npmjs.com/package/@azure/eventgrid). The beta package has the latest `Namespaces` API. | [JavaScript reference](/javascript/api/overview/azure/eventgrid-readme?view=azure-node-preview&preserve-view=true) | [JavaScript and TypeScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/eventgrid) | +| Python | [`azure-eventgrid`](https://pypi.org/project/azure-eventgrid/). The beta package has the latest `Namespaces` API. | [Python reference](/python/api/overview/azure/eventgrid-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-eventgrid/samples) | | Go | [Azure SDK for Go](https://github.com/Azure/azure-sdk-for-go) | | | |
event-grid | Send Events Webhooks Private Destinations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/send-events-webhooks-private-destinations.md | + + Title: Send events to webhooks hosted in private destinations +description: Shows how to send events to webhooks in private destinations using Azure Event Grid and Azure Relay. + Last updated : 08/23/2024+# Customer intent: As a developer, I want to know how to send events to webhooks hosted in private destinations such as on-premises servers or virtual machines. +++# Send events to webhooks hosted in private destinations using Azure Event Grid and Azure Relay +In this article, you learn how to receive events from Azure Event Grid to webhooks hosted in private destinations, such as on-premises servers or virtual machines, using Azure Relay. ++Azure Relay is a service that enables you to securely expose services that reside within a corporate enterprise network to the public cloud, without having to open a firewall connection or require intrusive changes to a corporate network infrastructure. ++Azure Relay supports hybrid connections, which are a secure, open-protocol evolution of the existing Azure Relay features that can be implemented on any platform and in any language that has a basic WebSocket capability, which includes the option to accept relayed traffic initiated from Azure Event Grid. See [Azure Relay Hybrid Connections protocol guide - Azure Relay](../azure-relay/relay-hybrid-connections-protocol.md). ++## Receive events from Event Grid basic resources to webhooks in private destinations +This section gives you the high-level steps for receiving events from Event Grid basic resources to webhooks hosted in private destinations using Azure Relay. ++1. Create an Azure Relay resource. You can use the Azure portal, Azure CLI, or Azure Resource Manager templates to create a Relay namespace and a hybrid connection. 
For more information, see [Create Azure Relay namespaces and hybrid connections using Azure portal](../azure-relay/relay-hybrid-connections-http-requests-dotnet-get-started.md). ++ > [!NOTE] + > Ensure you have enabled the option: **client authorization required**. This option ensures that only authorized clients can connect to your hybrid connection endpoint. You can use the Azure portal or Azure CLI to enable the client authorization and manage the client authorization rules. For more information, see [Secure Azure Relay Hybrid Connections](../azure-relay/relay-authentication-and-authorization.md). +1. Implement the Azure Relay hybrid connection listener. ++ - **Option 1**: You can use the Azure Relay SDK for .NET to programmatically create a hybrid connection listener and handle the incoming requests. For more information, see [Azure Relay Hybrid Connections - HTTP requests in .NET](../azure-relay/relay-hybrid-connections-http-requests-dotnet-get-started.md). + - **Option 2**: Azure Relay Bridge. You can use Azure Relay Bridge, a cross-platform command line tool that can create VPN-less TCP tunnels from and to anywhere. You can run the Azure Relay Bridge as a Docker container or as a standalone executable. For more information, see [Azure Relay Bridge](https://github.com/Azure/azure-relay-bridge). +1. Ensure your hybrid connection listener is connected. You can use the following Azure CLI command to list the hybrid connections in your namespace and check their status. ++ ```azurecli + az relay hyco list --resource-group [resource-group-name] --namespace-name [namespace-name] + ``` ++ You should see a "listenerCount" attribute in the properties of your hybrid connection. +1. Create an Azure Event Grid system topic. You can use the Azure portal, Azure CLI, or Azure Resource Manager templates to create a system topic that corresponds to an Azure service that has events, such as Storage accounts, event hubs, or Azure subscriptions. 
For more information, see [System topics in Azure Event Grid](create-view-manage-system-topics.md). +1. Create an event subscription to the system topic. You can use the Azure portal, Azure CLI, or Azure Resource Manager templates to create an event subscription that defines the filter criteria and the destination endpoint for the events. In this case, select the **Azure Relay Hybrid Connection** as the endpoint type and provide the connection string of your hybrid connection. For more information, see [Azure Relay Hybrid Connection as an event handler](handler-relay-hybrid-connections.md). +++## Considerations when using webhooks to receive events from Azure Event Grid +Ensure you have the Cloud Events validation handshake implemented. Here's the sample code in C# that demonstrates how to validate the Cloud Event schema handshake required during the subscription creation. You can use this sample code as a reference to implement your own validation handshake logic in the language of your preference. ++```csharp +if (context.Request.HttpMethod == "OPTIONS" && context.Request.Url.PathAndQuery == _settings!.relayWebhookPath) +{ + context.Response.StatusCode = HttpStatusCode.OK; + context.Response.StatusDescription = "OK"; ++ var origin = context.Request.Headers["Webhook-Request-Origin"]; + context.Response.Headers.Add("Webhook-Allowed-Origin", origin); + using (var sw = new StreamWriter(context.Response.OutputStream)) + { + sw.WriteLine("OK"); + } ++ context.Response.Close(); +} +``` ++If you want to forward events from the Azure Relay Bridge to your local webhook you can use the following command: ++```bash +.\azbridge.exe -x "AzureRelayConnectionString" -H [HybridConnectionName]:[http/https]/localhost:[ApplicationPort] -v +``` ++## Related content ++- [Azure Relay Hybrid Connection as an event handler](handler-relay-hybrid-connections.md) +- [Azure Relay Bridge](https://github.com/Azure/azure-relay-bridge) |
frontdoor | Front Door Route Matching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-route-matching.md | This section focuses on how Front Door matches to a routing rule. The basic conc Azure Front Door uses the following logic to match frontend hosts: 1. Determine if there are any routes with an exact match on the frontend host.-2. If there are no exact frontend hosts match, the request get rejected and a 400: Bad Request error gets sent. +2. If there's no exact frontend host match, the request gets rejected and a 404: Not Found error gets sent. The following tables show three different routing rules with frontend host and paths: The following table shows the matching results for the above routing rules: |--|--| | foo.contoso.com | A, B | | www\.fabrikam.com | C |-| images.fabrikam.com | Error 400: Bad Request | +| images.fabrikam.com | Error 404: Not Found | | foo.adventure-works.com | C |-| contoso.com | Error 400: Bad Request | -| www\.adventure-works.com | Error 400: Bad Request | -| www\.northwindtraders.com | Error 400: Bad Request | +| contoso.com | Error 404: Not Found | +| www\.adventure-works.com | Error 404: Not Found | +| www\.northwindtraders.com | Error 404: Not Found | ### Path matching After Front Door determines the specific frontend host and filters for possible 1. Determine if there are any routing rules with an exact match to the request path. 1. If there isn't an exact matching path, then Front Door looks for a routing rule with a wildcard path that matches.-1. If there are no routing rules found with a matching path, then request gets rejected and a 400: Bad Request error gets sent. +1. If there are no routing rules found with a matching path, then the request gets rejected and a 404: Not Found error gets sent. 
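The host- and path-matching order described above can be sketched in a few lines of Python. The `match_route` function and the route-table shape are hypothetical, used only to illustrate the sequence of checks (exact frontend host, then exact path, then the longest matching wildcard path); route names mirror the article's examples.

```python
def match_route(routes, host, path):
    """Return the name of the matching route, or None (the request is rejected)."""
    # Step 1: keep only routes with an exact frontend-host match.
    candidates = [r for r in routes if host in r["hosts"]]
    if not candidates:
        return None  # no host match -> request rejected

    # Step 2: prefer a routing rule with an exact path match.
    for r in candidates:
        if path in r["paths"]:
            return r["name"]

    # Step 3: fall back to the longest matching wildcard path (for example "/users/*").
    best_name, best_len = None, -1
    for r in candidates:
        for p in r["paths"]:
            if p.endswith("*") and path.startswith(p[:-1]) and len(p) > best_len:
                best_name, best_len = r["name"], len(p)
    return best_name  # None -> no path match, request rejected

routes = [
    {"name": "A", "hosts": ["foo.contoso.com"], "paths": ["/", "/users"]},
    {"name": "B", "hosts": ["foo.contoso.com"], "paths": ["/*", "/users/*"]},
]
```

With this table, `/users` matches route A exactly, while `/users/1` falls through to route B via the longer `/users/*` wildcard.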
::: zone pivot="front-door-standard-premium" The following table shows which routing rule the incoming request gets matched t > > | Incoming request | Matched Route | > |||-> | profile.domain.com/other | None. Error 400: Bad Request | +> | profile.domain.com/other | None. Error 404: Not Found | ### Routing decision |
governance | Assign Policy Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-rest-api.md | This guide uses REST API to create a policy assignment and to identify non-compl ## Review the REST API syntax -There are two elements to run REST API commands: the REST API URI and the request body. For information, go to [Policy Assignments - Create](/rest/api/policyauthorization/policy-assignments/create). +There are two elements to run REST API commands: the REST API URI and the request body. For information, go to [Policy Assignments - Create](/rest/api/policy/policy-assignments/create). The following example shows the REST API URI syntax to create a policy definition. az rest --method put --uri https://management.azure.com/subscriptions/{subscript In PowerShell, the backtick (``` ` ```) is needed to escape the `at sign` (`@`) to specify a filename. In a Bash shell like Git Bash, omit the backtick. -For information, go to [Policy Assignments - Create](/rest/api/policyauthorization/policy-assignments/create). +For information, go to [Policy Assignments - Create](/rest/api/policy/policy-assignments/create). ## Identify non-compliant resources Your results resemble the following example: } ``` -For more information, go to [Policy States - List Query Results For Resource Group](/rest/api/policyinsights/policy-states/list-query-results-for-resource-group). +For more information, go to [Policy States - List Query Results For Resource Group](/rest/api/policy/policy-states/list-query-results-for-resource-group). ## Clean up resources az rest --method get --uri https://management.azure.com/subscriptions/{subscript The policy assignment 'audit-vm-managed-disks' is not found. ``` -For more information, go to [Policy Assignments - Delete](/rest/api/policyauthorization/policy-assignments/delete) and [Policy Assignments - Get](/rest/api/policyauthorization/policy-assignments/get). 
+For more information, go to [Policy Assignments - Delete](/rest/api/policy/policy-assignments/delete) and [Policy Assignments - Get](/rest/api/policy/policy-assignments/get). ## Next steps |
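The Policy Assignments operations referenced in the row above are plain PUT/DELETE/GET calls against Azure Resource Manager. As a rough illustration only (the subscription ID, assignment name, api-version, and definition ID below are placeholders, not values taken from the article), the create URI and request body can be assembled like this:

```python
# Hypothetical sketch of the URI and body for Policy Assignments - Create.
# All IDs and the api-version are placeholders; check the REST reference
# for the current api-version before use.
import json

def assignment_create_uri(scope: str, name: str, api_version: str) -> str:
    # PUT {scope}/providers/Microsoft.Authorization/policyAssignments/{name}
    return (f"https://management.azure.com/{scope}"
            f"/providers/Microsoft.Authorization/policyAssignments/{name}"
            f"?api-version={api_version}")

scope = "subscriptions/00000000-0000-0000-0000-000000000000"
uri = assignment_create_uri(scope, "audit-vm-managed-disks", "2023-04-01")
body = json.dumps({"properties": {
    "displayName": "Audit VMs that do not use managed disks",
    # Placeholder definition ID; substitute the real built-in definition.
    "policyDefinitionId": "/providers/Microsoft.Authorization/"
                          "policyDefinitions/00000000-0000-0000-0000-000000000000",
}})
print(uri)
```

The same URI shape (with `--method delete` or `--method get` and no body) serves the delete and get operations shown in the row.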
governance | Attestation Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/attestation-structure.md | -> Attestations can be created and managed only through Azure Policy [Azure Resource Manager (ARM) API](/rest/api/policyinsights/attestations), [PowerShell](/powershell/module/az.policyinsights) or [Azure CLI](/cli/azure/policy/attestation). +> Attestations can be created and managed only through Azure Policy [Azure Resource Manager (ARM) API](/rest/api/policy/attestations), [PowerShell](/powershell/module/az.policyinsights) or [Azure CLI](/cli/azure/policy/attestation). ## Best practices |
governance | Initiative Definition Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/initiative-definition-structure.md | This information is: - Displayed in the Azure portal on the overview of a **control** on a Regulatory Compliance initiative. - Available via REST API. See the `Microsoft.PolicyInsights` resource provider and the- [policyMetadata operation group](/rest/api/policyinsights/policy-metadata/get-resource). + [policyMetadata operation group](/rest/api/policy/policy-metadata/get-resource). - Available via Azure CLI. See the [az policy metadata](/cli/azure/policy/metadata) command. > [!IMPORTANT] |
governance | Policy For Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md | aligns with how the add-on was installed: - Component-level [exemptions](./exemption-structure.md) aren't supported for [Resource Provider modes](./definition-structure.md#resource-provider-modes). Parameters support is available in Azure Policy definitions to exclude and include particular namespaces. - Using the `metadata.gatekeeper.sh/requires-sync-data` annotation in a constraint template to configure the [replication of data](https://open-policy-agent.github.io/gatekeeper/website/docs/sync) from your cluster into the OPA cache is currently only allowed for built-in policies. The reason is because it can dramatically increase the Gatekeeper pods resource usage if not used carefully. +### Configuring the Gatekeeper Config +Changing the Gatekeeper config is unsupported, as it contains critical security settings. Edits to the config will be reconciled. ++### Using data.inventory in constraint templates +Currently, several built-in policies make use of [data replication](https://open-policy-agent.github.io/gatekeeper/website/docs/sync), which enables users to sync existing on-cluster resources to the OPA cache and reference them during evaluation of an AdmissionReview request. Data replication policies can be differentiated by the presence of `data.inventory` in the Rego, as well as the presence of the `metadata.gatekeeper.sh/requires-sync-data` annotation, which informs the Azure Policy addon what resources need to be cached for policy evaluation to work properly. (Note that this differs from standalone Gatekeeper, where this annotation is descriptive, not prescriptive.) ++Data replication is currently blocked for use in custom policy definitions, because replicating resources with high instance counts can dramatically increase the Gatekeeper pods' resource usage if not used carefully. 
You will see a `ConstraintTemplateInstallFailed` error when attempting to create a custom policy definition containing a constraint template with this annotation. +> Removing the annotation may appear to mitigate the error you see, but then the policy addon will not sync any required resources for that constraint template into the cache. Thus, your policies will be evaluated against an empty `data.inventory` (assuming that no built-in is assigned that replicates the requisite resources). This will lead to misleading compliance results. As noted [previously](#configuring-the-gatekeeper-config), manually editing the config to cache the required resources is also not permitted. + The following limitations apply only to the Azure Policy Add-on for AKS: - [AKS Pod security policy](/azure/aks/use-pod-security-policies) and the Azure Policy Add-on for AKS can't both be enabled. For more information, see [AKS pod security limitation](/azure/aks/use-azure-policy). - Namespaces automatically excluded by Azure Policy Add-on for evaluation: kube-system and gatekeeper-system. |
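The two markers of data replication described in the row above (`data.inventory` in the Rego, or the `requires-sync-data` annotation) are easy to pre-check before submitting a custom definition. This is a rough sketch only; the template shape is a simplified stand-in for Gatekeeper's ConstraintTemplate schema, not an Azure Policy API:

```python
# Hypothetical pre-check: does a constraint template rely on replicated
# data? Azure Policy blocks such templates in custom policy definitions.
SYNC_ANNOTATION = "metadata.gatekeeper.sh/requires-sync-data"

def uses_data_replication(template: dict) -> bool:
    annotations = template.get("metadata", {}).get("annotations", {})
    if SYNC_ANNOTATION in annotations:
        return True
    # Fall back to scanning the Rego source for data.inventory references.
    for target in template.get("spec", {}).get("targets", []):
        if "data.inventory" in target.get("rego", ""):
            return True
    return False

template = {
    "metadata": {"annotations": {SYNC_ANNOTATION: "[...]"}},
    "spec": {"targets": [{"rego": 'violation[{"msg": msg}] { ... }'}]},
}
print(uses_data_replication(template))  # True
```

A template that trips this check would produce the `ConstraintTemplateInstallFailed` error mentioned above if submitted as a custom definition.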
governance | Author Policies For Arrays | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/author-policies-for-arrays.md | To use this string with each SDK, use the following commands: parameter **params** - Azure PowerShell: Cmdlet [New-AzPolicyAssignment](/powershell/module/az.resources/New-Azpolicyassignment) with parameter **PolicyParameter**-- REST API: In the _PUT_ [create](/rest/api/policyauthorization/policy-assignments/create) operation as part of+- REST API: In the _PUT_ [create](/rest/api/policy/policy-assignments/create) operation as part of the Request Body as the value of the **properties.parameters** property ## Using arrays in conditions |
governance | Get Compliance Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md | Use ARMClient or a similar tool to handle authentication to Azure for the REST A With the REST API, summarization can be performed by container, definition, or assignment. Here's an example of summarization at the subscription level using Azure Policy Insight's [Summarize For-Subscription](/rest/api/policyinsights/policy-states/summarize-for-subscription): +Subscription](/rest/api/policy/policy-states/summarize-for-subscription): ```http POST https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/summarize?api-version=2019-10-01 Your results resemble the following example: ``` For more information about querying policy events, see the-[Azure Policy Events](/rest/api/policyinsights/policy-events) reference article. +[Azure Policy Events](/rest/api/policy/policy-events) reference article. ### Azure CLI |
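The summarize call shown in the row above is a POST with no body against the Policy Insights provider. As a small sketch (the subscription ID is a placeholder; the api-version is the one shown in the article's example), the URI can be built like this:

```python
# Sketch of the Summarize For Subscription URI from the example above.
# The subscription ID is a placeholder.
def summarize_uri(subscription_id: str, api_version: str = "2019-10-01") -> str:
    return (f"https://management.azure.com/subscriptions/{subscription_id}"
            "/providers/Microsoft.PolicyInsights/policyStates/latest/summarize"
            f"?api-version={api_version}")

print(summarize_uri("00000000-0000-0000-0000-000000000000"))
```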
governance | Migrate From Automanage Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/migrate-from-automanage-best-practices.md | next steps: <!-- Reference link definitions --> [01]: ../overview.md [02]: ../../../backup/backup-azure-arm-userestapi-createorupdatepolicy.md-[03]: ../../../virtual-machines/extensions/iaas-antimalware-windows.md +[03]: /azure/virtual-machines/extensions/iaas-antimalware-windows [04]: https://learn.microsoft.com/windows-server/manage/windows-admin-center/azure/manage-vm [05]: ../../../update-manager/migration-overview.md [06]: https://ms.portal.azure.com/ [07]: ../concepts/definition-structure-basics.md [08]: ../assign-policy-portal.md [09]: https://azure.microsoft.com/pricing/details/azure-automanage/-- |
governance | Programmatically Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/programmatically-create.md | Use the following procedure to create a policy definition. with the ID of your [management group](../../management-groups/overview.md). For more information about the structure of the query, see- [Azure Policy Definitions - Create or Update](/rest/api/policyauthorization/policy-definitions/create-or-update) + [Azure Policy Definitions - Create or Update](/rest/api/policy/policy-definitions/create-or-update) and- [Policy Definitions - Create or Update At Management Group](/rest/api/policyauthorization/policy-definitions/create-or-update-at-management-group). + [Policy Definitions - Create or Update At Management Group](/rest/api/policy/policy-definitions/create-or-update-at-management-group). Use the following procedure to create a policy assignment and assign the policy definition at the resource group level. |
hdinsight | Hdinsight Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md | For workload specific versions, see [HDInsight 5.x component versions](./hdinsig **[Addition of Azure Monitor Agent](./azure-monitor-agent.md) for Log Analytics in HDInsight** -Addition of `SystemMSI` and Automated DCR for Log analytics, given the deprecation of [New Azure Monitor experience (preview)](./hdinsight-hadoop-oms-log-analytics-tutorial.md) . +Addition of `SystemMSI` and Automated DCR for Log analytics, given the deprecation of [New Azure Monitor experience (preview)](./hdinsight-hadoop-oms-log-analytics-tutorial.md). ++> [!NOTE] +> Effective from image number 2407260448, customers using the portal for Log Analytics have the default [Azure Monitor Agent](./azure-monitor-agent.md) experience. If you want to switch to the Azure Monitor experience (preview), you can pin your clusters to old images by creating a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). + ## :::image type="icon" border="false" source="./media/hdinsight-release-notes/clock.svg"::: Coming soon |
import-export | Storage Import Export Contact Microsoft Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-contact-microsoft-support.md | Title: Create Support ticket or case for Azure Import/Export job | Microsoft Docs description: Learn how to log support request for issues related to your Import/Export job. -+ Last updated 03/14/2022-+ # Open a support ticket for an Import/Export job |
import-export | Storage Import Export Data From Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-from-blobs.md | Title: Tutorial to export data from Azure Blob storage with Azure Import/Export | Microsoft Docs description: Learn how to create export jobs in Azure portal to transfer data from Azure Blobs.-+ Last updated 02/13/2023-+ # Tutorial: Export data from Azure Blob storage with Azure Import/Export |
import-export | Storage Import Export Data To Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-blobs.md | Title: Tutorial to import data to Azure Blob Storage with Azure Import/Export service | Microsoft Docs description: Learn how to create import and export jobs in Azure portal to transfer data to and from Azure Blobs.-+ Last updated 02/01/2023-+ # Tutorial: Import data to Blob Storage with Azure Import/Export service |
import-export | Storage Import Export Data To Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-files.md | Title: Tutorial to transfer data to Azure Files with Azure Import/Export | Microsoft Docs description: Learn how to create import jobs in the Azure portal to transfer data to Azure Files.-+ Last updated 02/13/2023-+ # Tutorial: Transfer data to Azure Files with Azure Import/Export |
import-export | Storage Import Export Determine Drives For Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-determine-drives-for-export.md | Title: Check number of drives needed for an export with Azure Import/Export | Microsoft Docs description: Find out how many drives you need for an export using Azure Import/Export service.-+ Last updated 03/15/2022-+ # Check number of drives needed for an export with Azure Import/Export |
import-export | Storage Import Export Encryption Key Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-encryption-key-portal.md | Title: Use the Azure portal to configure customer-managed keys for Import/Export service description: Learn how to use the Azure portal to configure customer-managed keys with Azure Key Vault for Azure Import/Export service. Customer-managed keys enable you to create, rotate, disable, and revoke access controls. -+ Last updated 03/14/2022-+ |
import-export | Storage Import Export Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-requirements.md | Title: Requirements for Azure Import/Export service | Microsoft Docs description: Understand the software and hardware requirements for Azure Import/Export service.-+ Last updated 05/19/2022-+ # Azure Import/Export system requirements |
import-export | Storage Import Export Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-service.md | Title: Using Azure Import/Export to transfer data to and from Azure Storage | Microsoft Docs description: Learn how to create import and export jobs in the Azure portal for transferring data to and from Azure Storage.-+ Last updated 03/31/2023-+ # What is Azure Import/Export service? |
import-export | Storage Import Export Tool Repairing An Export Job V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-tool-repairing-an-export-job-v1.md | Title: Repairing an Azure Import/Export export job - v1 | Microsoft Docs description: Learn how to repair an export job that was created and run using the Azure Import/Export service.-+ Last updated 03/14/2022-+ # Repairing an export job |
import-export | Storage Import Export Tool Repairing An Import Job V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-tool-repairing-an-import-job-v1.md | Title: Repairing an Azure Import/Export import job - v1 | Microsoft Docs description: Learn how to repair an import job that was created and run using the Azure Import/Export service.-+ Last updated 03/14/2022-+ # Repairing an import job -The Microsoft Azure Import/Export service may fail to copy some of your files or parts of a file to the Windows Azure Blob service. Some reasons for failures include: +The Microsoft Azure Import/Export service might fail to copy some of your files or parts of a file to the Windows Azure Blob service. Some reasons for failures include: - Corrupted files The following parameters can be specified with **RepairImport**: |-|-| |**/r:**<RepairFile\>|**Required.** Path to the repair file, which tracks the progress of the repair, and allows you to resume an interrupted repair. Each drive must have one and only one repair file. When you start a repair for a given drive, pass in the path to a repair file, which doesn't yet exist. To resume an interrupted repair, you should pass in the name of an existing repair file. Always specify the repair file corresponding to the target drive.| |**/logdir:**<LogDirectory\>|**Optional.** The log directory. Verbose log files are written to this directory. If no log directory is specified, the current directory is used as the log directory.| -|**/d:**<TargetDirectories\>|**Required.** One or more semicolon-separated directories that contain the original files that were imported. The import drive may also be used, but isn't required if alternate locations of original files are available.| +|**/d:**<TargetDirectories\>|**Required.** One or more semicolon-separated directories that contain the original files that were imported. 
The import drive might also be used, but isn't required if alternate locations of original files are available.| |**/bk:**<BitLockerKey\>|**Optional.** Specify the BitLocker key if you want the tool to unlock an encrypted drive where the original files are available.| |**/sn:**<StorageAccountName\>|**Required.** The name of the storage account for the import job.| |**/sk:**<StorageAccountKey\>|**Required** if and only if a container SAS isn't specified. The account key for the storage account for the import job.| In the following example of a copy log file, one 64-K piece of a file was corrup </DriveLog> ``` -When this copy log is passed to the Azure Import/Export Tool, the tool tries to finish the import for this file by copying the missing contents across the network. Following the example above, the tool looks for the original file `\animals\koala.jpg` within the two directories `C:\Users\bob\Pictures` and `X:\BobBackup\photos`. If the file `C:\Users\bob\Pictures\animals\koala.jpg` exists, the Azure Import/Export Tool copies the missing range of data to the corresponding blob `http://bobmediaaccount.blob.core.windows.net/pictures/animals/koala.jpg`. +When this copy log is passed to the Azure Import/Export Tool, the tool tries to finish the import for this file by copying the missing contents across the network. Following the previously supplied example, the tool looks for the original file `\animals\koala.jpg` within the two directories `C:\Users\bob\Pictures` and `X:\BobBackup\photos`. If the file `C:\Users\bob\Pictures\animals\koala.jpg` exists, the Azure Import/Export Tool copies the missing range of data to the corresponding blob `http://bobmediaaccount.blob.core.windows.net/pictures/animals/koala.jpg`. 
## Resolving conflicts when using RepairImport -In some situations, the tool may not be able to find or open the necessary file for one of the following reasons: the file couldn't be found, the file isn't accessible, the file name is ambiguous, or the content of the file is no longer correct. +In some situations, the tool might not be able to find or open the necessary file for one of the following reasons: the file couldn't be found or isn't accessible, the file name is ambiguous, or the content of the file is no longer correct. An ambiguous error could occur if the tool is trying to locate `\animals\koala.jpg` and there's a file with that name under both `C:\Users\bob\pictures` and `X:\BobBackup\photos`. That is, both `C:\Users\bob\pictures\animals\koala.jpg` and `X:\BobBackup\photos\animals\koala.jpg` exist on the import job drives. The `/PathMapFile` option allows you to resolve these errors. You can specify th WAImportExport.exe RepairImport /r:C:\WAImportExport\9WM35C2V.rep /d:C:\Users\bob\Pictures;X:\BobBackup\photos /sn:bobmediaaccount /sk:VkGbrUqBWLYJ6zg1m29VOTrxpBgdNOlp+kp0C9MEdx3GELxmBw4hK94f7KysbbeKLDksg7VoN1W/a5UuM2zNgQ== /CopyLogFile:C:\WAImportExport\9WM35C2V.log /PathMapFile:C:\WAImportExport\9WM35C2V_pathmap.txt ``` -The tool will then write the problematic file paths to `9WM35C2V_pathmap.txt`, one on each line. For instance, the file may contain the following entries after running the command: +The tool writes the problematic file paths to `9WM35C2V_pathmap.txt`, one on each line. For instance, the file might contain the following entries after running the command: ``` \animals\koala.jpg |
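The `RepairImport` parameters documented in the rows above can be assembled programmatically when scripting repairs across several drives. The following is a rough sketch only; all paths and account values are placeholders, and the semicolon joining of `/d:` target directories follows the parameter table above:

```python
# Hypothetical helper that assembles a WAImportExport.exe RepairImport
# command line from the documented parameters. All values shown are
# placeholders; /d: directories are joined with semicolons as required.
def repair_import_args(repair_file, target_dirs, account, key,
                       copy_log=None, path_map=None):
    args = ["WAImportExport.exe", "RepairImport",
            f"/r:{repair_file}",
            f"/d:{';'.join(target_dirs)}",
            f"/sn:{account}",
            f"/sk:{key}"]
    if copy_log:
        args.append(f"/CopyLogFile:{copy_log}")
    if path_map:
        args.append(f"/PathMapFile:{path_map}")
    return args

cmd = repair_import_args(
    r"C:\WAImportExport\9WM35C2V.rep",
    [r"C:\Users\bob\Pictures", r"X:\BobBackup\photos"],
    "bobmediaaccount", "<storage-account-key>",
    copy_log=r"C:\WAImportExport\9WM35C2V.log")
print(" ".join(cmd))
```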
import-export | Storage Import Export Tool Reviewing Job Status V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-tool-reviewing-job-status-v1.md | |
import-export | Storage Import Export Tool Setup V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-tool-setup-v1.md | Title: Setting Up the Azure Import/Export Tool v1 | Microsoft Docs description: Learn how to set up the drive preparation and repair tool for the Azure Import/Export service. This article refers to version 1 of the Import/Export Tool.-+ Last updated 03/14/2022-+ |
import-export | Storage Import Export Tool Troubleshooting V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-tool-troubleshooting-v1.md | |
import-export | Storage Import Export View Drive Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-view-drive-status.md | Title: View status of Azure Import/Export jobs | Microsoft Docs description: Learn how to view the status of Azure Import/Export jobs and the drives used. Understand the factors that affect how long it takes to process a job.-+ Last updated 03/14/2022-+ # View the status of Azure Import/Export jobs |
iot-central | Concepts Device Implementation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md | A device can set the `iothub-creation-time-utc` property when it creates a messa You can export both the enqueued time and the `iothub-creation-time-utc` property when you export telemetry from your IoT Central application. -To learn more about message properties, see [System Properties of device-to-cloud IoT Hub messages](../../iot-hub/iot-hub-devguide-messages-construct.md#system-properties-of-d2c-iot-hub-messages). +To learn more about message properties, see [System Properties of device-to-cloud IoT Hub messages](../../iot-hub/iot-hub-devguide-messages-construct.md#system-properties-of-device-to-cloud-messages). ## Best practices |
iot-dps | How To Legacy Device Symm Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-legacy-device-symm-key.md | To update and run the provisioning sample with your device information: 2022-10-07 18:14:59,395 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Client connection opened successfully 2022-10-07 18:14:59,404 INFO (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Device client opened successfully Sending message from device to IoT Hub...- 2022-10-07 18:14:59,408 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Message was queued to be sent later ( Message details: Correlation Id [32cf12c4-4db1-4562-9d8c-267c0506636f] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] ) + 2022-10-07 18:14:59,408 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Message was queued to be sent later ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] ) Press any key to exit...- 2022-10-07 18:14:59,409 DEBUG (contoso-hub-2.azure-devices.net-sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6-c32c76d0-Cxn0e70bbf7-8476-441d-8626-c17250585ee6-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Sending message ( Message details: Correlation Id [32cf12c4-4db1-4562-9d8c-267c0506636f] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] ) - 2022-10-07 18:14:59,777 DEBUG (MQTT Call: sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - IotHub message was acknowledged. 
Checking if there is record of sending this message ( Message details: Correlation Id [32cf12c4-4db1-4562-9d8c-267c0506636f] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] ) - 2022-10-07 18:14:59,779 DEBUG (contoso-hub-2.azure-devices.net-sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6-c32c76d0-Cxn0e70bbf7-8476-441d-8626-c17250585ee6-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking the callback function for sent message, IoT Hub responded to message ( Message details: Correlation Id [32cf12c4-4db1-4562-9d8c-267c0506636f] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] ) with status OK + 2022-10-07 18:14:59,409 DEBUG (contoso-hub-2.azure-devices.net-sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6-c32c76d0-Cxn0e70bbf7-8476-441d-8626-c17250585ee6-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Sending message ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] ) + 2022-10-07 18:14:59,777 DEBUG (MQTT Call: sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - IotHub message was acknowledged. Checking if there is record of sending this message ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] ) + 2022-10-07 18:14:59,779 DEBUG (contoso-hub-2.azure-devices.net-sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6-c32c76d0-Cxn0e70bbf7-8476-441d-8626-c17250585ee6-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking the callback function for sent message, IoT Hub responded to message ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [2e1717be-cfcf-41a7-b1c0-59edeb8ea865] ) with status OK Message received! Response status: OK ``` |
iot-dps | Monitor Iot Dps Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps-reference.md | The following JSON is an example of a successful attestation attempt from a devi { "CallerIPAddress": "24.18.226.XXX", "Category": "DeviceOperations",- "CorrelationId": "68952383-80c0-436f-a2e3-f8ae9a41c69d", + "CorrelationId": "aaaa0000-bb11-2222-33cc-444444dddddd", "DurationMs": "226", "Level": "Information", "OperationName": "AttestationAttempt", The following JSON is an example of a successful add (`Upsert`) individual enrol { "CallerIPAddress": "13.91.244.XXX", "Category": "ServiceOperations",- "CorrelationId": "23bd419d-d294-452b-9b1b-520afef5ef52", + "CorrelationId": "aaaa0000-bb11-2222-33cc-444444dddddd", "DurationMs": "98", "Level": "Information", "OperationName": "Upsert", |
iot-dps | Quick Create Simulated Device X509 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md | In this section, you use both your Windows command prompt and your Git Bash prom 2022-05-11 09:42:26,074 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Client connection opened successfully 2022-05-11 09:42:26,075 INFO (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Device client opened successfully Sending message from device to IoT Hub...- 2022-05-11 09:42:26,077 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Message was queued to be sent later ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] ) + 2022-05-11 09:42:26,077 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Message was queued to be sent later ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] ) Press any key to exit...- 2022-05-11 09:42:26,079 DEBUG (MyExampleHub.azure-devices.net-java-device-01-ee6c362d-Cxn7a1fb819-e46d-4658-9b03-ca50c88c0440-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Sending message ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] ) - 2022-05-11 09:42:26,422 DEBUG (MQTT Call: java-device-01) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - IotHub message was acknowledged. 
Checking if there is record of sending this message ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] ) - 2022-05-11 09:42:26,425 DEBUG (MyExampleHub.azure-devices.net-java-device-01-ee6c362d-Cxn7a1fb819-e46d-4658-9b03-ca50c88c0440-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking the callback function for sent message, IoT Hub responded to message ( Message details: Correlation Id [54d9c6b5-3da9-49fe-9343-caa6864f9a02] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] ) with status OK + 2022-05-11 09:42:26,079 DEBUG (MyExampleHub.azure-devices.net-java-device-01-ee6c362d-Cxn7a1fb819-e46d-4658-9b03-ca50c88c0440-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Sending message ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] ) + 2022-05-11 09:42:26,422 DEBUG (MQTT Call: java-device-01) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - IotHub message was acknowledged. Checking if there is record of sending this message ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] ) + 2022-05-11 09:42:26,425 DEBUG (MyExampleHub.azure-devices.net-java-device-01-ee6c362d-Cxn7a1fb819-e46d-4658-9b03-ca50c88c0440-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking the callback function for sent message, IoT Hub responded to message ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [28069a3d-f6be-4274-a48b-1ee539524eeb] ) with status OK Message sent! ``` |
iot-dps | Tutorial Custom Hsm Enrollment Group X509 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md | In the following steps, you use both your Windows command prompt and your Git Ba 2022-10-21 10:41:31,536 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Client connection opened successfully 2022-10-21 10:41:31,537 INFO (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Device client opened successfully Sending message from device to IoT Hub...- 2022-10-21 10:41:31,539 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Message was queued to be sent later ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] ) + 2022-10-21 10:41:31,539 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Message was queued to be sent later ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] ) Press any key to exit...- 2022-10-21 10:41:31,540 DEBUG (contoso-hub-2.azure-devices.net-device-01-d7c67552-Cxn0bd73809-420e-46fe-91ee-942520b775db-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Sending message ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] ) - 2022-10-21 10:41:31,844 DEBUG (MQTT Call: device-01) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - IotHub message was acknowledged. 
Checking if there is record of sending this message ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] ) - 2022-10-21 10:41:31,846 DEBUG (contoso-hub-2.azure-devices.net-device-01-d7c67552-Cxn0bd73809-420e-46fe-91ee-942520b775db-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking the callback function for sent message, IoT Hub responded to message ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] ) with status OK + 2022-10-21 10:41:31,540 DEBUG (contoso-hub-2.azure-devices.net-device-01-d7c67552-Cxn0bd73809-420e-46fe-91ee-942520b775db-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Sending message ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] ) + 2022-10-21 10:41:31,844 DEBUG (MQTT Call: device-01) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - IotHub message was acknowledged. Checking if there is record of sending this message ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] ) + 2022-10-21 10:41:31,846 DEBUG (contoso-hub-2.azure-devices.net-device-01-d7c67552-Cxn0bd73809-420e-46fe-91ee-942520b775db-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking the callback function for sent message, IoT Hub responded to message ( Message details: Correlation Id [aaaa0000-bb11-2222-33cc-444444dddddd] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] ) with status OK Message sent! ``` |
iot-hub | Iot Hub Devguide Messages Construct | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-construct.md | Title: Understand Azure IoT Hub message format -description: This article describes the format and expected content of IoT Hub messages. + Title: Understand message format ++description: This article describes the format and expected content of IoT Hub messages for cloud-to-device and device-to-cloud messages. Previously updated : 2/7/2022 Last updated : 08/22/2024 # Create and read IoT Hub messages -To support seamless interoperability across protocols, IoT Hub defines a common set of messaging features that are available in all device-facing protocols. These can be used in both [device-to-cloud message routing](iot-hub-devguide-messages-d2c.md) and [cloud-to-device messages](iot-hub-devguide-messages-c2d.md). +To support interoperability across protocols, IoT Hub defines a common set of messaging features that are available in all device-facing protocols. These features can be used in both [device-to-cloud messages](iot-hub-devguide-messages-d2c.md) and [cloud-to-device messages](iot-hub-devguide-messages-c2d.md). [!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)] -IoT Hub implements device-to-cloud messaging using a streaming messaging pattern. IoT Hub's device-to-cloud messages are more like [Event Hubs](../event-hubs/index.yml) *events* than [Service Bus](../service-bus-messaging/index.yml) *messages* in that there is a high volume of events passing through the service that can be read by multiple readers. +IoT Hub implements device-to-cloud messaging using a streaming messaging pattern. IoT Hub's device-to-cloud messages are more like [Event Hubs](../event-hubs/index.yml) *events* than [Service Bus](../service-bus-messaging/index.yml) *messages* in that there's a high volume of events passing through the service that multiple readers can read. 
An IoT Hub message consists of: -* A predetermined set of *system properties* as listed below. +* A predetermined set of *system properties* as described later in this article. * A set of *application properties*. A dictionary of string properties that the application can define and access, without needing to deserialize the message body. IoT Hub never modifies these properties. * A message body, which can be any type of data. -Each device protocol implements setting properties in different ways. Please see the related [MQTT](../iot/iot-mqtt-connect-to-iot-hub.md) and [AMQP](./iot-hub-amqp-support.md) developer guides for details. +Each device protocol implements setting properties in different ways. For more information, see the [MQTT protocol guide](../iot/iot-mqtt-connect-to-iot-hub.md) and the [AMQP protocol guide](./iot-hub-amqp-support.md). -Property names and values can only contain ASCII alphanumeric characters, plus ``{'!', '#', '$', '%, '&', ''', '*', '+', '-', '.', '^', '_', '`', '|', '~'}`` when you send device-to-cloud messages using the HTTPS protocol or send cloud-to-device messages. +When you send device-to-cloud messages using the HTTPS protocol or send cloud-to-device messages, property names and values can only contain ASCII alphanumeric characters, plus ``! # $ % & ' * + - . ^ _ ` | ~``. Device-to-cloud messaging with IoT Hub has the following characteristics: * Device-to-cloud messages can be at most 256 KB, and can be grouped in batches to optimize sends. Batches can be at most 256 KB. -* IoT Hub does not allow arbitrary partitioning. Device-to-cloud messages are partitioned based on their originating **deviceId**. +* IoT Hub doesn't allow arbitrary partitioning. Device-to-cloud messages are partitioned based on their originating **deviceId**. 
* As explained in [Control access to IoT Hub](iot-hub-devguide-security.md), IoT Hub enables per-device authentication and access control. -* You can stamp messages with information that goes into the application properties. For more information, please see [message enrichments](iot-hub-message-enrichments-overview.md). --For more information about how to encode and decode messages sent using different protocols, see [Azure IoT SDKs](iot-hub-devguide-sdks.md). +* You can stamp messages with information that goes into the application properties. For more information, see [message enrichments](iot-hub-message-enrichments-overview.md). > [!NOTE]-> Each IoT Hub protocol provides a message content type property which is respected when routing data to custom endpoints. To have your data properly handled at the destination (for example, JSON being treated as a parsable string instead of Base64 encoded binary data), you must provide the appropriate content type and charset for the message. -> +> Each IoT Hub protocol provides a message content type property which is respected when routing data to custom endpoints. To have your data properly handled at the destination (for example, JSON being treated as a parsable string instead of Base64 encoded binary data), provide the appropriate content type and charset for the message. -To use your message body in an IoT Hub routing query you must provide a valid JSON object for the message and set the content type property of the message to `application/json;charset=utf-8`. +To use your message body in an IoT Hub routing query, provide a valid JSON object for the message and set the content type property of the message to `application/json;charset=utf-8`. 
-A valid, routable message body may look like the following: +The following example shows a valid, routable message body: ```json { A valid, routable message body may look like the following: } ``` -## System Properties of **D2C** IoT Hub messages +## System properties of device-to-cloud messages | Property | Description |User Settable?|Keyword for </br>routing query| | | | | |-| message-id |A user-settable identifier for the message used for request-reply patterns. Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters + `{'-', ':', '.', '+', '%', '_', '#', '*', '?', '!', '(', ')', ',', '=', '@', ';', '$', '''}`. | Yes | messageId | +| message-id |A user-settable identifier for the message used for request-reply patterns. Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters plus `- : . + % _ # * ? ! ( ) , = @ ; $ '`. | Yes | messageId | | iothub-enqueuedtime |Date and time the [Device-to-Cloud](iot-hub-devguide-d2c-guidance.md) message was received by IoT Hub. | No | enqueuedTime |-| user-id |An ID used to specify the origin of messages. When messages are generated by IoT Hub, it is set to `{iot hub name}`. | Yes | userId | +| user-id |An ID used to specify the origin of messages. | Yes | userId | | iothub-connection-device-id |An ID set by IoT Hub on device-to-cloud messages. It contains the **deviceId** of the device that sent the message. | No | connectionDeviceId | | iothub-connection-module-id |An ID set by IoT Hub on device-to-cloud messages. It contains the **moduleId** of the device that sent the message. | No | connectionModuleId | | iothub-connection-auth-generation-id |An ID set by IoT Hub on device-to-cloud messages. It contains the **connectionDeviceGenerationId** (as per [Device identity properties](iot-hub-devguide-identity-registry.md#device-identity-properties)) of the device that sent the message. 
| No |connectionDeviceGenerationId | A valid, routable message body may look like the following: | dt-dataschema | This value is set by IoT hub on device-to-cloud messages. It contains the device model ID set in the device connection. | No | $dt-dataschema | | dt-subject | The name of the component that is sending the device-to-cloud messages. | Yes | $dt-subject | -## Application Properties of **D2C** IoT Hub messages +## Application properties of device-to-cloud messages -A common use of application properties is to send a timestamp from the device using the `iothub-creation-time-utc` property to record when the message was sent by the device. The format of this timestamp must be UTC with no timezone information. For example, `2021-04-21T11:30:16Z` is valid, `2021-04-21T11:30:16-07:00` is invalid: +A common use of application properties is to send a timestamp from the device using the `iothub-creation-time-utc` property to record when the message was sent by the device. The format of this timestamp must be UTC with no timezone information. For example, `2021-04-21T11:30:16Z` is valid, but `2021-04-21T11:30:16-07:00` is invalid. ```json {- "applicationId":"5782ed70-b703-4f13-bda3-1f5f0f5c678e", + "applicationId":"00001111-aaaa-2222-bbbb-3333cccc4444", "messageSource":"telemetry", "deviceId":"sample-device-01", "schema":"default@v1", A common use of application properties is to send a timestamp from the device us } ``` -## System Properties of **C2D** IoT Hub messages +## System properties of cloud-to-device messages | Property | Description |User Settable?| | | | |-| message-id |A user-settable identifier for the message used for request-reply patterns. Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters + `{'-', ':', '.', '+', '%', '_', '#', '*', '?', '!', '(', ')', ',', '=', '@', ';', '$', '''}`. |Yes| +| message-id |A user-settable identifier for the message used for request-reply patterns. 
Format: A case-sensitive string (up to 128 characters long) of ASCII 7-bit alphanumeric characters plus `- : . + % _ # * ? ! ( ) , = @ ; $ '`. |Yes| | sequence-number |A number (unique per device-queue) assigned by IoT Hub to each cloud-to-device message. |No| | to |A destination specified in [Cloud-to-Device](iot-hub-devguide-c2d-guidance.md) messages. |No| | absolute-expiry-time |Date and time of message expiration. |Yes| | correlation-id |A string property in a response message that typically contains the MessageId of the request, in request-reply patterns. |Yes|-| user-id |An ID used to specify the origin of messages. When messages are generated by IoT Hub, it is set to `{iot hub name}`. |Yes| +| user-id |An ID used to specify the origin of messages. When messages are generated by IoT Hub, the user ID is the IoT hub name. |Yes| | iothub-ack |A feedback message generator. This property is used in cloud-to-device messages to request IoT Hub to generate feedback messages as a result of the consumption of the message by the device. Possible values: **none** (default): no feedback message is generated, **positive**: receive a feedback message if the message was completed, **negative**: receive a feedback message if the message expired (or maximum delivery count was reached) without being completed by the device, or **full**: both positive and negative. |Yes| -### System Property Names +### System property names -The system property names vary based on the endpoint to which the messages are being routed. Please see the table below for details on these names. +The system property names vary based on the endpoint to which the messages are being routed. |System property name|Event Hubs|Azure Storage|Service Bus|Event Grid| |--|-|-|--|-| |
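As a worked example of the `iothub-creation-time-utc` rule quoted above (UTC with a trailing `Z`, no offset), here's a minimal Python sketch — not part of the article — for producing and checking conforming timestamps:

```python
from datetime import datetime, timezone

def creation_time_utc() -> str:
    # UTC with a trailing 'Z' and no timezone offset, e.g. 2021-04-21T11:30:16Z
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

def is_valid_creation_time(value: str) -> bool:
    # Accepts only the 'Z' suffix form; offsets such as -07:00 are rejected.
    try:
        datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")
        return True
    except ValueError:
        return False

print(is_valid_creation_time("2021-04-21T11:30:16Z"))       # True
print(is_valid_creation_time("2021-04-21T11:30:16-07:00"))  # False
```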
iot-hub | Iot Hub Non Telemetry Event Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-non-telemetry-event-schema.md | -This article provides the properties and schemas for non-telemetry events emitted by Azure IoT Hub. Non-telemetry events are different from device-to-cloud and cloud-to-device messages in that they are emitted directly by IoT Hub in response to specific kinds of state changes associated with your devices. For example, lifecycle changes like a device or module being created or deleted, or connection state changes like a device or module connecting or disconnecting. +This article provides the properties and schemas for non-telemetry events emitted by Azure IoT Hub. Non-telemetry events are different from device-to-cloud and cloud-to-device messages in that IoT Hub emits these events in response to specific state changes associated with your devices. For example, lifecycle changes like a device or module being created or deleted, or connection state changes like a device or module connecting or disconnecting. You can route non-telemetry events using message routing, or react to non-telemetry events using Azure Event Grid. To learn more about IoT Hub message routing, see [IoT Hub message routing](iot-hub-devguide-messages-d2c.md) and [React to IoT Hub events by using Event Grid](./iot-hub-event-grid.md). Non-telemetry events share several common properties. ### System properties -The following system properties are set by IoT Hub on each event. +IoT Hub sets the following system properties on each event. | Property | Type |Description | Keyword for routing query | | -- | - | - | - | The following system properties are set by IoT Hub on each event. ### Application properties -The following application properties are set by IoT Hub on each event. +IoT Hub sets the following application properties on each event. 
| Property | Type |Description | | -- | - | - | Connection state events are emitted whenever a device or module connects or disconnects. | - | -- | | iothub-message-source | deviceConnectionStateEvents | -**Body**: The body contains a sequence number. The sequence number is a string representation of a hexadecimal number. You can use string compare to identify the larger number. If you're converting the string to hex, then the number will be a 256-bit number. The sequence number is strictly increasing, and the latest event will have a higher number than other events. This is useful if you have frequent device connects and disconnects, and want to ensure only the latest event is used to trigger a downstream action. +**Body**: The body contains a sequence number. The sequence number is a string representation of a hexadecimal number. You can use string compare to identify the larger number. If you're converting the string to hex, then the number will be a 256-bit number. The sequence number is strictly increasing, so the latest event has a higher number than older events. This is useful if you have frequent device connects and disconnects, and want to ensure that only the latest event is used to trigger a downstream action. ### Example The following JSON shows a device connection state event emitted when a device disconnects. "system": { "content_encoding": "utf-8", "content_type": "application/json",- "correlation_id": "98dcbcf6-3398-c488-c62c-06330e65ea98", + "correlation_id": "aaaa0000-bb11-2222-33cc-444444dddddd", "user_id": "contoso-routing-hub" }, "application": { Device lifecycle events are emitted whenever a device or module is created or deleted. | - | -- | | iothub-message-source | deviceLifecycleEvents | -**Body**: The body contains a representation of the device twin or module twin. It includes the device ID and module ID, the twin etag, the version property, and the tags, properties and associated metadata of the twin. 
+**Body**: The body contains a representation of the device twin or module twin. It includes the device ID and module ID, the twin etag, the version property, and the tags, properties, and associated metadata of the twin. ### Example Device twin change events are emitted whenever a device twin or a module twin is updated or replaced. | - | -- | | iothub-message-source | twinChangeEvents | -**Body**: On an update, the body contains the version property of the twin and the updated tags and properties and their associated metadata. On a replace, the body contains the device ID and module ID, the twin etag, the version property, and all the tags, properties and associated metadata of the device or module twin. +**Body**: On an update, the body contains the version property of the twin and the updated tags and properties and their associated metadata. On a replace, the body contains the device ID and module ID, the twin etag, the version property, and all the tags, properties, and associated metadata of the device or module twin. ### Example |
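The connection-state sequence numbers described above are hexadecimal strings; a quick sketch of the two comparison options (assuming, as a plain string compare requires, that values are zero-padded to a common width — 256-bit values are 64 hex digits):

```python
def newer_event(seq_a: str, seq_b: str) -> str:
    # Parsing to int works for hex strings of any width or case.
    return seq_a if int(seq_a, 16) >= int(seq_b, 16) else seq_b

# Equal-width hex strings can also be ordered with a plain string compare.
a = "0" * 62 + "1a"   # 64-digit hex string, value 26
b = "0" * 62 + "0f"   # 64-digit hex string, value 15
assert newer_event(a, b) == a
assert (a > b) == (int(a, 16) > int(b, 16))
```

Using the string compare avoids a big-integer parse on every event, but only holds when widths match; the `int(..., 16)` form is safe either way.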
iot-operations | Howto Configure Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-broker/howto-configure-authorization.md | To set up authorization for clients that use the DSS, provide the following permissions: - Permission to publish to the system key value store `$services/statestore/_any_/command/invoke/request` topic - Permission to subscribe to the response-topic (set during initial publish as a parameter) `<response_topic>/#` +For more information about DSS authorization, see [state store keys](https://github.com/Azure/iotedge-broker/blob/main/docs/authorization/readme.md#state-store-keys). + ## Update authorization Broker authorization resources can be updated at runtime without restart. All clients connected at the time of the policy update are disconnected. Changing the policy type is also supported. |
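The `<response_topic>/#` permission above relies on standard MQTT wildcard matching. A minimal matcher sketch — illustrative only, not the broker's implementation, and the example topic names are invented:

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """Minimal MQTT topic-filter match supporting '+' and a trailing '#'."""
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":
            return True          # '#' matches this level and everything below it
        if i >= len(t_parts):
            return False         # filter is longer than the topic
        if part != "+" and part != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

# A client granted subscribe on '<response_topic>/#' receives responses on any
# subtopic of its chosen response topic:
print(topic_matches("clients/my-client/response/#", "clients/my-client/response/req-1"))  # True
print(topic_matches("clients/my-client/response/#", "clients/other/response/req-1"))      # False
```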
lab-services | Class Type Adobe Creative Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-adobe-creative-cloud.md | Read [Adobe's deployment steps](https://helpx.adobe.com/enterprise/admin-guide Lab virtual machines have a maximum disk size of 128 GB. If users need extra storage for saving large media assets or they need to access shared media assets, you should consider using external file storage. For more information, read the following articles: -- [Using external file storage in Azure Lab Services](how-to-attach-external-storage.md) - [Install and configure OneDrive](./how-to-prepare-windows-template.md#install-and-configure-onedrive) ### Save template VM image |
lab-services | Class Type Rstudio Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-linux.md | To set up this lab, you need an Azure subscription and lab plan to get started. ### External resource configuration -Some classes require files, such as large data files, to be stored externally. See [use external file storage in Azure Lab Services](how-to-attach-external-storage.md) for options and setup instructions. +Some classes require files, such as large data files, to be stored externally. If you choose to have a shared R Server for the students, the server should be set up before the lab is created. For more information on how to set up a shared server, see [how to create a lab with a shared resource in Azure Lab Services](how-to-create-a-lab-with-shared-resource.md). For instructions to create an RStudio Server, see [Download RStudio Server for Debian & Ubuntu](https://www.rstudio.com/products/rstudio/download-server/debian-ubuntu/) and [Accessing RStudio Server Open-Source](https://support.rstudio.com/hc/en-us/articles/200552306-Getting-Started). |
lab-services | Class Type Rstudio Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-windows.md | This article focuses on using R and RStudio for statistical computing. The [deep ### External resource configuration -Some classes require files, such as large data files, to be stored externally. See [use external file storage in Azure Lab Services](how-to-attach-external-storage.md) for options and setup instructions. +Some classes require files, such as large data files, to be stored externally. If you choose to have a shared R Server for the students, the server should be set up before the lab is created. For more information on how to set up a shared server, see [how to create a lab with a shared resource in Azure Lab Services](how-to-create-a-lab-with-shared-resource.md). For instructions to create an RStudio Server, see [Download RStudio Server for Debian & Ubuntu](https://www.rstudio.com/products/rstudio/download-server/debian-ubuntu/) and [Accessing RStudio Server Open-Source](https://support.rstudio.com/hc/en-us/articles/200552306-Getting-Started). |
lab-services | How To Attach External Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-external-storage.md | - Title: Use external file storage- -description: Learn how to set up a lab that uses external file storage in Azure Lab Services. ----- Previously updated : 04/25/2023---# Use external file storage in Azure Lab Services ---This article covers some of the options for using external file storage in Azure Lab Services. [Azure Files](https://azure.microsoft.com/services/storage/files/) offers fully managed file shares in the cloud, [accessible via SMB 2.1 and SMB 3.0](/azure/storage/files/storage-how-to-use-files-windows). An Azure Files share can be connected either publicly or privately within a virtual network. You can also configure the share to use a lab user's Active Directory credentials for connecting to the file share. If you're on a Linux machine, you can also use Azure NetApp Files with NFS volumes for external file storage with Azure Lab Services. --## Which solution to use --The following table lists important considerations for each external storage solution. 
--| Solution | Important to know | -| -- | | -| [Azure Files share with public endpoint](#azure-files-share) | <ul><li>Everyone has read/write access.</li><li>No virtual network peering is required.</li><li>Accessible to all VMs, not just lab VMs.</li><li>If you're using Linux, lab users have access to the storage account key.</li></ul> | -| [Azure Files share with private endpoint](#azure-files-share) | <ul><li>Everyone has read/write access.</li><li>Virtual network peering is required.</li><li>Accessible only to VMs on the same network (or a peered network) as the storage account.</li><li>If you're using Linux, lab users have access to the storage account key.</li></ul> | -| [Azure NetApp Files with NFS volumes](#azure-netapp-files-with-nfs-volumes) | <ul><li>Either read or read/write access can be set for volumes.</li><li>Permissions are set by using a lab VM's IP address.</li><li>Virtual network peering is required.</li><li>You might need to register to use the Azure NetApp Files service.</li><li>Linux only.</li></ul> --The cost of using external storage isn't included in the cost of using Azure Lab Services. For more information about pricing, see [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) and [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/). --## Azure Files share --Azure Files shares are accessed by using a public or private endpoint. You mount the shares to a virtual machine by using the storage account key as the password. With this approach, everyone has read-write access to the file share. --By default, standard file shares can span up to 5 TiB. See [Create an Azure file share](/azure/storage/files/storage-how-to-create-file-share) for information on how to create file shares that span up to 100 TiB. --### Considerations for using a public endpoint --- The virtual network for the storage account doesn't have to be connected to the lab virtual network. 
You can create the file share anytime before the template VM is published.-- The file share can be accessed from any machine if a user has the storage account key.-- Linux lab users can see the storage account key. Credentials for mounting an Azure Files share are stored in `{file-share-name}.cred` on Linux VMs, and are readable by *sudo*. Because lab users are given sudo access by default in Azure Lab Services VMs, they can read the storage account key. If the storage account endpoint is public, lab users can get access to the file share outside of their lab VM. Consider rotating the storage account key after class ends, or using private file shares.--### Considerations for using a private endpoint --- This approach requires the file share virtual network to be connected to the lab. To enable advanced networking for labs, see [Connect to your virtual network in Azure Lab Services using virtual network injection](how-to-connect-vnet-injection.md). Virtual network injection must be done during lab plan creation.-- Access is restricted to traffic originating from the private network; the file share can't be reached over the public internet. Only VMs in the private virtual network, VMs in a network peered to the private virtual network, or machines connected to a VPN for the private network, can access the file share.-- Linux lab users can see the storage account key. Credentials for mounting an Azure Files share are stored in `{file-share-name}.cred` on Linux VMs, and are readable by *sudo*. Because lab users are given sudo access by default in Azure Lab Services VMs, they can read the storage account key. Consider rotating the storage account key after class ends.--### Connect a lab VM to an Azure file share --Follow these steps to create a VM connected to an Azure file share. --1. Create an [Azure Storage account](/azure/storage/files/storage-how-to-create-file-share). On the **Connectivity method** page, choose **public endpoint** or **private endpoint**. --1. 
If using the private method, create a [private endpoint](/azure/private-link/tutorial-private-endpoint-storage-portal) in order for the file shares to be accessible from the virtual network. --1. Create an [Azure file share](/azure/storage/files/storage-how-to-create-file-share). The file share is reachable by the public host name of the storage account if using a public endpoint. The file share is reachable by private IP address if using a private endpoint. --1. Mount the Azure file share in the template VM: -- - [Windows](/azure/storage/files/storage-how-to-use-files-windows) - - [Linux](/azure/storage/files/storage-how-to-use-files-linux). To avoid mounting issues on lab VMs, see the [use Azure Files with Linux](#use-azure-files-with-linux) section. --1. [Publish](how-to-create-manage-template.md#publish-the-template-vm) the template VM. --> [!IMPORTANT] -> Make sure Windows Defender Firewall isn't blocking the outgoing SMB connection through port 445. By default, SMB is allowed for Azure VMs. --### Use Azure Files with Linux --If you use the default instructions to mount an Azure Files share, the file share will seem to disappear on lab VMs after the template is published. The following modified script addresses this issue. --For file share with a public endpoint: --```bash -#!/bin/bash --# Assign variables values for your storage account and file share -STORAGE_ACCOUNT_NAME="" -STORAGE_ACCOUNT_KEY="" -FILESHARE_NAME="" --# Do not use 'mnt' for mount directory. -# Using 'mnt' will cause issues on lab VMs. -MOUNT_DIRECTORY="prm-mnt" --sudo mkdir -p /$MOUNT_DIRECTORY/$FILESHARE_NAME -if [ ! -d "/etc/smbcredentials" ]; then - sudo mkdir /etc/smbcredentials -fi -if [ ! 
-f "/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred" ]; then - sudo bash -c "echo ""username=$STORAGE_ACCOUNT_NAME"" >> /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred" - sudo bash -c "echo ""password=$STORAGE_ACCOUNT_KEY"" >> /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred" -fi -sudo chmod 600 /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred --sudo bash -c "echo ""//$STORAGE_ACCOUNT_NAME.file.core.windows.net/$FILESHARE_NAME /$MOUNT_DIRECTORY/$FILESHARE_NAME cifs nofail,vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino"" >> /etc/fstab" -sudo mount -t cifs //$STORAGE_ACCOUNT_NAME.file.core.windows.net/$FILESHARE_NAME /$MOUNT_DIRECTORY/$FILESHARE_NAME -o vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino -``` --For file share with a private endpoint: --```bash -#!/bin/bash --# Assign variables values for your storage account and file share -STORAGE_ACCOUNT_NAME="" -STORAGE_ACCOUNT_IP="" -STORAGE_ACCOUNT_KEY="" -FILESHARE_NAME="" --# Do not use 'mnt' for mount directory. -# Using 'mnt' will cause issues on lab VMs. -MOUNT_DIRECTORY="prm-mnt" --sudo mkdir -p /$MOUNT_DIRECTORY/$FILESHARE_NAME -if [ ! -d "/etc/smbcredentials" ]; then - sudo mkdir /etc/smbcredentials -fi -if [ ! 
-f "/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred" ]; then - sudo bash -c "echo ""username=$STORAGE_ACCOUNT_NAME"" >> /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred" - sudo bash -c "echo ""password=$STORAGE_ACCOUNT_KEY"" >> /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred" -fi -sudo chmod 600 /etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred --sudo bash -c "echo ""//$STORAGE_ACCOUNT_IP/$FILESHARE_NAME /$MOUNT_DIRECTORY/$FILESHARE_NAME cifs nofail,vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino"" >> /etc/fstab" -sudo mount -t cifs //$STORAGE_ACCOUNT_IP/$FILESHARE_NAME /$MOUNT_DIRECTORY/$FILESHARE_NAME -o vers=3.0,credentials=/etc/smbcredentials/$STORAGE_ACCOUNT_NAME.cred,dir_mode=0777,file_mode=0777,serverino -``` --If the template VM that mounts the Azure Files share to the `/mnt` directory is already published, the lab user can either: --- Move the instruction to mount `/mnt` to the top of the `/etc/fstab` file. -- Modify the instruction to mount `/mnt/{file-share-name}` to a different directory, like `/prm-mnt/{file-share-name}`.--Lab users should run `mount -a` to remount directories. --For more general information, see [Use Azure Files with Linux](/azure/storage/files/storage-how-to-use-files-linux). --## Azure NetApp Files with NFS volumes --[Azure NetApp Files](https://azure.microsoft.com/services/netapp/) is an enterprise-class, high-performance, metered file storage service. --- Set access policies on a per-volume basis-- Permission policies are IP-based for each volume-- If lab users need their own volume that other lab users don't have access to, permission policies must be assigned after the lab is published-- Azure Lab Services only supports Linux-based lab virtual machines to connect to Azure NetApp Files-- The virtual network for the Azure NetApp Files capacity pool must be connected to the lab. 
To enable advanced networking for labs, see [Connect to your virtual network in Azure Lab Services using virtual network injection](how-to-connect-vnet-injection.md). Virtual network injection must be done during lab plan creation.--To use an Azure NetApp Files share in Azure Lab --1. Create an Azure NetApp Files capacity pool and one or more NFS volumes by following the steps in [Set up Azure NetApp Files and NFS volume](/azure/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes). -- For information about service levels, see [Service levels for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-service-levels). --1. [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md) --1. [Create the lab](how-to-manage-labs.md). --1. On the template VM, install the components necessary to use NFS file shares. -- - Ubuntu: -- ```bash - sudo apt update - sudo apt install nfs-common - ``` --1. On the template VM, save the following script as `mount_fileshare.sh` to [mount the Azure NetApp Files share](/azure/azure-netapp-files/azure-netapp-files-mount-unmount-volumes-for-virtual-machines). -- Assign the `CAPACITY_POOL_IP_ADDR` variable the mount target IP address for the capacity pool. Get the mount instructions for the volume to find the appropriate value. The script expects the path name of the Azure NetApp Files volume. - - To ensure that users can run the script, run `chmod u+x mount_fileshare.sh`. -- ```bash - #!/bin/bash - - if [ $# -eq 0 ]; then - echo "Must provide volume name." - exit 1 - fi - - VOLUME_NAME=$1 - CAPACITY_POOL_IP_ADDR=0.0.0.0 # IP address of capacity pool - - # Do not use 'mnt' for mount directory. - # Using 'mnt' might cause issues on lab VMs. 
- MOUNT_DIRECTORY="prm-mnt" - - sudo mkdir -p /$MOUNT_DIRECTORY - sudo mkdir /$MOUNT_DIRECTORY/$VOLUME_NAME - - sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp $CAPACITY_POOL_IP_ADDR:/$VOLUME_NAME /$MOUNT_DIRECTORY/$VOLUME_NAME - sudo bash -c "echo ""$CAPACITY_POOL_IP_ADDR:/$VOLUME_NAME /$MOUNT_DIRECTORY/$VOLUME_NAME nfs bg,rw,hard,noatime,nolock,rsize=65536,wsize=65536,vers=3,tcp,_netdev 0 0"" >> /etc/fstab" - ``` --1. If all lab users are sharing access to the same Azure NetApp Files volume, you can run the `mount_fileshare.sh` script on the template machine before publishing. If lab users each get their own volume, save the script so each lab user can run it later. --1. [Publish](how-to-create-manage-template.md#publish-the-template-vm) the template VM. --1. [Configure the policy](/azure/azure-netapp-files/azure-netapp-files-configure-export-policy) for the file share. -- The export policy can allow for a single VM or multiple VMs to have access to a volume. You can grant read-only or read/write access. --1. Lab users must start their VM and run the script to mount the file share. They have to run the script only once. -- Use the command: `./mount_fileshare.sh myvolumename`. --## Next steps --- Learn how to [create a lab for classroom training](./tutorial-setup-lab.md)-- Get started by following the steps in [Quickstart: Create and connect to a lab](./quick-create-connect-lab.md)-- [Create and manage a template](how-to-create-manage-template.md) |
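The removed scripts above share one pattern worth keeping: write the storage credentials to a root-owned file and restrict it to mode 600 before referencing it from `/etc/fstab`. A portable sketch of just that step (hypothetical helper, shown against a temporary directory rather than `/etc/smbcredentials`):

```python
import os
import stat
import tempfile

def write_smb_credentials(directory: str, account: str, key: str) -> str:
    # Same layout as the deleted scripts: <account>.cred containing the
    # username and password lines, readable only by its owner.
    path = os.path.join(directory, f"{account}.cred")
    with open(path, "w") as f:
        f.write(f"username={account}\npassword={key}\n")
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # mode 600
    return path

cred = write_smb_credentials(tempfile.mkdtemp(), "mystorageacct", "fake-key==")
print(oct(stat.S_IMODE(os.stat(cred).st_mode)))  # 0o600
```

Restricting the file matters because, as the removed article notes, lab users hold sudo and can otherwise read the storage account key directly.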
lab-services | Troubleshoot Access Lab Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot-access-lab-vm.md | Learn more about how to [reimage a lab VM in the Azure Lab Services website](./h When you reimage a lab VM, all user data on the VM is lost. To avoid losing this data, you have to store the user data outside of the lab VM. You have different options to configure the template VM: - [Use OneDrive to store user data](./how-to-prepare-windows-template.md#install-and-configure-onedrive).-- [Attach external file storage](./how-to-attach-external-storage.md), such as Azure Files or Azure NetApp Files. ## Create multiple labs for a course Learn how to [set up a new lab](./tutorial-setup-lab.md#create-a-lab) and how to ## Next steps - As a lab user, learn how to [reimage or redeploy lab VMs](./how-to-reset-and-redeploy-vm.md).-- As an admin or educator, [attach external file storage to a lab](./how-to-attach-external-storage.md). - As a lab creator, [use OneDrive to store user data](./how-to-prepare-windows-template.md#install-and-configure-onedrive). |
load-balancer | Monitor Load Balancer Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer-reference.md | Title: Load balancer metrics and log definitions- -description: Important reference material needed when you monitor Load Balancer. -+ Title: Monitoring data reference for Azure Load Balancer +description: This article contains important reference material you need when you monitor Azure Load Balancer by using Azure Monitor. Last updated : 08/21/2024+ + Previously updated : 05/24/2024- -# Monitoring load balancer data reference +# Azure Load Balancer monitoring data reference +++See [Monitor Azure Load Balancer](monitor-load-balancer.md) for details on the data you can collect for Load Balancer and how to use it. +++### Supported metrics for Microsoft.Network/loadBalancers ++The following table lists the metrics available for the Microsoft.Network/loadBalancers resource type. +++<!-- [!INCLUDE [Microsoft.Network/loadBalancers](~/reusable-content/ce-skilling/azure/includes/azure-monitor/reference/metrics/microsoft-network-loadbalancers-metrics-include.md)] --> ++<!-- Manually included due to inprocess dimensions. Once those dimensions are completed, please reinstate this include and remove the manual copy. Please do not remove this comment. rboucher 2024_08_23 --> -See [Monitoring Load Balancer](monitor-load-balancer.md) for details on collecting and analyzing monitoring data for Load Balancer. 
+|Metric|Name in REST API|Unit|Aggregation|Dimensions|Time Grains|DS Export| +|||||||| +|**Allocated SNAT Ports**<br><br>Total number of SNAT ports allocated within time period |`AllocatedSnatPorts` |Count |Average |`FrontendIPAddress`, `BackendIPAddress`, `ProtocolType` |PT1M |No| +|**Byte Count**<br><br>Total number of Bytes transmitted within time period |`ByteCount` |Bytes |Total |`FrontendIPAddress`, `FrontendPort`, `Direction`|PT1M |Yes| +|**Health Probe Status**<br><br>Average Load Balancer health probe status per time duration |`DipAvailability` |Count |Average |`ProtocolType`, `BackendPort`, `FrontendIPAddress`, `FrontendPort`, `BackendIPAddress`|PT1M |Yes| +|**Health Probe Status**<br><br>Azure Cross-region Load Balancer backend health and status per time duration |`GlobalBackendAvailability` |Count |Average |`FrontendIPAddress`, `FrontendPort`, `BackendIPAddress`, `ProtocolType` |PT1M |Yes| +|**Packet Count**<br><br>Total number of Packets transmitted within time period |`PacketCount` |Count |Total |`FrontendIPAddress`, `FrontendPort`, `Direction`|PT1M |Yes| +|**SNAT Connection Count**<br><br>Total number of new SNAT connections created within time period |`SnatConnectionCount` |Count |Total |`FrontendIPAddress`, `BackendIPAddress`, `ConnectionState`|PT1M |Yes| +|**SYN Count**<br><br>Total number of SYN Packets transmitted within time period |`SYNCount` |Count |Total |`FrontendIPAddress`, `FrontendPort`, `Direction`|PT1M |Yes| +|**Used SNAT Ports**<br><br>Total number of SNAT ports used within time period |`UsedSnatPorts` |Count |Average |`FrontendIPAddress`, `BackendIPAddress`, `ProtocolType` |PT1M |No| +|**Data Path Availability**<br><br>Average Load Balancer data path availability per time duration |`VipAvailability` |Count |Average |`FrontendIPAddress`, `FrontendPort`|PT1M |Yes| -## Metrics +### Load balancer metrics -### Load balancer metrics +This table includes additional information about metrics from the Microsoft.Network/loadBalancers table: -| 
**Metric** | **Resource type** | **Description** | **Recommended aggregation** | -| | - | -- | -- | -| Data path availability | Public and internal load balancer | Standard Load Balancer continuously exercises the data path from within a region to the load balancer front end, all the way to the SDN stack that supports your VM. As long as healthy instances remain, the measurement follows the same path as your application's load-balanced traffic. The data path that your customers use is also validated. The measurement is invisible to your application and doesn't interfere with other operations. | Average | -| Health probe status | Public and internal load balancer | Standard Load Balancer uses a distributed health-probing service that monitors your application endpoint's health according to your configuration settings. This metric provides an aggregate or per-endpoint filtered view of each instance endpoint in the load balancer pool. You can see how Load Balancer views the health of your application, as indicated by your health probe configuration. | Average | -| SYN (synchronize) count | Public and internal load balancer | Standard Load Balancer doesn't terminate Transmission Control Protocol (TCP) connections or interact with TCP or User Datagram Protocol (UDP) flows. Flows and their handshakes are always between the source and the VM instance. To better troubleshoot your TCP protocol scenarios, you can make use of SYN packet counters to understand how many TCP connection attempts are made. The metric reports the number of TCP SYN packets that were received. | Average | -| SNAT connection count | Public load balancer | Standard Load Balancer reports the number of outbound flows that are masqueraded to the Public IP address front end. Source network address translation (SNAT) ports are an exhaustible resource. This metric can give an indication of how heavily your application is relying on SNAT for outbound originated flows. 
Counters for successful and failed outbound SNAT flows are reported and can be used to troubleshoot and understand the health of your outbound flows. | Sum | -| Allocated SNAT ports | Public load balancer | Standard Load Balancer reports the number of SNAT ports allocated per backend instance. | Average | -| Used SNAT ports | Public load balancer | Standard Load Balancer reports the number of SNAT ports that are utilized per backend instance. | Average | -| Byte count | Public and internal load balancer | Standard Load Balancer reports the data processed per front end. You may notice that the bytes aren't distributed equally across the backend instances. This is expected as Azure's Load Balancer algorithm is based on flows. | Sum | -| Packet count | Public and internal load balancer | Standard Load Balancer reports the packets processed per front end. | Sum | +| Metric | Resource type | Description | +|: |:- |:-- | +| Allocated SNAT ports | Public load balancer | Standard Load Balancer reports the number of SNAT ports allocated per backend instance. | +| Byte count | Public and internal load balancer | Standard Load Balancer reports the data processed per front end. You might notice that the bytes aren't distributed equally across the backend instances. This behavior is expected as Azure's Load Balancer algorithm is based on flows. | +| Health probe status | Public and internal load balancer | Standard Load Balancer uses a distributed health-probing service that monitors your application endpoint's health according to your configuration settings. This metric provides an aggregate or per-endpoint filtered view of each instance endpoint in the load balancer pool. You can see how Load Balancer views the health of your application, as indicated by your health probe configuration. | +| SNAT connection count | Public load balancer | Standard Load Balancer reports the number of outbound flows that are masqueraded to the Public IP address front end. 
Source network address translation (SNAT) ports are an exhaustible resource. This metric can give an indication of how heavily your application is relying on SNAT for outbound originated flows. Counters for successful and failed outbound SNAT flows are reported and can be used to troubleshoot and understand the health of your outbound flows. | +| SYN count (synchronize) | Public and internal load balancer | Standard Load Balancer doesn't terminate Transmission Control Protocol (TCP) connections or interact with TCP or User Datagram Protocol (UDP) flows. Flows and their handshakes are always between the source and the virtual machine instance. To better troubleshoot your TCP protocol scenarios, you can make use of SYN packet counters to understand how many TCP connection attempts are made. The metric reports the number of TCP SYN packets that were received. | +| Used SNAT ports | Public load balancer | Standard Load Balancer reports the number of SNAT ports that are utilized per backend instance. | +| Data path availability | Public and internal load balancer | Standard Load Balancer continuously exercises the data path from within a region to the load balancer front end, all the way to the SDN stack that supports your virtual machine. As long as healthy instances remain, the measurement follows the same path as your application's load-balanced traffic. The data path that your customers use is also validated. The measurement is invisible to your application and doesn't interfere with other operations. | ### Global load balancer metrics -| **Metric** | **Resource type** | **Description** | **Recommended aggregation** | -| | - | -- | -- | -| Data path availability | Public global load balancer| Global load balancer continuously exercises the data path from within a region to the load balancer front end, all the way to the SDN stack that supports your VM. 
As long as healthy instances remain, the measurement follows the same path as your application's load-balanced traffic. The data path that your customers use is also validated. The measurement is invisible to your application and doesn't interfere with other operations. | Average | -| Health probe status | Public global load balancer | Global load balancer uses a distributed health-probing service that monitors your application endpoint's health according to your configuration settings. This metric provides an aggregate or per-endpoint filtered view of each instance regional load balancer in the global load balancer's backend pool. You can see how global load balancer views the health of your application, as indicated by your health probe configuration. | Average | +This table includes additional information about global metrics from the Microsoft.Network/loadBalancers table: -For more information, see a list of [all platform metrics supported in Azure Monitor for load balancer](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkloadbalancers). +| Metric | Resource type | Description | +|: |:- |:-- | +| Health probe status | Public global load balancer | Global load balancer uses a distributed health-probing service that monitors your application endpoint's health according to your configuration settings. This metric provides an aggregate or per-endpoint filtered view of each instance regional load balancer in the global load balancer's backend pool. You can see how global load balancer views the health of your application, as indicated by your health probe configuration. | +| Data path availability | Public global load balancer| Global load balancer continuously exercises the data path from within a region to the load balancer front end, all the way to the SDN stack that supports your virtual machine. As long as healthy instances remain, the measurement follows the same path as your application's load-balanced traffic. 
The data path that your customers use is also validated. The measurement is invisible to your application and doesn't interfere with other operations. | -## Metric dimensions +> [!NOTE] +> Bandwidth-related metrics such as SYN packet, byte count, and packet count don't capture any traffic to an internal load balancer by using a UDR, such as from an NVA or firewall. +> +> Max and min aggregations are not available for the SYN count, packet count, SNAT connection count, and byte count metrics. +> Count aggregation is not recommended for Data path availability and health probe status. Use the average aggregation instead to best represent health data. -For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics). -Load Balancer has the following **dimensions** associated with its metrics. -| **Dimension Name** | **Description** | -| -- | -- | -| Frontend IP | The frontend IP address of one or more relevant load balancing rules | -| Frontend Port | The frontend port of one or more relevant load balancing rules | -| Backend IP | The backend IP address of one or more relevant load balancing rules | -| Backend Port | The backend port of one or more relevant load balancing rules | -| Protocol Type | The protocol of the relevant load balancing rule. The protocol can be TCP or UDP | -| Direction | The direction traffic is flowing. This can be inbound or outbound. | -| Connection state | The state of SNAT connection. The state can be pending, successful, or failed | +| Dimension | Name | Description | +|:-|:--|:| +| BackendIPAddress | Backend IP | The backend IP address of one or more relevant load balancing rules | +| BackendPort | Backend Port | The backend port of one or more relevant load balancing rules | +| ConnectionState | Connection state | The state of SNAT connection. 
The state can be pending, successful, or failed | +| Direction | Direction | The direction traffic is flowing. This value can be inbound or outbound. | +| FrontendIPAddress | Frontend IP | The frontend IP address of one or more relevant load balancing rules | +| FrontendPort | Frontend Port | The frontend port of one or more relevant load balancing rules | +| ProtocolType | Protocol Type | The protocol of the relevant load balancing rule. The protocol can be TCP or UDP | -## Resource logs -Azure Load Balancer supports Azure Activity logs and the LoadBalancerHealthEvent log category. +### Supported resource logs for Microsoft.Network/loadBalancers -### LoadBalancerHealthEvent logs -For more information on the LoadBalancerHealthEvent log category, see [Azure Load Balancer health event logs](load-balancer-health-event-logs.md). -### Azure Activity logs +### Load Balancer Microsoft.Network/LoadBalancers -The following table lists the **operations** related to Load Balancer that may be created in the Activity log. +- [ALBHealthEvent](/azure/azure-monitor/reference/tables/albhealthevent#columns) +- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns) -| **Operation** | **Description** | -| | | -| Microsoft.Network/loadBalancers/read | Gets a load balancer definition | -| Microsoft.Network/loadBalancers/write | Creates a load balancer or updates an existing load balancer | -| Microsoft.Network/loadBalancers/delete | Deletes a load balancer | -| Microsoft.Network/loadBalancers/backendAddressPools/queryInboundNatRulePortMapping/action | Query inbound Nat rule port mapping. 
| -| Microsoft.Network/loadBalancers/backendAddressPools/read | Gets a load balancer backend address pool definition | -| Microsoft.Network/loadBalancers/backendAddressPools/write | Creates a load balancer backend address pool or updates an existing load balancer backend address pool | -| Microsoft.Network/loadBalancers/backendAddressPools/delete | Deletes a load balancer backend address pool | -| Microsoft.Network/loadBalancers/backendAddressPools/join/action | Joins a load balancer backend address pool. Not Alertable. | -| Microsoft.Network/loadBalancers/backendAddressPools/backendPoolAddresses/read | Lists the backend addresses of the Load Balancer backend address pool | -| Microsoft.Network/loadBalancers/frontendIPConfigurations/read | Gets a load balancer frontend IP configuration definition | -| Microsoft.Network/loadBalancers/frontendIPConfigurations/join/action | Joins a Load Balancer Frontend IP Configuration. Not alertable. | -| Microsoft.Network/loadBalancers/inboundNatPools/read | Gets a load balancer inbound nat pool definition | -| Microsoft.Network/loadBalancers/inboundNatPools/join/action | Joins a load balancer inbound NAT pool. Not alertable. | -| Microsoft.Network/loadBalancers/inboundNatRules/read | Gets a load balancer inbound nat rule definition | -| Microsoft.Network/loadBalancers/inboundNatRules/write | Creates a load balancer inbound nat rule or updates an existing load balancer inbound nat rule | -| Microsoft.Network/loadBalancers/inboundNatRules/delete | Deletes a load balancer inbound nat rule | -| Microsoft.Network/loadBalancers/inboundNatRules/join/action | Joins a load balancer inbound nat rule. Not Alertable. 
| -| Microsoft.Network/loadBalancers/loadBalancingRules/read | Gets a load balancer load balancing rule definition | -| Microsoft.Network/loadBalancers/networkInterfaces/read | Gets references to all the network interfaces under a load balancer | -| Microsoft.Network/loadBalancers/outboundRules/read | Gets a load balancer outbound rule definition | -| Microsoft.Network/loadBalancers/probes/read | Gets a load balancer probe | -| Microsoft.Network/loadBalancers/probes/join/action | Allows using probes of a load balancer. For example, with this permission healthProbe property of virtual machine scale set can reference the probe. Not alertable. | -| Microsoft.Network/loadBalancers/virtualMachines/read | Gets references to all the virtual machines under a load balancer | -For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md). +- [Microsoft.Network resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftnetwork) -## See Also +## Related content -- See [Monitoring Azure Load Balancer](./monitor-load-balancer.md) for a description of monitoring Azure Load Balancer.+- See [Monitor Azure Load Balancer](monitor-load-balancer.md) for a description of monitoring Load Balancer. +- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. |
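The `UsedSnatPorts` and `AllocatedSnatPorts` metrics described in the tables above are most useful together: their ratio shows how close a backend instance is to SNAT exhaustion. A hypothetical local calculation of that ratio, using illustrative values rather than real Azure Monitor output:

```shell
#!/bin/bash
# Sketch: derive SNAT port utilization from the UsedSnatPorts and
# AllocatedSnatPorts metric values described above. The values here are
# illustrative placeholders, not retrieved from Azure Monitor.
used=800
allocated=1024

# Both metrics use the Average aggregation per backend instance, so a
# simple ratio gives the utilization percentage for that instance.
utilization=$(awk -v u="$used" -v a="$allocated" 'BEGIN { printf "%.1f", u / a * 100 }')
echo "SNAT utilization: ${utilization}%"
```

Tracking this ratio over time, split by the `BackendIPAddress` dimension, indicates which instances are most at risk of failing outbound connections.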
load-balancer | Monitor Load Balancer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer.md | Title: Monitoring Azure Load Balancer -description: Start here to learn how to monitor load balancer. + Title: Monitor Azure Load Balancer +description: Start here to learn how to monitor Azure Load Balancer by using Azure Monitor and Azure Monitor Insights. Last updated : 08/21/2024++ - Previously updated : 05/24/2024- -# Monitoring load balancer +# Monitor Azure Load Balancer -When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. -This article describes the monitoring data generated by Load Balancer. Load Balancer uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md). +Load Balancer provides other monitoring data through: -## Load balancer insights +- [Health Probes](./load-balancer-custom-probe-overview.md) +- [Resource health status](./load-balancer-standard-diagnostics.md#resource-health-status) +- [REST API](load-balancer-query-metrics-rest-api.md) -Some services in Azure have a special focused prebuilt monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights". 
Load Balancer insights provide: -* Functional dependency view -* Metrics dashboard -* Overview tab -* Frontend and Backend Availability tab -* Data Throughput tab -* Flow Distribution -* Connection Monitors -* Metric Definitions +- Functional dependency view +- Metrics dashboard +- Overview tab +- Frontend and Backend Availability tab +- Data Throughput tab +- Flow Distribution +- Connection Monitors +- Metric Definitions For more information on Load Balancer insights, see [Using Insights to monitor and configure your Azure Load Balancer](./load-balancer-insights.md). -## Monitoring data -Load Balancer collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data). +For more information about the resource types for Load Balancer, see [Azure Load Balancer monitoring data reference](monitor-load-balancer-reference.md). -See [Monitoring Load Balancer data reference](monitor-load-balancer-reference.md) for detailed information on the metrics and logs metrics created by Load Balancer. -Load Balancer provides other monitoring data through: -* [Health Probes](./load-balancer-custom-probe-overview.md) -* [Resource health status](./load-balancer-standard-diagnostics.md#resource-health-status) -* [REST API](./load-balancer-query-metrics-rest-api.md) +You can analyze metrics for Load Balancer with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. -## Collection and routing +For a list of available metrics for Load Balancer, see [Azure Load Balancer monitoring data reference](monitor-load-balancer-reference.md#metrics). 
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting. -Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. +For the available resource log categories, their associated Log Analytics tables, and the log schemas for Load Balancer, see [Azure Load Balancer monitoring data reference](monitor-load-balancer-reference.md#resource-logs). ## Creating a diagnostic setting -You can create a diagnostic setting with the Azure portal, Azure PowerShell, or the Azure CLI. -+Resource logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. You can create a diagnostic setting with the Azure portal, Azure PowerShell, or the Azure CLI. -For general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md). +To use the Azure portal and for general guidance, see [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md). To use PowerShell or the Azure CLI, see the following sections. When you create a diagnostic setting, you specify which categories of logs to collect. The category for Load Balancer is **AllMetrics**. -### Portal --1. Sign in to the [Azure portal](https://portal.azure.com). --2. In the search box at the top of the portal, enter **Load balancer**. --3. Select **Load balancers** in the search results. --4. Select your load balancer. For this example, **myLoadBalancer** is used. --5. In the **Monitoring** section of **myLoadBalancer**, select **Diagnostic settings**. --6. In **Diagnostic settings**, select **+ Add diagnostic setting**. --7. 
Enter or select the following information in **Diagnostic setting**: -- | **Setting** | **Value** | - | - | -- | - | Diagnostic setting name | Enter a name for the diagnostic setting. | - | **Category details** | | - | metric | Select **AllMetrics**. | --8. Select the **Destination details**. Some of the destinations options are: - - | **Option** | **Description** | - | - | -- | - | **Send to Log Analytics** | Select the **Subscription** and **Log Analytics workspace**. | - | **Archive to a storage account** | Select the **Subscription** and the **Storage Account**. | - | **Stream to an event hub** | Select the **Subscription**, **Event hub namespace**, **Event hub name (optional)**, and **Event hub policy name**. | - -9. Select **Save**. - ### PowerShell Sign in to Azure PowerShell: Set-AzDiagnosticSetting ` -Enabled $true ` -MetricCategory 'AllMetrics' ```+ ### Azure CLI Sign in to Azure CLI: az monitor diagnostic-settings create \ --event-hub-rule /subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.EventHub/namespaces/<your-event-hub-namespace>/authorizationrules/RootManageSharedAccessKey ``` -The metrics and logs you can collect are discussed in the following sections. --## Analyzing metrics -You can analyze metrics for Load Balancer with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. -For a list of the platform metrics collected for Load Balancer, see [Monitoring Load Balancer data reference metrics](monitor-load-balancer-reference.md#metrics). -For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md). 
+## Analyzing Load Balancer Traffic with NSG flow logs -## Analyzing logs +[NSG flow logs](../network-watcher/nsg-flow-logs-overview.md) is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a network security group. Flow data is sent to Azure Storage from where you can access it and export it to a visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS). -Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. +NSG flow logs can be used to analyze traffic flowing through the load balancer. -The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics. +> [!NOTE] +> NSG flow logs don't contain the load balancer's frontend IP address. To analyze the traffic flowing into a load balancer, the NSG flow logs would need to be filtered by the private IP addresses of the load balancer's backend pool members. -For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Load Balancer data reference](./monitor-load-balancer-reference.md) -## Analyzing Load Balancer Traffic with NSG Flow Logs -[NSG flow logs](../network-watcher/nsg-flow-logs-overview.md) is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a network security group. Flow data is sent to Azure Storage from where you can access it and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice. -NSG flow logs can be used to analyze traffic flowing through the load balancer. ->[!Note] ->NSG flow logs doesn't contain the load balancers frontend IP address. 
To analyze the traffic flowing into a load balancer, the NSG flow logs would need to be filtered by the private IP addresses of the load balancer's backend pool members. - +### Load Balancer alert rules -## Alerts +The following table lists some suggested alert rules for Load Balancer. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Azure Load Balancer monitoring data reference](monitor-load-balancer-reference.md). -Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks. --If you're creating or running an application that runs on Load Balancer, [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) offers other types of alerts. ---The following table lists common and recommended alert rules for Load Balancer. --| **Alert type** | **Condition** | **Description** | |:|:|:|-| Load balancing rule unavailable due to unavailable VMs | If data path availability split by Frontend IP address and Frontend Port (all known and future values) is equal to zero, or in a second independent alert, if health probe status is equal to zero, then fire alert(s) | These alerts help determine if the data path availability for any configured load balancing rules isn't servicing traffic due to all VMs in the associated backend pool being probed down by the configured health probe. Review load balancer [troubleshooting guide](load-balancer-troubleshoot.md) to investigate the potential root cause. 
| +| Load balancing rule unavailable due to unavailable VMs | If data path availability split by Frontend IP address and Frontend Port (all known and future values) is equal to zero, or in a second independent alert, if health probe status is equal to zero, then fire alerts | These alerts help determine if the data path availability for any configured load balancing rules isn't servicing traffic due to all VMs in the associated backend pool being probed down by the configured health probe. Review load balancer [troubleshooting guide](load-balancer-troubleshoot.md) to investigate the potential root cause. | | VM availability significantly low | If health probe status split by Backend IP and Backend Port is equal to user defined probed-up percentage of total pool size (that is, 25% are probed up), then fire alert | This alert determines if there are less than needed VMs available to serve traffic | | Outbound connections to internet endpoint failing | If SNAT Connection Count filtered to Connection State = Failed is greater than zero, then fire alert | This alert fires when SNAT ports are exhausted and VMs are failing to initiate outbound connections. | | Approaching SNAT exhaustion | If Used SNAT Ports is greater than user defined number, then fire alert | This alert requires a static outbound configuration where the same number of ports are always allocated. It then fires when a percentage of the allocated ports is used. | -## Next steps ++## Related content -- See [Monitoring Load Balancer data reference](monitor-load-balancer-reference.md) for a reference of the metrics, logs, and other important values created by load balancer.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.+- See [Azure Load Balancer monitoring data reference](monitor-load-balancer-reference.md) for a reference of the metrics, logs, and other important values created for Load Balancer. 
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources. |
machine-learning | Concept Automl Forecasting At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-at-scale.md | show_latex: true This article is about training forecasting models on large quantities of historical data. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article. -Time series data can be large due to the number of series in the data, the number of historical observations, or both. **Many models** and hierarchical time series, or **HTS**, are scaling solutions for the former scenario, where the data consists of a large number of time series. In these cases, it can be beneficial for model accuracy and scalability to partition the data into groups and train a large number of independent models in parallel on the groups. Conversely, there are scenarios where one or a small number of high-capacity models is better. **Distributed DNN training** targets this case. We review concepts around these scenarios in the remainder of the article. +Time series data can be large due to the number of series in the data, the number of historical observations, or both. **Many models** and hierarchical time series, or **HTS**, are scaling solutions for the former scenario, where the data consists of a large number of time series. In these cases, it can be beneficial for model accuracy and scalability to partition the data into groups and train a large number of independent models in parallel on the groups. Conversely, there are scenarios where one or a few high-capacity models are better. **Distributed DNN training** targets this case. We review concepts around these scenarios in the remainder of the article. ## Many models |
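The partition-then-train idea behind many models can be sketched in plain Python. This is an illustrative stand-in only; the column names and the per-group "model" are hypothetical, not the AutoML implementation, which trains real models in parallel on separate compute nodes.

```python
from collections import defaultdict

# Hypothetical rows for two time series, identified by a "store" column.
rows = [
    {"store": "A", "date": "2023-05-01", "sales": 10},
    {"store": "A", "date": "2023-05-02", "sales": 12},
    {"store": "B", "date": "2023-05-01", "sales": 7},
    {"store": "B", "date": "2023-05-02", "sales": 9},
]

# Partition the data into groups, one per series identifier.
partitions = defaultdict(list)
for row in rows:
    partitions[row["store"]].append(row)

# Stand-in for a per-partition training call; a real many-models setup
# would fit an independent forecasting model per partition, in parallel.
def train_model(series_rows):
    mean_sales = sum(r["sales"] for r in series_rows) / len(series_rows)
    return {"forecast": mean_sales}

models = {key: train_model(series) for key, series in partitions.items()}
print(models)  # one independent "model" per series group
```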
machine-learning | Concept Automl Forecasting Evaluation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-evaluation.md | AutoML supports this inference scenario, but **you need to provide the context d Here, known values of the target and features are provided for 2023-05-01 through 2023-05-03. Missing target values starting at 2023-05-04 indicate that the inference period starts at that date. -AutoML uses the new context data to update lag and other lookback features, and also to update models like ARIMA that keep an internal state. This operation _does not_ update or re-fit model parameters. +AutoML uses the new context data to update lag and other lookback features, and also to update models like ARIMA that keep an internal state. This operation _doesn't_ update or re-fit model parameters. ## Model evaluation -Evaluation is the process of generating predictions on a test set held-out from the training data and computing metrics from these predictions that guide model deployment decisions. Accordingly, there's an inference mode specifically suited for model evaluation - a rolling forecast. We review it in the following sub-section. +Evaluation is the process of generating predictions on a test set held-out from the training data and computing metrics from these predictions that guide model deployment decisions. Accordingly, there's an inference mode suited for model evaluation - a rolling forecast. We review it in the following subsection. ### Rolling forecast |
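To make the rolling forecast idea concrete, the following sketch generates rolling-origin evaluation windows over a held-out test set: the origin advances through the test period in steps of the forecast horizon, so each test point is predicted from a context that ends just before it. The function name and window representation are invented for illustration and are not part of the AutoML API.

```python
def rolling_windows(test_size, horizon):
    """Return (origin, end) index pairs covering the test period."""
    windows = []
    origin = 0
    while origin < test_size:
        # Each window forecasts up to `horizon` steps past the origin,
        # clipped at the end of the test set.
        windows.append((origin, min(origin + horizon, test_size)))
        origin += horizon
    return windows

# A 10-point test set evaluated with a 3-step forecast horizon.
print(rolling_windows(10, 3))  # -> [(0, 3), (3, 6), (6, 9), (9, 10)]
```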
machine-learning | Concept Compute Target | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md | While Azure Machine Learning supports these VM series, they might not be availab :::moniker-end :::moniker range="azureml-api-2" > [!NOTE]-> Azure Machine Learning doesn't support all VM sizes that Azure Compute supports. To list the available VM sizes, use one of the following methods: +> Azure Machine Learning doesn't support all VM sizes that Azure Compute supports. To list the available VM sizes supported by specific compute VM types, use one of the following methods: > * [REST API](/rest/api/azureml/virtual-machine-sizes/list) > * The [Azure CLI extension 2.0 for machine learning](how-to-configure-cli.md) command, [az ml compute list-sizes](/cli/azure/ml/compute#az-ml-compute-list-sizes). :::moniker-end |
machine-learning | How To Access Data Batch Endpoints Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md | Azure Machine Learning data assets (formerly known as datasets) are supported as } ``` - -- The data assets ID looks like `/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/data/<data-asset>/versions/<version>`. You can also use the `azureml:/<dataset_name>@latest` format to specify the input. + The data assets ID looks like `/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/data/<data-asset>/versions/<version>`. You can also use the `azureml:<dataset_name>@latest` format to specify the input. 1. Run the endpoint: |
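For illustration, the data-asset ID format and the short `@latest` form quoted above can be assembled like this; every placeholder value (subscription, workspace, asset name) is made up.

```python
# All values below are hypothetical placeholders, not real resources.
subscription = "00000000-0000-0000-0000-000000000000"
resource_group = "my-rg"
workspace = "my-ws"
data_asset = "heart-dataset"
version = "1"

# Fully qualified data-asset ID, as quoted in the article.
full_id = (
    f"/subscriptions/{subscription}/resourcegroups/{resource_group}"
    f"/providers/Microsoft.MachineLearningServices/workspaces/{workspace}"
    f"/data/{data_asset}/versions/{version}"
)

# Shorthand that pins the input to the latest version of the asset.
short_id = f"azureml:{data_asset}@latest"
print(short_id)  # -> azureml:heart-dataset@latest
```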
machine-learning | How To Access Data Interactive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md | path_on_datastore = '<path>' uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}'. ``` -These Datastore URIs are a known implementation of the [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/index.html) (`fsspec`): a unified pythonic interface to local, remote, and embedded file systems and bytes storage. First, pip install the `azureml-fsspec` package and its dependency `azureml-dataprep` package. Then, you can use the Azure Machine Learning Datastore `fsspec` implementation. +These Datastore URIs are a known implementation of the [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/index.html) (`fsspec`): a unified pythonic interface to local, remote, and embedded file systems and bytes storage. First, use pip to install the `azureml-fsspec` package and its dependency `azureml-dataprep` package. Then, you can use the Azure Machine Learning Datastore `fsspec` implementation. The Azure Machine Learning Datastore `fsspec` implementation automatically handles the credential/identity passthrough that the Azure Machine Learning datastore uses. You can avoid both account key exposure in your scripts, and extra sign-in procedures, on a compute instance. df.head() > 1. Select **Data** from the left-hand menu, then select the **Datastores** tab. > 1. Select your datastore name, and then **Browse**. > 1. Find the file/folder you want to read into Pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. 
You can select the **Datastore URI** to copy into your notebook/script.-> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI."::: +> :::image type="content" source="media/how-to-access-data-interactive/datastore-uri-copy.png" alt-text="Screenshot highlighting the copy of the datastore URI."::: You can also instantiate an Azure Machine Learning filesystem, to handle filesystem-like commands - for example `ls`, `glob`, `exists`, `open`. - The `ls()` method lists files in a specific directory. You can use `ls()`, `ls('.')`, or `ls('<folder_level_1>/<folder_level_2>')` to list files. We support both '.' and '..' in relative paths. df.head() #### Read a folder of parquet files into Pandas As part of an ETL process, Parquet files are typically written to a folder, which can then emit files relevant to the ETL such as progress, commits, etc. This example shows files created from an ETL process (files beginning with `_`) which then produce a parquet file of data. In these scenarios, you only read the parquet files in the folder, and ignore the ETL process files. This code sample shows how glob patterns can read only parquet files in a folder: df.head() > 1. Select **Data** from the left-hand menu, then select the **Datastores** tab. > 1. Select your datastore name, and then **Browse**. > 1. Find the file/folder you want to read into Pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.-> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI."::: +> :::image type="content" source="media/how-to-access-data-interactive/datastore-uri-copy.png" alt-text="Screenshot highlighting the copy of the datastore URI."::: ##### [HTTP Server](#tab/http) ```python df.head() > 1. 
Select **Data** from the left-hand menu, then select the **Datastores** tab. > 1. Select your datastore name, and then **Browse**. > 1. Find the file/folder you want to read into Pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.-> :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI."::: +> :::image type="content" source="media/how-to-access-data-interactive/datastore-uri-copy.png" alt-text="Screenshot highlighting the copy of the datastore URI."::: ##### [HTTP Server](#tab/http) |
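The glob-based filtering described above (read only `*.parquet` files, skip ETL bookkeeping files that start with `_`) can be mimicked on a plain file listing. The listing below is invented for illustration; real code would pass a glob pattern to the Azure Machine Learning filesystem's `glob` method instead.

```python
import fnmatch

# Hypothetical folder listing produced by an ETL process.
listing = [
    "output/_SUCCESS",
    "output/_temporary.parquet",
    "output/part-0000.parquet",
    "output/part-0001.parquet",
]

# Keep only *.parquet data files, ignoring files whose name starts with "_".
parquet_files = [
    p for p in listing
    if fnmatch.fnmatch(p, "output/*.parquet")
    and not p.split("/")[-1].startswith("_")
]
print(parquet_files)  # -> ['output/part-0000.parquet', 'output/part-0001.parquet']
```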
machine-learning | How To Convert Custom Model To Mlflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-convert-custom-model-to-mlflow.md | Title: Convert custom models to MLflow -description: Convert custom models to MLflow model format for no code deployment with endpoints. +description: Convert custom models to MLflow model format for no code deployment with endpoints in Azure Machine Learning. Previously updated : 04/15/2022 Last updated : 08/16/2024 +#customer intent: As a data scientist, I want to convert a model to an MLflow format to use the benefits of MLflow. # Convert custom ML models to MLflow formatted models -In this article, learn how to convert your custom ML model into MLflow format. [MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. In some cases, you might use a machine learning framework without its built-in MLflow model flavor support. Due to this lack of built-in MLflow model flavor, you cannot log or register the model with MLflow model fluent APIs. To resolve this, you can convert your model to an MLflow format where you can leverage the following benefits of Azure Machine Learning and MLflow models. +In this article, learn how to convert your custom ML model into MLflow format. [MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. In some cases, you might use a machine learning framework without its built-in MLflow model flavor support. Due to this lack of built-in MLflow model flavor, you can't log or register the model with MLflow model fluent APIs. To resolve this issue, you can convert your model to an MLflow format where you can apply the following benefits of Azure Machine Learning and MLflow models. 
-With Azure Machine Learning, MLflow models get the added benefits of, +With Azure Machine Learning, MLflow models get the added benefits of: -* No code deployment -* Portability as an open source standard format -* Ability to deploy both locally and on cloud +- No code deployment +- Portability as an open source standard format +- Ability to deploy both locally and on cloud -MLflow provides support for a variety of [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors) (scikit-learn, Keras, Pytorch, and more); however, it might not cover every use case. For example, you may want to create an MLflow model with a framework that MLflow does not natively support or you may want to change the way your model does pre-processing or post-processing when running jobs. To know more about MLflow models read [From artifacts to models in MLflow](concept-mlflow-models.md). +MLflow provides support for various [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors), such as scikit-learn, Keras, and Pytorch. MLflow might not cover every use case. For example, you might want to create an MLflow model with a framework that MLflow doesn't natively support. You might want to change the way your model does preprocessing or post-processing when running jobs. To learn more about MLflow models, see [From artifacts to models in MLflow](concept-mlflow-models.md). -If you didn't train your model with MLFlow and want to use Azure Machine Learning's MLflow no-code deployment offering, you need to convert your custom model to MLFLow. Learn more about [custom Python models and MLflow](https://mlflow.org/docs/latest/models.html#custom-python-models). +If you didn't train your model with MLFlow and want to use Azure Machine Learning's MLflow no-code deployment offering, you need to convert your custom model to MLFLow. 
For more information, see [Custom Python Models](https://mlflow.org/docs/latest/models.html#custom-python-models). ## Prerequisites- -Only the mlflow package installed is needed to convert your custom models to an MLflow format. ++- Install the `mlflow` package ## Create a Python wrapper for your model -Before you can convert your model to an MLflow supported format, you need to first create a Python wrapper for your model. -The following code demonstrates how to create a Python wrapper for an `sklearn` model. +Before you can convert your model to an MLflow supported format, you need to create a Python wrapper for your model. The following code demonstrates how to create a Python wrapper for an `sklearn` model. ```python class SKLearnWrapper(mlflow.pyfunc.PythonModel): return self.sklearn_model.predict(data) ``` -## Create a Conda environment +## Create a Conda environment -Next, you need to create Conda environment for the new MLflow Model that contains all necessary dependencies. If not indicated, the environment is inferred from the current installation. If not, it can be specified. +Next, create a Conda environment for the new MLflow Model that contains all necessary dependencies. If you don't specify an environment, it's inferred from the current installation. ```python conda_env = { } ``` -## Load the MLFlow formatted model and test predictions +## Load the MLflow formatted model and test predictions -Once your environment is ready, you can pass the SKlearnWrapper, the Conda environment, and your newly created artifacts dictionary to the mlflow.pyfunc.save_model() method. Doing so saves the model to your disk. +After your environment is ready, pass the `SKlearnWrapper`, the Conda environment, and your newly created artifacts dictionary to the `mlflow.pyfunc.save_model()` method. Doing so saves the model to your disk. 
```python mlflow_pyfunc_model_path = "sklearn_mlflow_pyfunc_custom" mlflow.pyfunc.save_model(path=mlflow_pyfunc_model_path, python_model=SKLearnWrapper(), conda_env=conda_env, artifacts=artifacts)- ``` -To ensure your newly saved MLflow formatted model didn't change during the save, you can load your model and print out a test prediction to compare your original model. +To ensure that your newly saved MLflow formatted model didn't change during the save, load your model and print a test prediction to compare with your original model. -The following code prints a test prediction from the mlflow formatted model and a test prediction from the sklearn model that's saved to your disk for comparison. +The following code prints a test prediction from the MLflow formatted model and a test prediction from the sklearn model that was saved to your disk, so you can compare them. ```python loaded_model = mlflow.pyfunc.load_model(mlflow_pyfunc_model_path) print(result) ## Register the MLflow formatted model -Once you've confirmed that your model saved correctly, you can create a test run, so you can register and save your MLflow formatted model to your model registry. +After you confirm that your model saved correctly, you can create a test run. Register and save your MLflow formatted model to your model registry. ```python mlflow.end_run() ``` > [!IMPORTANT]-> In some cases, you might use a machine learning framework without its built-in MLflow model flavor support. For instance, the `vaderSentiment` library is a standard natural language processing (NLP) library used for sentiment analysis. Since it lacks a built-in MLflow model flavor, you cannot log or register the model with MLflow model fluent APIs. See an example on [how to save, log and register a model that doesn't have a supported built-in MLflow model flavor](https://mlflow.org/docs/latest/model-registry.html#registering-an-unsupported-machine-learning-model). 
+> In some cases, you might use a machine learning framework without its built-in MLflow model flavor support. For instance, the `vaderSentiment` library is a standard natural language processing (NLP) library used for sentiment analysis. Since it lacks a built-in MLflow model flavor, you cannot log or register the model with MLflow model fluent APIs. For an example on how to save, log and register a model that doesn't have a supported built-in MLflow model flavor, see [Registering an Unsupported Machine Learning Model](https://mlflow.org/docs/latest/model-registry.html#registering-an-unsupported-machine-learning-model). -## Next steps +## Related content -* [No-code deployment for Mlflow models](how-to-deploy-mlflow-models-online-endpoints.md) -* Learn more about [MLflow and Azure Machine Learning](concept-mlflow.md) +- [Deploy MLflow models to online endpoints](how-to-deploy-mlflow-models-online-endpoints.md) +- [MLflow and Azure Machine Learning](concept-mlflow.md) |
machine-learning | How To Create Compute Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md | description: Learn how to create an Azure Machine Learning compute instance. Use -+ Previously updated : 06/10/2024 Last updated : 08/21/2024+# customer intent: To create a compute instance in Azure Machine Learning for development and testing purposes. # Create an Azure Machine Learning compute instance Or use the following examples to create a compute instance with more options: [!notebook-python[](~/azureml-examples-main/sdk/python/resources/compute/compute.ipynb?name=ci_basic)] -For more information on the classes, methods, and parameters used in this example, see the following reference documents: +For more information on the classes, methods, and parameters for creating a compute instance, see the following reference documents: * [`AmlCompute` class](/python/api/azure-ai-ml/azure.ai.ml.entities.amlcompute)-* [`ComputeInstance` class](/python/api/azure-ai-ml/azure.ai.ml.entities.computeinstance) +* [`ComputeInstance` class](/python/api/azure-ai-ml/azure.ai.ml.entities.computeinstance). +* [`ComputeInstanceSshSettings` class](/python/api/azure-ai-ml/azure.ai.ml.entities.computeinstancesshsettings) # [Azure CLI](#tab/azure-cli) A compute instance is considered inactive if the below conditions are met: * No VS Code connections; you must close your VS Code connection for your compute instance to be considered inactive. Sessions are autoterminated if VS Code detects no activity for 3 hours. * No custom applications are running on the compute -A compute instance won't be considered idle if any custom application is running. There are also some basic bounds around inactivity time periods; compute instance must be inactive for a minimum of 15 mins and a maximum of three days. We also don't track VS Code SSH connections to determine activity. 
+A compute instance won't be considered idle if any custom application is running. To shut down a compute instance with a custom application automatically, set up a schedule or remove the custom application. There are also some basic bounds around inactivity time periods; a compute instance must be inactive for a minimum of 15 minutes and a maximum of three days. We also don't track VS Code SSH connections to determine activity. Also, if the idle shutdown settings are updated to an amount of time shorter than the current idle duration, the idle time clock is reset to 0. For example, if the compute instance has already been idle for 20 minutes, and the shutdown settings are updated to 15 minutes, the idle time clock is reset to 0. Access the custom applications that you set up in studio: > It might take a few minutes after setting up a custom application until you can access it via the links. The amount of time taken will depend on the size of the image used for your custom application. If you see a 502 error message when trying to access the application, wait for some time for the application to be set up and try again. > If the custom image is pulled from an Azure Container Registry, you'll need a **Contributor** role for the workspace. For information on assigning roles, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md). 
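The idle-clock reset rule described above can be modeled with a small sketch. The function is hypothetical, written purely to make the documented rule concrete; it is not part of any Azure ML API.

```python
def update_idle_clock(idle_minutes, new_timeout_minutes):
    """Return the idle-clock value after the idle-shutdown setting changes."""
    if new_timeout_minutes < idle_minutes:
        # The new timeout is shorter than the time already spent idle,
        # so the clock resets to 0 instead of shutting down immediately.
        return 0
    # Otherwise the elapsed idle time is kept and counts toward the timeout.
    return idle_minutes

# Instance idle for 20 minutes, setting lowered to 15 minutes: clock resets.
print(update_idle_clock(20, 15))  # -> 0
```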
-## Next steps +## Related content * [Manage an Azure Machine Learning compute instance](how-to-manage-compute-instance.md) * [Access the compute instance terminal](how-to-access-terminal.md) * [Create and manage files](how-to-manage-files.md) * [Update the compute instance to the latest VM image](concept-vulnerability-management.md#compute-instance)+* Use the compute instance in VS Code: + * [Tutorial: Model development on a cloud workstation](tutorial-cloud-workstation.md) + * [Work in VS Code remotely connected to a compute instance (preview)](how-to-work-in-vs-code-remote.md) |
machine-learning | How To Deploy Automl Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md | Title: Deploy an AutoML model with an online endpoint -description: Learn to deploy your AutoML model as a web service that's automatically managed by Azure. +description: Learn to use the Azure Machine Learning studio, SDK, and CLI to deploy your AutoML model as a web service that Azure automatically manages. Previously updated : 05/11/2022 Last updated : 08/19/2024 ms.devlang: azurecli ms.devlang: azurecli [!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)] -In this article, you'll learn how to deploy an AutoML-trained machine learning model to an online (real-time inference) endpoint. Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of developing a machine learning model. For more, see [What is automated machine learning (AutoML)?](concept-automated-ml.md). +In this article, you learn how to deploy an AutoML-trained machine learning model to an online real-time inference endpoint. Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of developing a machine learning model. For more information, see [What is automated machine learning (AutoML)?](concept-automated-ml.md) -In this article you'll know how to deploy AutoML trained machine learning model to online endpoints using: +In the following sections, you learn how to deploy an AutoML-trained machine learning model to online endpoints using: - Azure Machine Learning studio - Azure Machine Learning CLI v2 In this article you'll know how to deploy AutoML trained machine learning model ## Prerequisites -An AutoML-trained machine learning model. 
For more, see [Tutorial: Train a classification model with no-code AutoML in the Azure Machine Learning studio](tutorial-first-experiment-automated-ml.md) or [Tutorial: Forecast demand with automated machine learning](tutorial-automated-ml-forecast.md). +- An AutoML-trained machine learning model. For more information, see [Tutorial: Train a classification model with no-code AutoML](tutorial-first-experiment-automated-ml.md) or [Tutorial: Forecast demand with no-code automated machine learning](tutorial-automated-ml-forecast.md). ## Deploy from Azure Machine Learning studio and no code -Deploying an AutoML-trained model from the Automated ML page is a no-code experience. That is, you don't need to prepare a scoring script and environment, both are auto generated. +Deploying an AutoML-trained model from the Automated ML page is a no-code experience. That is, you don't need to prepare a scoring script and environment because both are autogenerated. -1. Go to the Automated ML page in the studio -1. Select your experiment and run -1. Choose the Models tab -1. Select the model you want to deploy -1. Once you select a model, the Deploy button will light up with a drop-down menu -1. Select *Deploy to real-time endpoint* option +1. In Azure Machine Learning studio, go to the **Automated ML** page. +1. Select your experiment and its run. +1. Choose the **Models + child jobs** tab. +1. Select the model that you want to deploy. +1. After you select a model, the **Deploy** button is available with a dropdown menu. +1. Select the **Real-time endpoint** option. 
- :::image type="content" source="media/how-to-deploy-automl-endpoint/deploy-button.png" lightbox="media/how-to-deploy-automl-endpoint/deploy-button.png" alt-text="Screenshot showing the Deploy button's drop-down menu"::: + :::image type="content" source="media/how-to-deploy-automl-endpoint/deploy-button.png" lightbox="media/how-to-deploy-automl-endpoint/deploy-button.png" alt-text="Screenshot showing the Deploy button's drop-down menu."::: - The system will generate the Model and Environment needed for the deployment. -- :::image type="content" source="media/how-to-deploy-automl-endpoint/model.png" lightbox="media/how-to-deploy-automl-endpoint/model.png" alt-text="Screenshot showing the generated Model"::: -- :::image type="content" source="media/how-to-deploy-automl-endpoint/environment.png" lightbox="media/how-to-deploy-automl-endpoint/environment.png" alt-text="Screenshot showing the generated Environment"::: --5. Complete the wizard to deploy the model to an online endpoint -- :::image type="content" source="media/how-to-deploy-automl-endpoint/complete-wizard.png" lightbox="media/how-to-deploy-automl-endpoint/complete-wizard.png" alt-text="Screenshot showing the review-and-create page"::: + The system generates the Model and Environment needed for the deployment. + :::image type="content" source="media/how-to-deploy-automl-endpoint/deploy-model.png" alt-text="Screenshot showing the deployment page where you can change values and then select Deploy."::: ## Deploy manually from the studio or command line -If you wish to have more control over the deployment, you can download the training artifacts and deploy them. +If you want to have more control over the deployment, you can download the training artifacts and deploy them. ++To download the components you need for deployment: -To download the components you'll need for deployment: +1. Go to your Automated ML experiment run in your machine learning workspace. +1. Choose the **Models + child jobs** tab. 
+1. Select the model you want to use. After you select a model, the **Download** button is enabled. +1. Choose **Download**. -1. Go to your Automated ML experiment and run in your machine learning workspace -1. Choose the Models tab -1. Select the model you wish to use. Once you select a model, the *Download* button will become enabled -1. Choose *Download* + :::image type="content" source="media/how-to-deploy-automl-endpoint/download-model.png" lightbox="media/how-to-deploy-automl-endpoint/download-model.png" alt-text="Screenshot showing the selection of the model and download button."::: +You receive a *.zip* file that contains: -You'll receive a zip file containing: -* A conda environment specification file named `conda_env_<VERSION>.yml` -* A Python scoring file named `scoring_file_<VERSION>.py` -* The model itself, in a Python `.pkl` file named `model.pkl` +- A conda environment specification file named *conda_env_\<VERSION>.yml* +- A Python scoring file named *scoring_file_\<VERSION>.py* +- The model itself, in a Python *.pkl* file named *model.pkl* To deploy using these files, you can use either the studio or the Azure CLI. # [Studio](#tab/Studio) -1. Go to the Models page in Azure Machine Learning studio --1. Select + Register Model option +1. In Azure Machine Learning studio, go to the **Models** page. +1. Select **+ Register** > **From local files**. +1. Register the model that you downloaded from the Automated ML run. +1. Go to the **Environments** page, select **Custom environment**, and select **+ Create** to create an environment for your deployment. Use the downloaded conda yaml to create a custom environment. +1. Select the model, and from the **Deploy** dropdown menu, select **Real-time endpoint**. +1. Complete all the steps in the wizard to create an online endpoint and deployment. -1. Register the model you downloaded from Automated ML run --1. 
Go to Environments page, select Custom environment, and select + Create option to create an environment for your deployment. Use the downloaded conda yaml to create a custom environment --1. Select the model, and from the Deploy drop-down option, select Deploy to real-time endpoint --1. Complete all the steps in wizard to create an online endpoint and deployment -- # [Azure CLI](#tab/cli) [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] -## Configure the CLI +### Configure the CLI -To create a deployment from the CLI, you'll need the Azure CLI with the ML v2 extension. Run the following command to confirm that you've both: +To create a deployment from the CLI, you need the Azure CLI with the ML v2 extension. Run the following command to confirm that both are installed: :::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_version"::: If you receive an error message or you don't see `Extensions: ml` in the response, follow the steps at [Install and set up the CLI (v2)](how-to-configure-cli.md). -Sign in: +1. Sign in. + :::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_login"::: -If you've access to multiple Azure subscriptions, you can set your active subscription: +1. If you have access to multiple Azure subscriptions, you can set your active subscription. + :::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_account_set"::: -Set the default resource group and workspace to where you wish to create the deployment: +1. Set the default resource group and workspace to where you want to create the deployment. + :::code language="azurecli" source="~/azureml-examples-main/cli/setup.sh" id="az_configure_defaults"::: -## Put the scoring file in its own directory +### Put the scoring file in its own directory -Create a directory called `src/` and place the scoring file you downloaded into it. This directory is uploaded to Azure and contains all the source code necessary to do inference. 
For an AutoML model, there's just the single scoring file. +Create a directory called *src*. Save the scoring file that you downloaded to it. This directory is uploaded to Azure and contains all the source code necessary to do inference. For an AutoML model, there's just the single scoring file. -## Create the endpoint and deployment yaml file +### Create the endpoint and deployment yaml file -To create an online endpoint from the command line, you'll need to create an *endpoint.yml* and a *deployment.yml* file. The following code, taken from the [Azure Machine Learning Examples repo](https://github.com/Azure/azureml-examples) shows the _endpoints/online/managed/sample/_, which captures all the required inputs: +To create an online endpoint from the command line, create an *endpoint.yml* and a *deployment.yml* file. The following code, taken from the [Azure Machine Learning Examples repo](https://github.com/Azure/azureml-examples), shows the *endpoints/online/managed/sample/*, which captures all the required inputs. -__automl_endpoint.yml__ +*automl_endpoint.yml* ::: code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/managed/sample/endpoint.yml" ::: -__automl_deployment.yml__ +*automl_deployment.yml* ::: code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/managed/sample/blue-deployment.yml" ::: -You'll need to modify this file to use the files you downloaded from the AutoML Models page. +You need to modify this file to use the files you downloaded from the AutoML Models page. -1. Create a file `automl_endpoint.yml` and `automl_deployment.yml` and paste the contents of the above example. +1. Create a file *automl_endpoint.yml* and *automl_deployment.yml* and paste the contents of the preceding examples. 1. Change the value of the `name` of the endpoint. The endpoint name needs to be unique within the Azure region. 
The name for an endpoint must start with an upper- or lowercase letter and only consist of '-'s and alphanumeric characters. -1. In the `automl_deployment` file, change the value of the keys at the following paths: +1. In the *automl_deployment.yml* file, change the value of the keys at the following paths. - | Path | Change to | - | | | - | `model:path` | The path to the `model.pkl` file you downloaded. | - | `code_configuration:code:path` | The directory in which you placed the scoring file. | - | `code_configuration:scoring_script` | The name of the Python scoring file (`scoring_file_<VERSION>.py`). | - | `environment:conda_file` | A file URL for the downloaded conda environment file (`conda_env_<VERSION>.yml`). | + | Path | Change to | + |:- |: | + | `model:path` | The path to the *model.pkl* file you downloaded. | + | `code_configuration:code:path` | The directory in which you placed the scoring file. | + | `code_configuration:scoring_script` | The name of the Python scoring file (*scoring_file_\<VERSION>.py*). | + | `environment:conda_file` | A file URL for the downloaded conda environment file (*conda_env_\<VERSION>.yml*). | > [!NOTE] > For a full description of the YAML, see [Online endpoint YAML reference](reference-yaml-endpoint-online.md). -1. From the command line, run: +1. From the command line, run: [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] You'll need to modify this file to use the files you downloaded from the AutoML az ml online-endpoint create -f automl_endpoint.yml az ml online-deployment create -f automl_deployment.yml ```- -After you create a deployment, you can score it as described in [Invoke the endpoint to score data by using your model](how-to-deploy-online-endpoints.md#invoke-the-endpoint-to-score-data-by-using-your-model). 
+After you create a deployment, you can score it as described in [Invoke the endpoint to score data by using your model](how-to-deploy-online-endpoints.md#invoke-the-endpoint-to-score-data-by-using-your-model).

 # [Python SDK](#tab/python)

[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]

-## Configure the Python SDK
+### Configure the Python SDK

-If you haven't installed Python SDK v2 yet, please install with this command:
+If you need to install the Python SDK v2, install it with this command:

```azurecli
pip install azure-ai-ml azure-identity

pip install azure-ai-ml azure-identity
For more information, see [Install the Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ai-ml-readme).

-## Put the scoring file in its own directory
+### Put the scoring file in its own directory

-Create a directory called `src/` and place the scoring file you downloaded into it. This directory is uploaded to Azure and contains all the source code necessary to do inference. For an AutoML model, there's just the single scoring file.
+Create a directory called *src*. Save the scoring file that you downloaded to it. This directory is uploaded to Azure and contains all the source code necessary to do inference. For an AutoML model, there's just the single scoring file.

-## Connect to Azure Machine Learning workspace
+### Connect to Azure Machine Learning workspace

-1. Import the required libraries:
+1. Import the required libraries.
- ```python - # import required libraries - from azure.ai.ml import MLClient - from azure.ai.ml.entities import ( - ManagedOnlineEndpoint, - ManagedOnlineDeployment, - Model, - Environment, - CodeConfiguration, - ) - from azure.identity import DefaultAzureCredential - ``` + ```python + # import required libraries + from azure.ai.ml import MLClient + from azure.ai.ml.entities import ( + ManagedOnlineEndpoint, + ManagedOnlineDeployment, + Model, + Environment, + CodeConfiguration, + ) + from azure.identity import DefaultAzureCredential + ``` -1. Configure workspace details and get a handle to the workspace: -- ```python - # enter details of your Azure Machine Learning workspace - subscription_id = "<SUBSCRIPTION_ID>" - resource_group = "<RESOURCE_GROUP>" - workspace = "<AZUREML_WORKSPACE_NAME>" - ``` +1. Configure workspace details and get a handle to the workspace. - ```python - # get a handle to the workspace - ml_client = MLClient( - DefaultAzureCredential(), subscription_id, resource_group, workspace - ) - ``` + ```python + # enter details of your Azure Machine Learning workspace + subscription_id = "<SUBSCRIPTION_ID>" + resource_group = "<RESOURCE_GROUP>" + workspace = "<AZUREML_WORKSPACE_NAME>" + ``` -## Create the endpoint and deployment + ```python + # get a handle to the workspace + ml_client = MLClient( + DefaultAzureCredential(), subscription_id, resource_group, workspace + ) + ``` -Next, we'll create the managed online endpoints and deployments. +### Create the endpoint and deployment -1. Configure online endpoint: +Create the managed online endpoints and deployments. - > [!TIP] - > * `name`: The name of the endpoint. It must be unique in the Azure region. The name for an endpoint must start with an upper- or lowercase letter and only consist of '-'s and alphanumeric characters. For more information on the naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). 
- > * `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md). +1. Configure online endpoint. + > [!TIP] + > - `name`: The name of the endpoint. It must be unique in the Azure region. The name for an endpoint must start with an upper- or lowercase letter and only consist of '-'s and alphanumeric characters. For more information on the naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). + > - `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md). ```python # Creating a unique endpoint name with current datetime to avoid conflicts Next, we'll create the managed online endpoints and deployments. ) ``` -1. Create the endpoint: +1. Create the endpoint. - Using the `MLClient` created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues. + Using the `MLClient` created earlier, create the Endpoint in the workspace. This command starts the endpoint creation. It returns a confirmation response while the endpoint creation continues. ```python ml_client.begin_create_or_update(endpoint) ``` -1. Configure online deployment: +1. Configure online deployment. - A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class. 
+ A deployment is a set of resources required for hosting the model that does the actual inferencing. Create a deployment for our endpoint using the `ManagedOnlineDeployment` class. ```python model = Model(path="./src/model.pkl") Next, we'll create the managed online endpoints and deployments. ) ``` - In the above example, we assume the files you downloaded from the AutoML Models page are in the `src` directory. You can modify the parameters in the code to suit your situation. - + In this example, the files you downloaded from the AutoML Models page are in the *src* directory. You can modify the parameters in the code to suit your situation. + | Parameter | Change to |- | | | - | `model:path` | The path to the `model.pkl` file you downloaded. | - | `code_configuration:code:path` | The directory in which you placed the scoring file. | - | `code_configuration:scoring_script` | The name of the Python scoring file (`scoring_file_<VERSION>.py`). | - | `environment:conda_file` | A file URL for the downloaded conda environment file (`conda_env_<VERSION>.yml`). | + |: |: | + | `model:path` | The path to the *model.pkl* file you downloaded. | + | `code_configuration:code:path` | The directory in which you placed the scoring file. | + | `code_configuration:scoring_script` | The name of the Python scoring file (*scoring_file_\<VERSION>.py*). | + | `environment:conda_file` | A file URL for the downloaded conda environment file (*conda_env_\<VERSION>.yml*). | -1. Create the deployment: +1. Create the deployment. - Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues. + Using the `MLClient` created earlier, create the deployment in the workspace. This command starts creating the deployment. It returns a confirmation response while the deployment creation continues. 
- ```python
- ml_client.begin_create_or_update(blue_deployment)
- ```
+ ```python
+ ml_client.begin_create_or_update(blue_deployment)
+ ```

After you create a deployment, you can score it as described in [Test the endpoint with sample data](how-to-deploy-managed-online-endpoint-sdk-v2.md#test-the-endpoint-with-sample-data).

-You can learn to deploy to managed online endpoints with SDK more in [Deploy machine learning models to managed online endpoint using Python SDK v2](how-to-deploy-managed-online-endpoint-sdk-v2.md).
+To learn more about deploying to managed online endpoints with the SDK, see [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-managed-online-endpoint-sdk-v2.md).

-## Next steps
+## Related content

-- [Troubleshooting online endpoints deployment](how-to-troubleshoot-managed-online-endpoints.md)-- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)+- [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md)
+- [Perform safe rollout of new deployments for real-time inference](how-to-safely-rollout-online-endpoints.md) |
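The linked article scores the endpoint with a JSON request body. As an illustrative sketch only (the exact schema depends on the scoring file that AutoML generated for your model, and the column names here are hypothetical), such a body can be assembled like this:

```python
import json

def build_request(columns, rows):
    # Assemble a JSON scoring request. This "input_data" shape is only an
    # example; check your generated scoring file for the schema it expects.
    return json.dumps({"input_data": {"columns": columns, "data": rows}})

# Hypothetical feature columns for an AutoML tabular model.
body = build_request(["age", "income"], [[42, 55000]])
print(body)
```

You would typically save a body like this to a file and pass it to the invoke operation described in the linked article.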
machine-learning | How To Deploy Models Serverless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-serverless.md | In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**. 1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region. :::image type="content" source="media/how-to-deploy-models-serverless/deployment-name.png" alt-text="A screenshot showing how to specify the name of the deployment you want to create." lightbox="media/how-to-deploy-models-serverless/deployment-name.png":::+ > [!TIP] + > The **Content filter (preview)** option is enabled by default. Leave the default setting for the service to detect harmful content such as hate, self-harm, sexual, and violent content. For more information about content filtering, see [Content safety for models deployed via serverless APIs](concept-model-catalog.md#content-safety-for-models-deployed-via-maas). 1. Select **Deploy**. Wait until the deployment is ready and you're redirected to the Deployments page. |
machine-learning | How To Image Processing Batch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-image-processing-batch.md | Title: "Image processing with batch model deployments"
-description: Learn how to deploy a model in batch endpoints that process images
+description: Learn how to deploy a model in batch endpoints that process images by using Azure Machine Learning.
 Previously updated : 10/10/2022
 Last updated : 08/20/2024
+#customer intent: As a data scientist, I want to use batch model deployments for machine learning, such as classifying images according to a taxonomy.

 # Image processing with batch model deployments

[!INCLUDE [ml v2](includes/machine-learning-dev-v2.md)]

-Batch model deployments can be used for processing tabular data, but also any other file type like images. Those deployments are supported in both MLflow and custom models. In this tutorial, we will learn how to deploy a model that classifies images according to the ImageNet taxonomy.
+You can use batch model deployments for processing tabular data, but also any other file types, like images. Those deployments are supported in both MLflow and custom models. In this article, you learn how to deploy a model that classifies images according to the ImageNet taxonomy.
++## Prerequisites
+
 ## About this sample

-The model we are going to work with was built using TensorFlow along with the RestNet architecture ([Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)). A sample of this model can be downloaded from [here](https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip). The model has the following constrains that is important to keep in mind for deployment:
+This article uses a model that was built using TensorFlow along with the ResNet architecture. For more information, see [Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027).
You can download [a sample of this model](https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip). The model has the following constraints:

-* It works with images of size 244x244 (tensors of `(224, 224, 3)`).
-* It requires inputs to be scaled to the range `[0,1]`.
+- It works with images of size 224x224 (tensors of `(224, 224, 3)`).
+- It requires inputs to be scaled to the range `[0,1]`.

-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo, and then change directories to the `cli/endpoints/batch/deploy-models/imagenet-classifier` if you are using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/imagenet-classifier` if you are using our SDK for Python.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo. Change directories to *cli/endpoints/batch/deploy-models/imagenet-classifier* if you're using the Azure CLI or *sdk/python/endpoints/batch/deploy-models/imagenet-classifier* if you're using the SDK for Python.

```azurecli
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli/endpoints/batch/deploy-models/imagenet-classifier
```-
### Follow along in Jupyter Notebooks

You can follow along with this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [imagenet-classifier-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/imagenet-classifier/imagenet-classifier-batch.ipynb).
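The two input constraints can be expressed as a small preprocessing helper. This is an illustrative sketch using NumPy, not code from the sample repo; resizing itself (for example, with TensorFlow image ops) is assumed to happen before this helper runs:

```python
import numpy as np

def to_model_input(image: np.ndarray) -> np.ndarray:
    # Check the shape the model expects (tensors of (224, 224, 3)) and
    # rescale pixel values from [0, 255] to the [0, 1] range.
    if image.shape != (224, 224, 3):
        raise ValueError(f"expected (224, 224, 3), got {image.shape}")
    return image.astype("float32") / 255.0

img = np.full((224, 224, 3), 255, dtype=np.uint8)
scaled = to_model_input(img)
print(scaled.shape, float(scaled.min()), float(scaled.max()))  # (224, 224, 3) 1.0 1.0
```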
-## Prerequisites
--
 ## Image classification with batch deployments

-In this example, we are going to learn how to deploy a deep learning model that can classify a given image according to the [taxonomy of ImageNet](https://image-net.org/).
+In this example, you learn how to deploy a deep learning model that can classify a given image according to the [taxonomy of ImageNet](https://image-net.org/).

### Create the endpoint

-First, let's create the endpoint that will host the model:
+Create the endpoint that hosts the model:

# [Azure CLI](#tab/cli)

-Decide on the name of the endpoint:
+1. Specify the name of the endpoint.

-```azurecli
-ENDPOINT_NAME="imagenet-classifier-batch"
-```
--The following YAML file defines a batch endpoint:
+ ```azurecli
+ ENDPOINT_NAME="imagenet-classifier-batch"
+ ```

-__endpoint.yml__
+1. Create the following YAML file to define the batch endpoint, named *endpoint.yml*:
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/endpoint.yml":::

-Run the following code to create the endpoint.
+ To create the endpoint, run the following code:
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="create_endpoint" :::

# [Python](#tab/python)

-Decide on the name of the endpoint:
+1. Specify the name of the endpoint.

-```python
-endpoint_name="imagenet-classifier-batch"
-```
+ ```python
+ endpoint_name="imagenet-classifier-batch"
+ ```

-Configure the endpoint:
+1. Configure the endpoint.

-```python
-endpoint = BatchEndpoint(
- name=endpoint_name,
- description="An batch service to perform ImageNet image classification",
-)
-```
+ ```python
+ endpoint = BatchEndpoint(
+ name=endpoint_name,
+ description="A batch service to perform ImageNet image classification",
+ )
+ ```

-Run the following code to create the endpoint:
+1.
To create the endpoint, run the following code:

-```python
-ml_client.batch_endpoints.begin_create_or_update(endpoint)
-```
+ ```python
+ ml_client.batch_endpoints.begin_create_or_update(endpoint)
+ ```

-### Registering the model
+### Register the model

-Model deployments can only deploy registered models so we need to register it. You can skip this step if the model you are trying to deploy is already registered.
+Model deployments can only deploy registered models. You need to register the model. You can skip this step if the model you're trying to deploy is already registered.

-1. Downloading a copy of the model:
+1. Download a copy of the model.

 - # [Azure CLI](#tab/cli)
 -
 - :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="download_model" :::
 -
 - # [Python](#tab/python)

 - ```python
 - import os
 - import urllib.request
 - from zipfile import ZipFile
 -
 - response = urllib.request.urlretrieve('https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip', 'model.zip')
+ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="download_model" :::
+
+ # [Python](#tab/python)
+
+ ```python
+ import os
+ import urllib.request
+ from zipfile import ZipFile

 - os.mkdirs("imagenet-classifier", exits_ok=True)
 - with ZipFile(response[0], 'r') as zip:
 - model_path = zip.extractall(path="imagenet-classifier")
 - ```

+ response = urllib.request.urlretrieve('https://azuremlexampledata.blob.core.windows.net/data/imagenet/model.zip', 'model.zip')

-2. Register the model:
-
- # [Azure CLI](#tab/cli)

+ os.makedirs("imagenet-classifier", exist_ok=True)
+ with ZipFile(response[0], 'r') as zip:
+ zip.extractall(path="imagenet-classifier")
+ model_path = "imagenet-classifier"
+ ```

+1. Register the model.
- # [Python](#tab/python) + # [Azure CLI](#tab/cli) - ```python - model_name = 'imagenet-classifier' - model = ml_client.models.create_or_update( - Model(name=model_name, path=model_path, type=AssetTypes.CUSTOM_MODEL) - ) - ``` + ```azurecli + MODEL_NAME='imagenet-classifier' + az ml model create --name $MODEL_NAME --path "model" + ``` ++ # [Python](#tab/python) ++ ```python + model_name = 'imagenet-classifier' + model = ml_client.models.create_or_update( + Model(name=model_name, path=model_path, type=AssetTypes.CUSTOM_MODEL) + ) + ``` -### Creating a scoring script +### Create a scoring script -We need to create a scoring script that can read the images provided by the batch deployment and return the scores of the model. The following script: +Create a scoring script that can read the images provided by the batch deployment and return the scores of the model. > [!div class="checklist"]-> * Indicates an `init` function that load the model using `keras` module in `tensorflow`. -> * Indicates a `run` function that is executed for each mini-batch the batch deployment provides. -> * The `run` function read one image of the file at a time -> * The `run` method resizes the images to the expected sizes for the model. -> * The `run` method rescales the images to the range `[0,1]` domain, which is what the model expects. -> * It returns the classes and the probabilities associated with the predictions. +> - The `init` method loads the model using the `keras` module in `tensorflow`. +> - The `run` method runs for each mini-batch the batch deployment provides. +> - The `run` method reads one image of the file at a time. +> - The `run` method resizes the images to the expected sizes for the model. +> - The `run` method rescales the images to the range `[0,1]` domain, which is what the model expects. +> - The script returns the classes and the probabilities associated with the predictions. 
-__code/score-by-file/batch_driver.py__ +This code is the *code/score-by-file/batch_driver.py* file: :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/code/score-by-file/batch_driver.py" ::: > [!TIP]-> Although images are provided in mini-batches by the deployment, this scoring script processes one image at a time. This is a common pattern as trying to load the entire batch and send it to the model at once may result in high-memory pressure on the batch executor (OOM exeptions). However, there are certain cases where doing so enables high throughput in the scoring task. This is the case for instance of batch deployments over a GPU hardware where we want to achieve high GPU utilization. See [High throughput deployments](#high-throughput-deployments) for an example of a scoring script that takes advantage of it. +> Although images are provided in mini-batches by the deployment, this scoring script processes one image at a time. This is a common pattern because trying to load the entire batch and send it to the model at once might result in high-memory pressure on the batch executor (OOM exceptions). +> +> There are certain cases where doing so enables high throughput in the scoring task. This is the case for batch deployments over GPU hardware where you want to achieve high GPU utilization. For a scoring script that takes advantage of this approach, see [High throughput deployments](#high-throughput-deployments). > [!NOTE]-> If you are trying to deploy a generative model (one that generates files), please read how to author a scoring script as explained at [Deployment of models that produces multiple files](how-to-deploy-model-custom-output.md). +> If you want to deploy a generative model, which generates files, learn how to author a scoring script: [Customize outputs in batch deployments](how-to-deploy-model-custom-output.md). 
-### Creating the deployment +### Create the deployment -One the scoring script is created, it's time to create a batch deployment for it. Follow the following steps to create it: +After you create the scoring script, create a batch deployment for it. Use the following procedure: -1. Ensure you have a compute cluster created where we can create the deployment. In this example we are going to use a compute cluster named `gpu-cluster`. Although it's not required, we use GPUs to speed up the processing. +1. Ensure that you have a compute cluster created where you can create the deployment. In this example, use a compute cluster named `gpu-cluster`. Although not required, using GPUs speeds up the processing. -1. We need to indicate over which environment we are going to run the deployment. In our case, our model runs on `TensorFlow`. Azure Machine Learning already has an environment with the required software installed, so we can reutilize this environment. We are just going to add a couple of dependencies in a `conda.yml` file. +1. Indicate over which environment to run the deployment. In this example, the model runs on `TensorFlow`. Azure Machine Learning already has an environment with the required software installed, so you can reuse this environment. You need to add a couple of dependencies in a *conda.yml* file. # [Azure CLI](#tab/cli)- - The environment definition will be included in the deployment file. - ++ The environment definition is included in the deployment file. + :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deployment-by-file.yml" range="7-10":::- + # [Python](#tab/python)- - Let's get a reference to the environment: - ++ Get a reference to the environment. + ```python environment = Environment( name="tensorflow27-cuda11-gpu", One the scoring script is created, it's time to create a batch deployment for it ) ``` -1. Now, let create the deployment. +1. Create the deployment. 
# [Azure CLI](#tab/cli)- - To create a new deployment under the created endpoint, create a `YAML` configuration like the following. You can check the [full batch endpoint YAML schema](reference-yaml-endpoint-batch.md) for extra properties. - ++ To create a new deployment under the created endpoint, create a `YAML` configuration like the following example. For other properties, see the [full batch endpoint YAML schema](reference-yaml-endpoint-batch.md). + :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deployment-by-file.yml"::: - Then, create the deployment with the following command: - + Create the deployment with the following command: + :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="create_deployment" :::- + # [Python](#tab/python)- - To create a new deployment with the indicated environment and scoring script use the following code: - ++ To create a new deployment with the indicated environment and scoring script, use the following code: + ```python deployment = BatchDeployment( name="imagenet-classifier-resnetv2", One the scoring script is created, it's time to create a batch deployment for it logging_level="info", ) ```- - Then, create the deployment with the following command: - ++ Create the deployment with the following command: + ```python ml_client.batch_deployments.begin_create_or_update(deployment) ``` -1. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself, and let the endpoint decide which deployment to use. Such deployment is named the "default" deployment. This gives you the possibility of changing the default deployment - and hence changing the model serving the deployment - without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment: +1. 
Although you can invoke a specific deployment inside of an endpoint, you usually want to invoke the endpoint itself, and let the endpoint decide which deployment to use. Such a deployment is called the *default* deployment.
+
+ This approach lets you change the default deployment and change the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following code to update the default deployment:

 # [Azure Machine Learning CLI](#tab/cli)-
+
 ```bash
 az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
 ```-
+
 # [Azure Machine Learning SDK for Python](#tab/python)-
+
 ```python
 endpoint.defaults.deployment_name = deployment.name
 ml_client.batch_endpoints.begin_create_or_update(endpoint)
 ```

-1. At this point, our batch endpoint is ready to be used.
+Your batch endpoint is ready to be used.

-## Testing out the deployment
+## Test the deployment

-For testing our endpoint, we are going to use a sample of 1000 images from the original ImageNet dataset. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we are going to upload it to an Azure Machine Learning data store. Particularly, we are going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple type of locations.
+For testing the endpoint, use a sample of 1,000 images from the original ImageNet dataset. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. Upload it to an Azure Machine Learning data store. Create a data asset that can be used to invoke the endpoint for scoring.

> [!NOTE]
> Batch endpoints accept data that can be placed in multiple types of locations.
+
+1. Download the associated sample data.
# [Azure CLI](#tab/cli)- + :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="download_sample_data" :::- ++ > [!NOTE] + > If you don't have `wget` installed locally, install it or use a browser to get the *.zip* file. + # [Python](#tab/python)- + ```python !wget https://azuremlexampledata.blob.core.windows.net/data/imagenet-1000.zip !unzip imagenet-1000.zip -d data ``` -2. Now, let's create the data asset from the data just downloaded +1. Create the data asset from the data downloaded. # [Azure CLI](#tab/cli)- - Create a data asset definition in `YAML`: - - __imagenet-sample-unlabeled.yml__ - - :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/imagenet-sample-unlabeled.yml"::: - - Then, create the data asset: - - :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="create_sample_data_asset" ::: - ++ 1. Create a data asset definition in a `YAML` file called *imagenet-sample-unlabeled.yml*: ++ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/imagenet-sample-unlabeled.yml"::: ++ 1. Create the data asset. 
++ :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="create_sample_data_asset" ::: + # [Python](#tab/python)- - ```python - data_path = "data" - dataset_name = "imagenet-sample-unlabeled" -- imagenet_sample = Data( - path=data_path, - type=AssetTypes.URI_FOLDER, - description="A sample of 1000 images from the original ImageNet dataset", - name=dataset_name, - ) - ``` - - Then, create the data asset: - - ```python - ml_client.data.create_or_update(imagenet_sample) - ``` - - To get the newly created data asset, use: - - ```python - imagenet_sample = ml_client.data.get(dataset_name, label="latest") - ``` - -3. Now that the data is uploaded and ready to be used, let's invoke the endpoint: ++ 1. Specify these values: ++ ```python + data_path = "data" + dataset_name = "imagenet-sample-unlabeled" ++ imagenet_sample = Data( + path=data_path, + type=AssetTypes.URI_FOLDER, + description="A sample of 1000 images from the original ImageNet dataset", + name=dataset_name, + ) + ``` ++ 1. Create the data asset. ++ ```python + ml_client.data.create_or_update(imagenet_sample) + ``` ++ To get the newly created data asset, use this code: ++ ```python + imagenet_sample = ml_client.data.get(dataset_name, label="latest") + ``` ++1. When the data is uploaded and ready to be used, invoke the endpoint. # [Azure CLI](#tab/cli)- + :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="start_batch_scoring_job" :::- + > [!NOTE]- > The utility `jq` may not be installed on every installation. You can get instructions in [this link](https://stedolan.github.io/jq/download/). - + > If the utility `jq` isn't installed, see [Download jq](https://stedolan.github.io/jq/download/). 
+ # [Python](#tab/python)

 > [!TIP]
 > [!INCLUDE [batch-endpoint-invoke-inputs-sdk](includes/batch-endpoint-invoke-inputs-sdk.md)]

- ```python
 input = Input(type=AssetTypes.URI_FOLDER, path=imagenet_sample.id)

 job = ml_client.batch_endpoints.invoke(
For testing our endpoint, we are going to use a sample of 1000 images from the o
 input=input,
 )
 ```+
-
+ > [!TIP]- > Notice how we are not indicating the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Since our endpoint only has one deployment, then that one is the default one. You can target an specific deployment by indicating the argument/parameter `deployment_name`.
+ > You don't indicate the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Since the endpoint only has one deployment, that one is the default. You can target a specific deployment by indicating the argument/parameter `deployment_name`.

-4. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
+1. A batch job starts as soon as the command returns. You can monitor the status of the job until it finishes.

 # [Azure CLI](#tab/cli)-
+
 :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="show_job_in_studio" :::-
+
 # [Python](#tab/python)-
+
 ```python
 ml_client.jobs.get(job.name)
 ```

-5. Once the deployment is finished, we can download the predictions:
+1. After the deployment finishes, download the predictions.

 # [Azure CLI](#tab/cli)

For testing our endpoint, we are going to use a sample of 1000 images from the o
 ml_client.jobs.download(name=job.name, output_name='score', download_path='./')
 ```

-6. The output predictions will look like the following. Notice that the predictions have been combined with the labels for the convenience of the reader.
To know more about how to achieve this see the associated notebook. +1. The predictions look like the following output. The predictions are combined with the labels for the convenience of the reader. To learn more about how to achieve this effect, see the associated notebook. ```python import pandas as pd For testing our endpoint, we are going to use a sample of 1000 images from the o | n02088364_beagle.JPEG | 165 | 0.366914 | bluetick | | n02088466_bloodhound.JPEG | 164 | 0.926464 | bloodhound | | ... | ... | ... | ... |- ## High throughput deployments -As mentioned before, the deployment we just created processes one image a time, even when the batch deployment is providing a batch of them. In most cases this is the best approach as it simplifies how the models execute and avoids any possible out-of-memory problems. However, in certain others we may want to saturate as much as possible the utilization of the underlying hardware. This is the case GPUs for instance. +As mentioned before, the deployment processes one image at a time, even when the batch deployment provides a batch of them. In most cases, this approach is best. It simplifies how the models run and avoids any possible out-of-memory problems. However, in certain other cases, you might want to saturate the underlying hardware as much as possible. This is the case with GPUs, for instance. -On those cases, we may want to perform inference on the entire batch of data. That implies loading the entire set of images to memory and sending them directly to the model. The following example uses `TensorFlow` to read batch of images and score them all at once. It also uses `TensorFlow` ops to do any data preprocessing so the entire pipeline will happen on the same device being used (CPU/GPU). +In those cases, you might want to do inference on the entire batch of data. That approach implies loading the entire set of images to memory and sending them directly to the model. 
The following example uses `TensorFlow` to read a batch of images and score them all at once. It also uses `TensorFlow` ops to do any data preprocessing. The entire pipeline happens on the same device being used (CPU/GPU). > [!WARNING]-> Some models have a non-linear relationship with the size of the inputs in terms of the memory consumption. Batch again (as done in this example) or decrease the size of the batches created by the batch deployment to avoid out-of-memory exceptions. +> Some models have a non-linear relationship with the size of the inputs in terms of the memory consumption. To avoid out-of-memory exceptions, batch again (as done in this example) or decrease the size of the batches created by the batch deployment. -1. Creating the scoring script: +1. Create the scoring script *code/score-by-batch/batch_driver.py*: - __code/score-by-batch/batch_driver.py__ - - :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/code/score-by-batch/batch_driver.py" ::: + :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/code/score-by-batch/batch_driver.py" ::: - > [!TIP] - > * Notice that this script is constructing a tensor dataset from the mini-batch sent by the batch deployment. This dataset is preprocessed to obtain the expected tensors for the model using the `map` operation with the function `decode_img`. - > * The dataset is batched again (16) send the data to the model. Use this parameter to control how much information you can load into memory and send to the model at once. If running on a GPU, you will need to carefully tune this parameter to achieve the maximum utilization of the GPU just before getting an OOM exception. - > * Once predictions are computed, the tensors are converted to `numpy.ndarray`. + - This script constructs a tensor dataset from the mini-batch sent by the batch deployment. 
This dataset is preprocessed to obtain the expected tensors for the model using the `map` operation with the function `decode_img`. + - The dataset is batched again (16) to send the data to the model. Use this parameter to control how much information you can load into memory and send to the model at once. If running on a GPU, you need to carefully tune this parameter to achieve the maximum usage of the GPU just before getting an OOM exception. + - After predictions are computed, the tensors are converted to `numpy.ndarray`. -1. Now, let create the deployment. +1. Create the deployment. # [Azure CLI](#tab/cli)- - To create a new deployment under the created endpoint, create a `YAML` configuration like the following. You can check the [full batch endpoint YAML schema](reference-yaml-endpoint-batch.md) for extra properties. - ++ 1. To create a new deployment under the created endpoint, create a `YAML` configuration like the following example. For other properties, see the [full batch endpoint YAML schema](reference-yaml-endpoint-batch.md). + :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deployment-by-batch.yml"::: - Then, create the deployment with the following command: - + 1. Create the deployment with the following command: + :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/deploy-and-run.sh" ID="create_deployment_ht" :::- + # [Python](#tab/python)- - To create a new deployment with the indicated environment and scoring script use the following code: - ++ 1. To create a new deployment with the indicated environment and scoring script, use the following code: + ```python deployment = BatchDeployment( name="imagenet-classifier-resnetv2", On those cases, we may want to perform inference on the entire batch of data. Th logging_level="info", ) ```- - Then, create the deployment with the following command: - ++ 1. 
Create the deployment with the following command: + ```python ml_client.batch_deployments.begin_create_or_update(deployment) ``` -1. You can use this new deployment with the sample data shown before. Remember that to invoke this deployment you should either indicate the name of the deployment in the invocation method or set it as the default one. +1. You can use this new deployment with the sample data shown before. Remember that to invoke this deployment either indicate the name of the deployment in the invocation method or set it as the default one. ## Considerations for MLflow models processing images MLflow models in Batch Endpoints support reading images as input data. Since MLflow deployments don't require a scoring script, have the following considerations when using them: > [!div class="checklist"]-> * Image files supported includes: `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp` and `.gif`. -> * MLflow models should expect to recieve a `np.ndarray` as input that will match the dimensions of the input image. In order to support multiple image sizes on each batch, the batch executor will invoke the MLflow model once per image file. -> * MLflow models are highly encouraged to include a signature, and if they do it must be of type `TensorSpec`. Inputs are reshaped to match tensor's shape if available. If no signature is available, tensors of type `np.uint8` are inferred. -> * For models that include a signature and are expected to handle variable size of images, then include a signature that can guarantee it. For instance, the following signature example will allow batches of 3 channeled images. +> - Image files supported include: *.png*, *.jpg*, *.jpeg*, *.tiff*, *.bmp*, and *.gif*. +> - MLflow models should expect to receive a `np.ndarray` as input that matches the dimensions of the input image. In order to support multiple image sizes on each batch, the batch executor invokes the MLflow model once per image file. 
+> - MLflow models are highly encouraged to include a signature. If they do, it must be of type `TensorSpec`. Inputs are reshaped to match the tensor's shape if available. If no signature is available, tensors of type `np.uint8` are inferred. +> - For models that include a signature and are expected to handle variable image sizes, include a signature that can guarantee it. For instance, the following signature example allows batches of 3-channel images. ```python import numpy as np signature = ModelSignature(inputs=input_schema) mlflow.<flavor>.log_model(..., signature=signature) ``` -You can find a working example in the Jupyter notebook [imagenet-classifier-mlflow.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/imagenet-classifier/imagenet-classifier-mlflow.ipynb). For more information about how to use MLflow models in batch deployments read [Using MLflow models in batch deployments](how-to-mlflow-batch.md). +You can find a working example in the Jupyter notebook [imagenet-classifier-mlflow.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/imagenet-classifier/imagenet-classifier-mlflow.ipynb). For more information about how to use MLflow models in batch deployments, see [Using MLflow models in batch deployments](how-to-mlflow-batch.md). ## Next steps -* [Using MLflow models in batch deployments](how-to-mlflow-batch.md) -* [NLP tasks with batch deployments](how-to-nlp-processing-batch.md) +- [Deploy MLflow models in batch deployments](how-to-mlflow-batch.md) +- [Deploy language models in batch endpoints](how-to-nlp-processing-batch.md) |
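The re-batching step described in the high-throughput section above (batching the dataset again in groups of 16 before sending it to the model) boils down to a generic chunking pattern. A minimal stdlib sketch of that idea — not the actual TensorFlow pipeline, and the batch size of 16 is just the value used in the example:

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def rebatch(items: Iterable[T], batch_size: int) -> Iterator[List[T]]:
    """Yield successive batches of at most batch_size items.

    Mirrors the idea of batching a mini-batch again (for example, in
    groups of 16) so the model scores several images per call instead
    of one at a time.
    """
    batch: List[T] = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly smaller, batch
        yield batch

# Example: a mini-batch of 35 file paths scored in groups of 16
files = [f"image_{i}.png" for i in range(35)]
batches = list(rebatch(files, 16))
print([len(b) for b in batches])  # → [16, 16, 3]
```

Tuning `batch_size` is the knob described in the scoring-script tips: large enough to keep the GPU busy, small enough to avoid an out-of-memory exception.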
machine-learning | How To R Interactive Development | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-interactive-development.md | ms.devlang: r [!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)] -This article shows how to use R on a compute instance in Azure Machine Learning studio, that runs an R kernel in a Jupyter notebook. +This article shows how to use R in Azure Machine Learning studio on a compute instance that runs an R kernel in a Jupyter notebook. -The popular RStudio IDE also works. You can install RStudio or Posit Workbench in a custom container on a compute instance. However, this has limitations in reading and writing to your Azure Machine Learning workspace. +The popular RStudio IDE also works. You can install RStudio or Posit Workbench in a custom container on a compute instance. However, this approach has limitations in reading and writing to your Azure Machine Learning workspace. > [!IMPORTANT]-> The code shown in this article works on an Azure Machine Learning compute instance. The compute instance has an environment and configuration file necessary for the code to run successfully. +> The code shown in this article works on an Azure Machine Learning compute instance. The compute instance has an environment and configuration file necessary for the code to run successfully. ## Prerequisites - If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today - An [Azure Machine Learning workspace and a compute instance](quickstart-create-resources.md)-- A basic understand of using Jupyter notebooks in Azure Machine Learning studio. See [Model development on a cloud workstation](tutorial-cloud-workstation.md) for more information.+- A basic understanding of using Jupyter notebooks in Azure Machine Learning studio. 
Visit the [Model development on a cloud workstation](tutorial-cloud-workstation.md) resource for more information. ## Run R in a notebook in studio -You'll use a notebook in your Azure Machine Learning workspace, on a compute instance. +You'll use a notebook in your Azure Machine Learning workspace, on a compute instance. 1. Sign in to [Azure Machine Learning studio](https://ml.azure.com) 1. Open your workspace if it isn't already open You'll use a notebook in your Azure Machine Learning workspace, on a compute ins > If you're not sure how to create and work with notebooks in studio, review [Run Jupyter notebooks in your workspace](how-to-run-jupyter-notebooks.md) 1. Select the notebook.-1. On the notebook toolbar, make sure your compute instance is running. If not, start it now. +1. On the notebook toolbar, make sure your compute instance is running. If not, start it now. 1. On the notebook toolbar, switch the kernel to **R**. :::image type="content" source="media/how-to-r-interactive-development/r-kernel.png" alt-text="Screenshot: Switch the notebook kernel to use R." lightbox="media/how-to-r-interactive-development/r-kernel.png"::: This section describes how to use Python and the `reticulate` package to load yo To install these packages: -1. Create a new file on the compute instance, called **setup.sh**. +1. Create a new file on the compute instance, called **setup.sh**. 1. Copy this code into the file: :::code language="bash" source="~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/01-setup-compute-instance-for-interactive-r/setup-ci-for-interactive-data-reads.sh"::: For data stored in a data asset [created in Azure Machine Learning](how-to-creat > [!NOTE] > Reading a file with `reticulate` only works with tabular data. -1. Ensure you have the correct version of `reticulate`. For a version less than 1.26, try to use a newer compute instance. +1. Ensure you have the correct version of `reticulate`. 
For a version less than 1.26, try to use a newer compute instance. ```r packageVersion("reticulate") For data stored in a data asset [created in Azure Machine Learning](how-to-creat [!Notebook-r[](~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/02-develop-in-interactive-r/work-with-data-assets.ipynb?name=get-uri)] - 1. Run the code to retrieve the URI. + 1. To retrieve the URI, run the code. [!Notebook-r[](~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/02-develop-in-interactive-r/work-with-data-assets.ipynb?name=py_run_string)] -1. Use Pandas read functions to read the file(s) into the R environment +1. Use Pandas read functions to read the file or files into the R environment. [!Notebook-r[](~/azureml-examples-mavaisma-r-azureml/tutorials/using-r-with-azureml/02-develop-in-interactive-r/work-with-data-assets.ipynb?name=read-uri)] You can also use a Datastore URI to access different files on a registered Datas uri <- paste0("azureml://subscriptions/", subscription, "/resourcegroups/", resource_group, "/workspaces/", workspace, "/datastores/", datastore_name, "/paths/", path_on_datastore) ```- + > [!TIP] > Instead of remembering the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI, if you know the datastore where your file is located: > 1. Navigate to the file/folder you want to read into R- > 1. Select the elipsis (**...**) next to it. + > 1. Select the ellipsis (**...**) next to it. > 1. Select from the menu **Copy URI**. > 1. Select the **Datastore URI** to copy into your notebook/script. > Note that you must create a variable for `<path>` in the code.- > :::image type="content" source="media/how-to-access-data-ci/datastore_uri_copy.png" alt-text="Screenshot highlighting the copy of the datastore URI."::: + > :::image type="content" source="media/how-to-r-interactive-development/datastore-uri-copy.png" alt-text="Screenshot highlighting the copy of the datastore URI."::: - 2. 
Create a filestore object using the aforementioned URI: + 2. Create a filestore object using the previously mentioned URI: ```r fs <- azureml.fsspec$AzureMachineLearningFileSystem(uri, sep = "") ``` Add `/home/azureuser` to the R library path. ``` > [!TIP]-> You must update the `.libPaths` in each interactive R script to access user installed libraries. Add this code to the top of each interactive R script or notebook. +> You must update the `.libPaths` in each interactive R script to access user installed libraries. Add this code to the top of each interactive R script or notebook. Once the libPath is updated, load libraries as usual. |
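The R `paste0` call above assembles the datastore URI from its components. The same assembly in Python looks like the following sketch — the subscription, workspace, and path values here are placeholders, not a real workspace:

```python
# Assemble an Azure Machine Learning datastore URI from its components.
# All values below are placeholders for illustration only.
subscription = "aaaa-bbbb-cccc"
resource_group = "my-resource-group"
workspace = "my-workspace"
datastore_name = "workspaceblobstore"
path_on_datastore = "data/sample.csv"

uri = (
    f"azureml://subscriptions/{subscription}"
    f"/resourcegroups/{resource_group}"
    f"/workspaces/{workspace}"
    f"/datastores/{datastore_name}"
    f"/paths/{path_on_datastore}"
)
print(uri)
```

As the tip notes, copying the URI from the studio UI avoids remembering this format at all.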
machine-learning | How To Use Low Priority Batch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-low-priority-batch.md | Title: "Using low priority VMs in batch deployments" + Title: "Use low priority VMs in batch deployments" -description: Learn how to use low priority VMs to save costs when running batch jobs. +description: Learn how to use low priority virtual machines in Azure Machine Learning to save costs when you run batch inference jobs. Previously updated : 10/10/2022 Last updated : 08/15/2024 +#customer intent: As an analyst, I want to run batch inference workloads in the most cost efficient way possible. -# Using low priority VMs in batch deployments +# Use low priority VMs for batch deployments [!INCLUDE [cli v2](includes/machine-learning-dev-v2.md)] -Azure Batch Deployments supports low priority VMs to reduce the cost of batch inference workloads. Low priority VMs enable a large amount of compute power to be used for a low cost. Low priority VMs take advantage of surplus capacity in Azure. When you specify low priority VMs in your pools, Azure can use this surplus, when available. +Azure batch deployments support low priority virtual machines (VMs) to reduce the cost of batch inference workloads. Low priority VMs enable a large amount of compute power to be used for a low cost. Low priority virtual machines take advantage of surplus capacity in Azure. When you specify low priority VMs in your pools, Azure can use this surplus, when available. -The tradeoff for using them is that those VMs may not always be available to be allocated, or may be preempted at any time, depending on available capacity. For this reason, __they are most suitable for batch and asynchronous processing workloads__ where the job completion time is flexible and the work is distributed across many VMs. 
+> [!TIP] +> The tradeoff for using low priority VMs is that those virtual machines might not be available or they might be preempted at any time, depending on available capacity. For this reason, this approach is most suitable for batch and asynchronous processing workloads, where job completion time is flexible and the work is distributed across many virtual machines. -Low priority VMs are offered at a significantly reduced price compared with dedicated VMs. For pricing details, see [Azure Machine Learning pricing](https://azure.microsoft.com/pricing/details/machine-learning/). +Low priority virtual machines are offered at a reduced price compared with dedicated virtual machines. For pricing details, see [Azure Machine Learning pricing](https://azure.microsoft.com/pricing/details/machine-learning/). ## How batch deployment works with low priority VMs Azure Machine Learning Batch Deployments provides several capabilities that make it easy to consume and benefit from low priority VMs: -- Batch deployment jobs consume low priority VMs by running on Azure Machine Learning compute clusters created with low priority VMs. Once a deployment is associated with a low priority VMs' cluster, all the jobs produced by such deployment will use low priority VMs. Per-job configuration is not possible.+- Batch deployment jobs consume low priority VMs by running on Azure Machine Learning compute clusters created with low priority VMs. After a deployment is associated with a low priority VMs cluster, all the jobs produced by such deployment use low priority VMs. Per-job configuration isn't possible. - Batch deployment jobs automatically seek the target number of VMs in the available compute cluster based on the number of tasks to submit. If VMs are preempted or unavailable, batch deployment jobs attempt to replace the lost capacity by queuing the failed tasks to the cluster.-- Low priority VMs have a separate vCPU quota that differs from the one for dedicated VMs. 
Low-priority cores per region have a default limit of 100 to 3,000, depending on your subscription offer type. The number of low-priority cores per subscription can be increased and is a single value across VM families. See [Azure Machine Learning compute quotas](how-to-manage-quotas.md#azure-machine-learning-compute).+- Low priority VMs have a separate vCPU quota that differs from the one for dedicated VMs. Low-priority cores per region have a default limit of 100 to 3,000, depending on your subscription. The number of low-priority cores per subscription can be increased and is a single value across VM families. See [Azure Machine Learning compute quotas](how-to-manage-quotas.md#azure-machine-learning-compute). -## Considerations and use cases +### Considerations and use cases -Many batch workloads are a good fit for low priority VMs. Although this may introduce further execution delays when deallocation of VMs occurs, the potential drops in capacity can be tolerated at expenses of running with a lower cost if there is flexibility in the time jobs have to complete. +Many batch workloads are a good fit for low priority VMs. Using low priority VMs can introduce execution delays when deallocation of VMs occurs. If you have flexibility in the time jobs have to finish, you might tolerate the potential drops in capacity. -When **deploying models** under batch endpoints, rescheduling can be done at the mini batch level. That has the extra benefit that deallocation only impacts those mini-batches that are currently being processed and not finished on the affected node. Every completed progress is kept. +When you deploy models under batch endpoints, rescheduling can be done at the minibatch level. That approach has the benefit that deallocation only impacts those minibatches that are currently being processed and not finished on the affected node. All completed progress is kept. 
-## Creating batch deployments with low priority VMs +### Limitations -Batch deployment jobs consume low priority VMs by running on Azure Machine Learning compute clusters created with low priority VMs. +- After a deployment is associated with a low priority VMs cluster, all the jobs produced by such deployment use low priority VMs. Per-job configuration isn't possible. +- Rescheduling is done at the mini-batch level, regardless of the progress. No checkpointing capability is provided. ++> [!WARNING] +> In the cases where the entire cluster is preempted or running on a single-node cluster, the job is cancelled because there is no capacity available for it to run. Resubmitting is required in this case. ++## Create batch deployments that use low priority VMs -> [!NOTE] -> Once a deployment is associated with a low priority VMs' cluster, all the jobs produced by such deployment will use low priority VMs. Per-job configuration is not possible. +Batch deployment jobs consume low priority VMs by running on Azure Machine Learning compute clusters created with low priority VMs. ++> [!NOTE] +> After a deployment is associated with a low priority VMs cluster, all the jobs produced by such deployment use low priority VMs. Per-job configuration is not possible. 
You can create a low priority Azure Machine Learning compute cluster as follows: - # [Azure CLI](#tab/cli) - - Create a compute definition `YAML` like the following one: - - __low-pri-cluster.yml__ - ```yaml - $schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json - name: low-pri-cluster - type: amlcompute - size: STANDARD_DS3_v2 - min_instances: 0 - max_instances: 2 - idle_time_before_scale_down: 120 - tier: low_priority - ``` - - Create the compute using the following command: - - ```azurecli - az ml compute create -f low-pri-cluster.yml - ``` - - # [Python](#tab/sdk) - - To create a new compute cluster with low priority VMs where to create the deployment, use the following script: - - ```python - compute_name = "low-pri-cluster" - compute_cluster = AmlCompute( - name=compute_name, - description="Low priority compute cluster", - min_instances=0, - max_instances=2, - tier='LowPriority' - ) - - ml_client.begin_create_or_update(compute_cluster) - ``` - - - -Once you have the new compute created, you can create or update your deployment to use the new cluster: -- # [Azure CLI](#tab/cli) - - To create or update a deployment under the new compute cluster, create a `YAML` configuration like the following: - - ```yaml - $schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json - endpoint_name: heart-classifier-batch - name: classifier-xgboost - description: A heart condition classifier based on XGBoost - type: model - model: azureml:heart-classifier@latest - compute: azureml:low-pri-cluster - resources: - instance_count: 2 - settings: - max_concurrency_per_instance: 2 - mini_batch_size: 2 - output_action: append_row - output_file_name: predictions.csv - retry_settings: - max_retries: 3 - timeout: 300 - ``` - - Then, create the deployment with the following command: - - ```azurecli - az ml batch-endpoint create -f endpoint.yml - ``` - - # [Python](#tab/sdk) - - To create or update a deployment under the new compute cluster, use 
the following script: - - ```python - deployment = ModelBatchDeployment( - name="classifier-xgboost", - description="A heart condition classifier based on XGBoost", - endpoint_name=endpoint.name, - model=model, - compute=compute_name, - settings=ModelBatchDeploymentSettings( - instance_count=2, - max_concurrency_per_instance=2, - mini_batch_size=2, - output_action=BatchDeploymentOutputAction.APPEND_ROW, - output_file_name="predictions.csv", - retry_settings=BatchRetrySettings(max_retries=3, timeout=300), - ) +# [Azure CLI](#tab/cli) ++Create a compute definition `YAML` like the following one, *low-pri-cluster.yml*: ++```yaml +$schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json +name: low-pri-cluster +type: amlcompute +size: STANDARD_DS3_v2 +min_instances: 0 +max_instances: 2 +idle_time_before_scale_down: 120 +tier: low_priority +``` ++Create the compute using the following command: ++```azurecli +az ml compute create -f low-pri-cluster.yml +``` ++# [Python](#tab/sdk) ++To create a new compute cluster with low priority VMs in which to create the deployment, use the following script: ++```python +from azure.ai.ml.entities import AmlCompute ++compute_name = "low-pri-cluster" +compute_cluster = AmlCompute( + name=compute_name, + description="Low priority compute cluster", + min_instances=0, + max_instances=2, + tier='LowPriority' +) ++ml_client.begin_create_or_update(compute_cluster) +``` ++++After you create the new compute, you can create or update your deployment to use the new cluster: ++# [Azure CLI](#tab/cli) ++To create or update a deployment under the new compute cluster, create a `YAML` configuration file, *endpoint.yml*: ++```yaml +$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json +endpoint_name: heart-classifier-batch +name: classifier-xgboost +description: A heart condition classifier based on XGBoost +type: model +model: azureml:heart-classifier@latest +compute: azureml:low-pri-cluster +resources: + 
instance_count: 2 +settings: + max_concurrency_per_instance: 2 + mini_batch_size: 2 + output_action: append_row + output_file_name: predictions.csv + retry_settings: + max_retries: 3 + timeout: 300 +``` ++Then, create the deployment with the following command: ++```azurecli +az ml batch-endpoint create -f endpoint.yml +``` ++# [Python](#tab/sdk) ++To create or update a deployment under the new compute cluster, use the following script: ++```python +deployment = ModelBatchDeployment( + name="classifier-xgboost", + description="A heart condition classifier based on XGBoost", + endpoint_name=endpoint.name, + model=model, + compute=compute_name, + settings=ModelBatchDeploymentSettings( + instance_count=2, + max_concurrency_per_instance=2, + mini_batch_size=2, + output_action=BatchDeploymentOutputAction.APPEND_ROW, + output_file_name="predictions.csv", + retry_settings=BatchRetrySettings(max_retries=3, timeout=300), )- - ml_client.batch_deployments.begin_create_or_update(deployment) - ``` - - +) ++ml_client.batch_deployments.begin_create_or_update(deployment) +``` +++ ## View and monitor node deallocation New metrics are available in the [Azure portal](https://portal.azure.com) for low priority VMs to monitor low priority VMs. These metrics are: New metrics are available in the [Azure portal](https://portal.azure.com) for lo - Preempted nodes - Preempted cores -To view these metrics in the Azure portal +To view these metrics in the Azure portal: 1. Navigate to your Azure Machine Learning workspace in the [Azure portal](https://portal.azure.com).-2. Select **Metrics** from the **Monitoring** section. -3. Select the metrics you desire from the **Metric** list. ---## Limitations +1. Select **Metrics** from the **Monitoring** section. +1. Select the metrics you desire from the **Metric** list. -- Once a deployment is associated with a low priority VMs' cluster, all the jobs produced by such deployment will use low priority VMs. 
Per-job configuration is not possible.-- Rescheduling is done at the mini-batch level, regardless of the progress. No checkpointing capability is provided.--> [!WARNING] -> In the cases where the entire cluster is preempted (or running on a single-node cluster), the job will be cancelled as there is no capacity available for it to run. Resubmitting will be required in this case. +## Related content +- [Create an Azure Machine Learning compute cluster](how-to-create-attach-compute-cluster.md) +- [Deploy MLflow models in batch deployments](how-to-mlflow-batch.md) +- [Manage compute resources for model training](how-to-create-attach-compute-studio.md) |
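The `retry_settings` values in the deployment configuration above (`max_retries: 3`, `timeout: 300`) bound how often a failed mini-batch is retried after a preemption. A rough stdlib illustration of that retry loop — the service's actual scheduler requeues work across the cluster, so this is only a sketch of the semantics:

```python
class MiniBatchFailed(Exception):
    """Raised when scoring a mini-batch fails (for example, after a node is preempted)."""

def run_with_retries(score, mini_batch, max_retries=3):
    """Try to score a mini-batch, giving up after max_retries attempts.

    Illustrative only: batch deployments requeue failed mini-batches to
    whatever capacity remains in the cluster, keeping completed progress.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return score(mini_batch)
        except MiniBatchFailed:
            if attempt == max_retries:
                raise  # out of retries; the mini-batch is reported as failed

# Example: a scorer that fails twice (simulated preemptions), then succeeds
attempts = {"n": 0}
def flaky_score(batch):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise MiniBatchFailed()
    return [f"pred:{name}" for name in batch]

result = run_with_retries(flaky_score, ["a.png", "b.png"], max_retries=3)
print(result)  # → ['pred:a.png', 'pred:b.png']
```

With `max_retries` exhausted, the failure surfaces to the job — which matches the warning above: if the whole cluster is preempted, resubmitting is required.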
machine-learning | How To Use Mlflow Azure Databricks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md | Title: MLflow Tracking for Azure Databricks ML experiments + Title: MLflow tracking for Azure Databricks machine learning experiments -description: Set up MLflow with Azure Machine Learning to log metrics and artifacts from Azure Databricks ML experiments. +description: Set up MLflow with Azure Machine Learning to log metrics and artifacts from Azure Databricks machine learning experiments. Previously updated : 07/01/2022 Last updated : 08/16/2024 +#customer intent: As a data scientist, I want to integrate Azure Databricks with Azure Machine Learning to connect the products. -# Track Azure Databricks ML experiments with MLflow and Azure Machine Learning +# Track Azure Databricks machine learning experiments with MLflow and Azure Machine Learning [MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. You can use MLflow to integrate Azure Databricks with Azure Machine Learning to ensure you get the best from both of the products. -In this article, you will learn: +In this article, you learn: + > [!div class="checklist"] > - The required libraries needed to use MLflow with Azure Databricks and Azure Machine Learning. > - How to [track Azure Databricks runs with MLflow in Azure Machine Learning](#track-azure-databricks-runs-with-mlflow).-> - How to [log models with MLflow](#registering-models-in-the-registry-with-mlflow) to get them registered in Azure Machine Learning. -> - How to [deploy and consume models registered in Azure Machine Learning](#deploying-and-consuming-models-registered-in-azure-machine-learning). +> - How to [log models with MLflow](#register-models-in-the-registry-with-mlflow) to get them registered in Azure Machine Learning. 
+> - How to [deploy and consume models registered in Azure Machine Learning](#deploy-and-consume-models-registered-in-azure-machine-learning). ## Prerequisites -* Install the `azureml-mlflow` package, which handles the connectivity with Azure Machine Learning, including authentication. -* An [Azure Databricks workspace and cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal). -* [Create an Azure Machine Learning Workspace](quickstart-create-resources.md). - * See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations). +- The `azureml-mlflow` package, which handles the connectivity with Azure Machine Learning, including authentication. +- An [Azure Databricks workspace and cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal). +- An [Azure Machine Learning Workspace](quickstart-create-resources.md). ++See which [access permissions](how-to-assign-roles.md#mlflow-operations) you need to perform your MLflow operations with your workspace. ### Example notebooks -The [Training models in Azure Databricks and deploying them on Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb) demonstrates how to train models in Azure Databricks and deploy them in Azure Machine Learning. It also includes how to handle cases where you also want to track the experiments and models with the MLflow instance in Azure Databricks and leverage Azure Machine Learning for deployment. +The [Training models in Azure Databricks and deploying them on Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb) notebook demonstrates how to train models in Azure Databricks and deploy them in Azure Machine Learning. 
It also describes how to track the experiments and models with the MLflow instance in Azure Databricks, and how to use Azure Machine Learning for deployment. ## Install libraries -To install libraries on your cluster, navigate to the **Libraries** tab and select **Install New** +To install libraries on your cluster: - ![mlflow with azure databricks](./media/how-to-use-mlflow-azure-databricks/azure-databricks-cluster-libraries.png) +1. Navigate to the **Libraries** tab and select **Install New**. -In the **Package** field, type azureml-mlflow and then select install. Repeat this step as necessary to install other additional packages to your cluster for your experiment. + :::image type="content" source="./media/how-to-use-mlflow-azure-databricks/azure-databricks-cluster-libraries.png" alt-text="Screenshot showing mlflow with azure databricks."::: +1. In the **Package** field, type *azureml-mlflow* and then select **Install**. Repeat this step as necessary to install other packages to your cluster for your experiment. 
++ :::image type="content" source="./media/how-to-use-mlflow-azure-databricks/install-libraries.png" alt-text="Screenshot showing Azure DB install mlflow library."::: ## Track Azure Databricks runs with MLflow -Azure Databricks can be configured to track experiments using MLflow in two ways: +You can configure Azure Databricks to track experiments using MLflow in two ways: -- [Track in both Azure Databricks workspace and Azure Machine Learning workspace (dual-tracking)](#dual-tracking-on-azure-databricks-and-azure-machine-learning)-- [Track exclusively on Azure Machine Learning](#tracking-exclusively-on-azure-machine-learning-workspace)+- [Track in both Azure Databricks workspace and Azure Machine Learning workspace (dual-tracking)](#dual-track-on-azure-databricks-and-azure-machine-learning) +- [Track exclusively on Azure Machine Learning](#track-exclusively-on-azure-machine-learning-workspace) -By default, dual-tracking is configured for you when you linked your Azure Databricks workspace. +By default, when you link your Azure Databricks workspace, dual-tracking is configured for you. -### Dual-tracking on Azure Databricks and Azure Machine Learning +### Dual-track on Azure Databricks and Azure Machine Learning -Linking your ADB workspace to your Azure Machine Learning workspace enables you to track your experiment data in the Azure Machine Learning workspace and Azure Databricks workspace at the same time. This is referred as Dual-tracking. +Linking your Azure Databricks workspace to your Azure Machine Learning workspace enables you to track your experiment data in the Azure Machine Learning workspace and Azure Databricks workspace at the same time. This configuration is called *Dual-tracking*. -> [!WARNING] -> Dual-tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) is not supported by the moment. 
Configure [exclusive tracking with your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace) instead. -> -> [!WARNING] -> Dual-tracking in not supported in Microsoft Azure operated by 21Vianet by the moment. Configure [exclusive tracking with your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace) instead. +Dual-tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) isn't currently supported. Configure [exclusive tracking with your Azure Machine Learning workspace](#track-exclusively-on-azure-machine-learning-workspace) instead. ++Dual-tracking isn't currently supported in Microsoft Azure operated by 21Vianet. Configure [exclusive tracking with your Azure Machine Learning workspace](#track-exclusively-on-azure-machine-learning-workspace) instead. ++To link your Azure Databricks workspace to a new or existing Azure Machine Learning workspace: ++1. Sign in to the [Azure portal](https://portal.azure.com). +1. Navigate to your Azure Databricks workspace **Overview** page. +1. Select **Link Azure Machine Learning workspace**. -To link your ADB workspace to a new or existing Azure Machine Learning workspace, -1. Sign in to [Azure portal](https://portal.azure.com). -1. Navigate to your ADB workspace's **Overview** page. -1. Select the **Link Azure Machine Learning workspace** button on the bottom right. 
+ :::image type="content" source="./media/how-to-use-mlflow-azure-databricks/link-workspaces.png" lightbox="./media/how-to-use-mlflow-azure-databricks/link-workspaces.png" alt-text="Screenshot shows the Link Azure Databricks and Azure Machine Learning workspaces option."::: - ![Link Azure DB and Azure Machine Learning workspaces](./media/how-to-use-mlflow-azure-databricks/link-workspaces.png) - -After you link your Azure Databricks workspace with your Azure Machine Learning workspace, MLflow Tracking is automatically set to be tracked in all of the following places: +After you link your Azure Databricks workspace with your Azure Machine Learning workspace, MLflow automatically tracks experiments in the following places: -* The linked Azure Machine Learning workspace. -* Your original ADB workspace. +- The linked Azure Machine Learning workspace. +- Your original Azure Databricks workspace. -You can use then MLflow in Azure Databricks in the same way as you're used to. The following example sets the experiment name as it is usually done in Azure Databricks and start logging some parameters: +You can then use MLflow in Azure Databricks in the same way that you're used to. The following example sets the experiment name as usual in Azure Databricks and starts logging some parameters. ```python import mlflow with mlflow.start_run(): pass ``` -> [!NOTE] -> As opposite to tracking, model registries don't support registering models at the same time on both Azure Machine Learning and Azure Databricks. Either one or the other has to be used. Please read the section [Registering models in the registry with MLflow](#registering-models-in-the-registry-with-mlflow) for more details. +> [!NOTE] +> As opposed to tracking, model registries don't support registering models at the same time on both Azure Machine Learning and Azure Databricks. For more information, see [Register models in the registry with MLflow](#register-models-in-the-registry-with-mlflow). 
-### Tracking exclusively on Azure Machine Learning workspace +### Track exclusively on Azure Machine Learning workspace -If you prefer to manage your tracked experiments in a centralized location, you can set MLflow tracking to **only** track in your Azure Machine Learning workspace. This configuration has the advantage of enabling easier path to deployment using Azure Machine Learning deployment options. +If you prefer to manage your tracked experiments in a centralized location, you can set MLflow tracking to *only* track in your Azure Machine Learning workspace. This configuration has the advantage of enabling easier path to deployment using Azure Machine Learning deployment options. > [!WARNING] > For [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md), you have to [deploy Azure Databricks in your own network (VNet injection)](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject) to ensure proper connectivity. -You have to configure the MLflow tracking URI to point exclusively to Azure Machine Learning, as it is demonstrated in the following example: --**Configure tracking URI** --1. Get the tracking URI for your workspace: -- # [Azure CLI](#tab/cli) - - [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] - - 1. Login and configure your workspace: - - ```bash - az account set --subscription <subscription> - az configure --defaults workspace=<workspace> group=<resource-group> location=<location> - ``` - - 1. You can get the tracking URI using the `az ml workspace` command: - - ```bash - az ml workspace show --query mlflow_tracking_uri - ``` - - # [Python](#tab/python) - - [!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)] - - You can get the Azure Machine Learning MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the compute you are using. 
The following sample gets the unique MLFLow tracking URI associated with your workspace. - - 1. Login into your workspace using the `MLClient`. The easier way to do that is by using the workspace config file: - - ```python - from azure.ai.ml import MLClient - from azure.identity import DefaultAzureCredential - - ml_client = MLClient.from_config(credential=DefaultAzureCredential()) - ``` - - > [!TIP] - > You can download the workspace configuration file by: - > 1. Navigate to [Azure Machine Learning studio](https://ml.azure.com) - > 2. Click on the upper-right corner of the page -> Download config file. - > 3. Save the file `config.json` in the same directory where you are working on. - - 1. Alternatively, you can use the subscription ID, resource group name and workspace name to get it: - - ```python - from azure.ai.ml import MLClient - from azure.identity import DefaultAzureCredential - - #Enter details of your Azure Machine Learning workspace - subscription_id = '<SUBSCRIPTION_ID>' - resource_group = '<RESOURCE_GROUP>' - workspace_name = '<WORKSPACE_NAME>' - - ml_client = MLClient(credential=DefaultAzureCredential(), - subscription_id=subscription_id, - resource_group_name=resource_group) - ``` - - > [!IMPORTANT] - > `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance using the web browser in an interactive way, you can use `InteractiveBrowserCredential` or any other method available in [`azure.identity`](https://pypi.org/project/azure-identity/) package. - - 1. Get the Azure Machine Learning Tracking URI: - - ```python - mlflow_tracking_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri - ``` - - # [Studio](#tab/studio) - - Use the Azure Machine Learning portal to get the tracking URI: - - 1. Open the [Azure Machine Learning studio portal](https://ml.azure.com) and log in using your credentials. - 1. 
In the upper right corner, click on the name of your workspace to show the __Directory + Subscription + Workspace__ blade. - 1. Click on __View all properties in Azure Portal__. - 1. On the __Essentials__ section, you will find the property __MLflow tracking URI__. - - - # [Manually](#tab/manual) - - The Azure Machine Learning Tracking URI can be constructed using the subscription ID, region of where the resource is deployed, resource group name and workspace name. The following code sample shows how: - - > [!WARNING] - > If you are working in a private link-enabled workspace, the MLflow endpoint will also use a private link to communicate with Azure Machine Learning. As a consequence, the tracking URI will look different as proposed here. You need to get the tracking URI using the Azure Machine Learning SDK or CLI v2 on those cases. - - ```python - region = "<LOCATION>" - subscription_id = '<SUBSCRIPTION_ID>' - resource_group = '<RESOURCE_GROUP>' - workspace_name = '<AML_WORKSPACE_NAME>' - - mlflow_tracking_uri = f"azureml://{region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}" - ``` --1. Configuring the tracking URI: -- # [Using MLflow SDK](#tab/mlflow) - - Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI. - - ```python - import mlflow - - mlflow.set_tracking_uri(mlflow_tracking_uri) - ``` - - # [Using environment variables](#tab/environ) - - You can set the MLflow environment variables [MLFLOW_TRACKING_URI](https://mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server) in your compute to make any interaction with MLflow in that compute to point by default to Azure Machine Learning. 
- - ```bash - MLFLOW_TRACKING_URI=$(az ml workspace show --query mlflow_tracking_uri | sed 's/"//g') - ``` - - -- > [!TIP] - > When working on shared environments, like an Azure Databricks cluster, Azure Synapse Analytics cluster, or similar, it is useful to set the environment variable `MLFLOW_TRACKING_URI` at the cluster level to automatically configure the MLflow tracking URI to point to Azure Machine Learning for all the sessions running in the cluster rather than to do it on a per-session basis. - > - > ![Configure the environment variables in an Azure Databricks cluster](./media/how-to-use-mlflow-azure-databricks/env.png) - > - > Once the environment variable is configured, any experiment running in such cluster will be tracked in Azure Machine Learning. ---**Configure authentication** --Once the tracking is configured, you'll also need to configure how the authentication needs to happen to the associated workspace. By default, the Azure Machine Learning plugin for MLflow will perform interactive authentication by opening the default browser to prompt for credentials. Refer to [Configure MLflow for Azure Machine Learning: Configure authentication](how-to-use-mlflow-configure-tracking.md#configure-authentication) to additional ways to configure authentication for MLflow in Azure Machine Learning workspaces. +Configure the MLflow tracking URI to point exclusively to Azure Machine Learning, as shown in the following example: ++#### Configure tracking URI ++1. Get the tracking URI for your workspace. ++ # [Azure CLI](#tab/cli) ++ [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] ++ 1. Sign in and configure your workspace. ++ ```bash + az account set --subscription <subscription> + az configure --defaults workspace=<workspace> group=<resource-group> location=<location> + ``` ++ 1. You can get the tracking URI using the `az ml workspace` command. 
++ ```bash + az ml workspace show --query mlflow_tracking_uri + ``` ++ # [Python](#tab/python) ++ [!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)] ++ You can get the Azure Machine Learning MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the compute that you're using. The following sample gets the unique MLflow tracking URI associated with your workspace. ++ 1. Sign in to your workspace using the `MLClient`. The easiest way to do that is by using the workspace config file. ++ ```python + from azure.ai.ml import MLClient + from azure.identity import DefaultAzureCredential ++ ml_client = MLClient.from_config(credential=DefaultAzureCredential()) + ``` ++ > [!TIP] + > To download the workspace configuration file: + > + > 1. Navigate to [Azure Machine Learning studio](https://ml.azure.com). + > 1. Select the upper-right corner of the page > **Download config file**. + > 1. Save the file `config.json` in the same directory where you are working. ++ Alternatively, you can use the subscription ID, resource group name, and workspace name to get it. ++ ```python + from azure.ai.ml import MLClient + from azure.identity import DefaultAzureCredential ++ #Enter details of your Azure Machine Learning workspace + subscription_id = '<SUBSCRIPTION_ID>' + resource_group = '<RESOURCE_GROUP>' + workspace_name = '<WORKSPACE_NAME>' ++ ml_client = MLClient(credential=DefaultAzureCredential(), + subscription_id=subscription_id, + resource_group_name=resource_group, + workspace_name=workspace_name) + ``` ++ > [!IMPORTANT] + > `DefaultAzureCredential` tries to pull the credentials from the available context. If you want to specify credentials in a different way, for instance using the web browser in an interactive way, you can use `InteractiveBrowserCredential` or any other method available in the [`azure.identity`](https://pypi.org/project/azure-identity/) package. ++ 1. Get the Azure Machine Learning tracking URI. 
++ ```python + mlflow_tracking_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri + ``` ++ # [Studio](#tab/studio) ++ Use the Azure Machine Learning portal to get the tracking URI. ++ 1. Open the [Azure Machine Learning studio portal](https://ml.azure.com) and sign in using your credentials. + 1. Select the name of your workspace to show the **Directory + Subscription + Workspace** page. + 1. Select **View all properties in Azure Portal**. + 1. In the **Essentials** section, find the property **MLflow tracking URI**. ++ # [Manually](#tab/manual) ++ You can construct the Azure Machine Learning tracking URI using the subscription ID, region where the resource is deployed, resource group name, and workspace name. The following code sample shows how. ++ > [!WARNING] + > If you're working in a private link-enabled workspace, the MLflow endpoint also uses a private link to communicate with Azure Machine Learning. Consequently, the tracking URI looks different than shown here. You need to get the tracking URI using the Azure Machine Learning SDK or CLI v2 in those cases. ++ ```python + region = "<LOCATION>" + subscription_id = '<SUBSCRIPTION_ID>' + resource_group = '<RESOURCE_GROUP>' + workspace_name = '<AML_WORKSPACE_NAME>' ++ mlflow_tracking_uri = f"azureml://{region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}" + ``` ++1. Configure the tracking URI. ++ # [Use MLflow SDK](#tab/mlflow) ++ The method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI. 
++ ```python + import mlflow ++ mlflow.set_tracking_uri(mlflow_tracking_uri) + ``` ++ # [Use environment variables](#tab/environ) ++ You can set the MLflow environment variable [MLFLOW_TRACKING_URI](https://mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server) in your compute so that any interaction with MLflow in that compute points by default to Azure Machine Learning. ++ ```bash + MLFLOW_TRACKING_URI=$(az ml workspace show --query mlflow_tracking_uri | sed 's/"//g') + ``` ++ ++> [!TIP] +> When working with shared environments, like an Azure Databricks cluster, Azure Synapse Analytics cluster, or similar, you can set the environment variable `MLFLOW_TRACKING_URI` at the cluster level. This approach allows you to automatically configure the MLflow tracking URI to point to Azure Machine Learning for all the sessions that run in the cluster rather than to do it on a per-session basis. +> +> :::image type="content" source="./media/how-to-use-mlflow-azure-databricks/env.png" alt-text="Screenshot shows Advanced options where you can configure the environment variables in an Azure Databricks cluster."::: +> +> After you configure the environment variable, any experiment running in such a cluster is tracked in Azure Machine Learning. ++#### Configure authentication ++After you configure tracking, configure how to authenticate to the associated workspace. By default, the Azure Machine Learning plugin for MLflow opens a browser to interactively prompt for credentials. For other ways to configure authentication for MLflow in Azure Machine Learning workspaces, see [Configure MLflow for Azure Machine Learning: Configure authentication](how-to-use-mlflow-configure-tracking.md#configure-authentication). 
[!INCLUDE [configure-mlflow-auth](includes/machine-learning-mlflow-configure-auth.md)] -#### Experiment's names in Azure Machine Learning +#### Name experiment in Azure Machine Learning -When MLflow is configured to exclusively track experiments in Azure Machine Learning workspace, the experiment's naming convention has to follow the one used by Azure Machine Learning. In Azure Databricks, experiments are named with the path to where the experiment is saved like `/Users/alice@contoso.com/iris-classifier`. However, in Azure Machine Learning, you have to provide the experiment name directly. As in the previous example, the same experiment would be named `iris-classifier` directly: +When you configure MLflow to exclusively track experiments in Azure Machine Learning workspace, the experiment naming convention has to follow the one used by Azure Machine Learning. In Azure Databricks, experiments are named with the path to where the experiment is saved, for instance `/Users/alice@contoso.com/iris-classifier`. However, in Azure Machine Learning, you provide the experiment name directly. The same experiment would be named `iris-classifier` directly. ```python mlflow.set_experiment(experiment_name="experiment-name") ``` -### Tracking parameters, metrics and artifacts +#### Track parameters, metrics and artifacts -You can use then MLflow in Azure Databricks in the same way as you're used to. For details see [Log & view metrics and log files](how-to-log-view-metrics.md). +After this configuration, you can use MLflow in Azure Databricks in the same way as you're used to. For more information, see [Log & view metrics and log files](how-to-log-view-metrics.md). -## Logging models with MLflow +## Log models with MLflow -After your model is trained, you can log it to the tracking server with the `mlflow.<model_flavor>.log_model()` method. `<model_flavor>`, refers to the framework associated with the model. 
[Learn what model flavors are supported](https://mlflow.org/docs/latest/models.html#model-api). In the following example, a model created with the Spark library MLLib is being registered: +After your model is trained, you can log it to the tracking server with the `mlflow.<model_flavor>.log_model()` method. `<model_flavor>` refers to the framework associated with the model. [Learn what model flavors are supported](https://mlflow.org/docs/latest/models.html#model-api). +In the following example, a model created with the Spark library MLlib is logged. ```python mlflow.spark.log_model(model, artifact_path = "model") ``` -It's worth to mention that the flavor `spark` doesn't correspond to the fact that we are training a model in a Spark cluster but because of the training framework it was used (you can perfectly train a model using TensorFlow with Spark and hence the flavor to use would be `tensorflow`). +The flavor `spark` doesn't correspond to the fact that you're training a model in a Spark cluster. Instead, it follows from the training framework used. For example, if you train a model using TensorFlow on a Spark cluster, the flavor to use is `tensorflow`. -Models are logged inside of the run being tracked. That means that models are available in either both Azure Databricks and Azure Machine Learning (default) or exclusively in Azure Machine Learning if you configured the tracking URI to point to it. +Models are logged inside of the run being tracked. That means models are available both in Azure Databricks and Azure Machine Learning (the default), or exclusively in Azure Machine Learning if you configured the tracking URI to point to it. > [!IMPORTANT]-> Notice that here the parameter `registered_model_name` has not been specified. Read the section [Registering models in the registry with MLflow](#registering-models-in-the-registry-with-mlflow) for more details about the implications of such parameter and how the registry works. 
+> The parameter `registered_model_name` has not been specified. For more information about this parameter and the registry, see [Register models in the registry with MLflow](#register-models-in-the-registry-with-mlflow). -## Registering models in the registry with MLflow +## Register models in the registry with MLflow -As opposite to tracking, **model registries can't operate** at the same time in Azure Databricks and Azure Machine Learning. Either one or the other has to be used. By default, the Azure Databricks workspace is used for model registries; unless you chose to [set MLflow Tracking to only track in your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace), then the model registry is the Azure Machine Learning workspace. +As opposed to tracking, model registries can't operate at the same time in Azure Databricks and Azure Machine Learning. You have to use one or the other. By default, model registries use the Azure Databricks workspace. If you choose to [set MLflow tracking to only track in your Azure Machine Learning workspace](#track-exclusively-on-azure-machine-learning-workspace), the model registry is the Azure Machine Learning workspace. -Then, considering you're using the default configuration, the following line will log a model inside the corresponding runs of both Azure Databricks and Azure Machine Learning, but it will register it only on Azure Databricks: +If you use the default configuration, the following code logs a model inside the corresponding runs of both Azure Databricks and Azure Machine Learning, but it registers it only on Azure Databricks. ```python mlflow.spark.log_model(model, artifact_path = "model", registered_model_name = 'model_name') ``` -* **If a registered model with the name doesn't exist**, the method registers a new model, creates version 1, and returns a ModelVersion MLflow object. 
+- If a registered model with the name doesn't exist, the method registers a new model, creates version 1, and returns a `ModelVersion` MLflow object. +- If a registered model with the name already exists, the method creates a new model version and returns the version object. -* **If a registered model with the name already exists**, the method creates a new model version and returns the version object. +### Use Azure Machine Learning registry with MLflow -### Using Azure Machine Learning Registry with MLflow +If you want to use Azure Machine Learning Model Registry instead of Azure Databricks, we recommend that you [set MLflow tracking to only track in your Azure Machine Learning workspace](#track-exclusively-on-azure-machine-learning-workspace). This approach removes the ambiguity of where models are being registered and simplifies the configuration. -If you want to use Azure Machine Learning Model Registry instead of Azure Databricks, we recommend you to [set MLflow Tracking to only track in your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace). This will remove the ambiguity of where models are being registered and simplifies complexity. --However, if you want to continue using the dual-tracking capabilities but register models in Azure Machine Learning, you can instruct MLflow to use Azure Machine Learning for model registries by configuring the MLflow Model Registry URI. This URI has the exact same format and value that the MLflow tracking URI. +If you want to continue using the dual-tracking capabilities but register models in Azure Machine Learning, you can instruct MLflow to use Azure Machine Learning for model registries by configuring the MLflow Model Registry URI. This URI has the same format and value as the MLflow tracking URI. 
```python mlflow.set_registry_uri(azureml_mlflow_uri) ``` > [!NOTE]-> The value of `azureml_mlflow_uri` was obtained in the same way it was demostrated in [Set MLflow Tracking to only track in your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace) +> The value of `azureml_mlflow_uri` was obtained in the same way as described in [Set MLflow tracking to only track in your Azure Machine Learning workspace](#track-exclusively-on-azure-machine-learning-workspace). -For a complete example about this scenario please check the example [Training models in Azure Databricks and deploying them on Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb). +For a complete example of this scenario, see [Training models in Azure Databricks and deploying them on Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb). -## Deploying and consuming models registered in Azure Machine Learning +## Deploy and consume models registered in Azure Machine Learning -Models registered in Azure Machine Learning Service using MLflow can be consumed as: +Models registered in Azure Machine Learning Service using MLflow can be consumed as: -* An Azure Machine Learning endpoint (real-time and batch): This deployment allows you to leverage Azure Machine Learning deployment capabilities for both real-time and batch inference in Azure Container Instances (ACI), Azure Kubernetes (AKS) or our Managed Inference Endpoints. +- An Azure Machine Learning endpoint (real-time and batch). This deployment allows you to use Azure Machine Learning deployment capabilities for both real-time and batch inference in Azure Container Instances, Azure Kubernetes, or Managed Inference Endpoints. 
+- MLFlow model objects or Pandas user-defined functions (UDFs), which can be used in Azure Databricks notebooks in streaming or batch pipelines. -* MLFlow model objects or Pandas UDFs, which can be used in Azure Databricks notebooks in streaming or batch pipelines. -### Deploy models to Azure Machine Learning endpoints -You can leverage the `azureml-mlflow` plugin to deploy a model to your Azure Machine Learning workspace. Check [How to deploy MLflow models](how-to-deploy-mlflow-models.md) page for a complete detail about how to deploy models to the different targets. +### Deploy models to Azure Machine Learning endpoints ++You can use the `azureml-mlflow` plugin to deploy a model to your Azure Machine Learning workspace. For more information about how to deploy models to the different targets, see [How to deploy MLflow models](how-to-deploy-mlflow-models.md). > [!IMPORTANT]-> Models need to be registered in Azure Machine Learning registry in order to deploy them. If your models are registered in the MLflow instance inside Azure Databricks, register them again in Azure Machine Learning. 
For more information, see [Training models in Azure Databricks and deploying them on Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/track_with_databricks_deploy_aml.ipynb). -### Deploy models to ADB for batch scoring using UDFs +### Deploy models to Azure Databricks for batch scoring using UDFs -You can choose Azure Databricks clusters for batch scoring. By leveraging Mlflow, you can resolve any model from the registry you are connected to. You will typically use one of the following two methods: +You can choose Azure Databricks clusters for batch scoring. By using MLflow, you can resolve any model from the registry you're connected to. You usually use one of the following methods: -- If your model was trained and built with Spark libraries (like `MLLib`), use `mlflow.pyfunc.spark_udf` to load a model and used it as a Spark Pandas UDF to score new data.-- If your model wasn't trained or built with Spark libraries, either use `mlflow.pyfunc.load_model` or `mlflow.<flavor>.load_model` to load the model in the cluster driver. Notice that in this way, any parallelization or work distribution you want to happen in the cluster needs to be orchestrated by you. Also, notice that MLflow doesn't install any library your model requires to run. Those libraries need to be installed in the cluster before running it.+- If your model was trained and built with Spark libraries like `MLLib`, use `mlflow.pyfunc.spark_udf` to load a model and use it as a Spark Pandas UDF to score new data. +- If your model wasn't trained or built with Spark libraries, either use `mlflow.pyfunc.load_model` or `mlflow.<flavor>.load_model` to load the model in the cluster driver. You need to orchestrate any parallelization or work distribution you want to happen in the cluster. MLflow doesn't install any library your model requires to run. Those libraries need to be installed in the cluster before running it. 
The following example shows how to load a model from the registry named `uci-heart-classifier` and use it as a Spark Pandas UDF to score new data. model_uri = "models:/"+model_name+"/latest" pyfunc_udf = mlflow.pyfunc.spark_udf(spark, model_uri) ``` -> [!TIP] -> Check [Loading models from registry](how-to-manage-models-mlflow.md#load-models-from-registry) for more ways to reference models from the registry. +For more ways to reference models from the registry, see [Loading models from registry](how-to-manage-models-mlflow.md#load-models-from-registry). -Once the model is loaded, you can use to score new data: +After the model is loaded, you can use this command to score new data. ```python #Load Scoring Data into Spark Dataframe display(preds) ## Clean up resources -If you wish to keep your Azure Databricks workspace, but no longer need the Azure Machine Learning workspace, you can delete the Azure Machine Learning workspace. This action results in unlinking your Azure Databricks workspace and the Azure Machine Learning workspace. --If you don't plan to use the logged metrics and artifacts in your workspace, the ability to delete them individually is unavailable at this time. Instead, delete the resource group that contains the storage account and workspace, so you don't incur any charges: --1. In the Azure portal, select **Resource groups** on the far left. -- ![Delete in the Azure portal](./media/how-to-use-mlflow-azure-databricks/delete-resources.png) +If you want to keep your Azure Databricks workspace, but no longer need the Azure Machine Learning workspace, you can delete the Azure Machine Learning workspace. This action results in unlinking your Azure Databricks workspace and the Azure Machine Learning workspace. -1. From the list, select the resource group you created. +If you don't plan to use the logged metrics and artifacts in your workspace, delete the resource group that contains the storage account and workspace. -1. Select **Delete resource group**. 
+1. In the Azure portal, search for *Resource groups*. Under **services**, select **Resource groups**. +1. In the **Resource groups** list, find and select the resource group that you created to open it. +1. In the **Overview** page, select **Delete resource group**. +1. To verify deletion, enter the resource group's name. -1. Enter the resource group name. Then select **Delete**. +## Related content -## Next steps -* [Deploy MLflow models as an Azure web service](how-to-deploy-mlflow-models.md). -* [Manage your models](concept-model-management-and-deployment.md). -* [Track experiment jobs with MLflow and Azure Machine Learning](how-to-use-mlflow.md). -* Learn more about [Azure Databricks and MLflow](/azure/databricks/applications/mlflow/). +- [Deploy MLflow models as an Azure web service](how-to-deploy-mlflow-models.md) +- [Manage your models](concept-model-management-and-deployment.md) +- [Track experiment jobs with MLflow and Azure Machine Learning](how-to-use-mlflow.md) +- [Azure Databricks and MLflow](/azure/databricks/applications/mlflow/) |
machine-learning | How To View Online Endpoints Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-view-online-endpoints-costs.md | Title: View costs for managed online endpoints -description: 'Learn to how view costs for a managed online endpoint in Azure Machine Learning.' +description: 'Learn to view costs for a managed online endpoint in Azure Machine Learning in the Azure portal.' Previously updated : 11/04/2022 Last updated : 08/15/2024 +#customer intent: As an analyst, I need to view the costs associated with the machine learning endpoints for a workspace. # View costs for an Azure Machine Learning managed online endpoint -Learn how to view costs for a managed online endpoint. Costs for your endpoints will accrue to the associated workspace. You can see costs for a specific endpoint using tags. +Learn how to view costs for a managed online endpoint. Costs for your endpoints accrue to the associated workspace. You can see costs for a specific endpoint by using tags. > [!IMPORTANT]-> This article only applies to viewing costs for Azure Machine Learning managed online endpoints. Managed online endpoints are different from other resources since they must use tags to track costs. For more information on managing and optimizing cost for Azure Machine Learning, see [How to manage and optimize cost](how-to-manage-optimize-cost.md). For more information on viewing the costs of other Azure resources, see [Quickstart: Explore and analyze costs with cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md). +> This article only applies to viewing costs for Azure Machine Learning managed online endpoints. Managed online endpoints are different from other resources since they must use tags to track costs. +> +> For more information on managing and optimizing cost for Azure Machine Learning, see [Manage and optimize Azure Machine Learning costs](how-to-manage-optimize-cost.md). 
For more information on viewing the costs of other Azure resources, see [Quickstart: Start using Cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md). ## Prerequisites - Deploy an Azure Machine Learning managed online endpoint.-- Have at least [Billing Reader](../role-based-access-control/role-assignments-portal.yml) access on the subscription where the endpoint is deployed+- Have at least [Billing Reader](../role-based-access-control/role-assignments-portal.yml) access on the subscription where the endpoint is deployed. ## View costs Navigate to the **Cost Analysis** page for your subscription: -1. In the [Azure portal](https://portal.azure.com), Select **Cost Analysis** for your subscription. +- In the [Azure portal](https://portal.azure.com), select **Cost Analysis** for your subscription. - [![Managed online endpoint cost analysis: screenshot of a subscription in the Azure portal showing red box around "Cost Analysis" button on the left hand side.](./media/how-to-view-online-endpoints-costs/online-endpoints-cost-analysis.png)](./media/how-to-view-online-endpoints-costs/online-endpoints-cost-analysis.png#lightbox) + :::image type="content" source="./media/how-to-view-online-endpoints-costs/online-endpoints-cost-analysis.png" alt-text="Screenshot of a subscription in the Azure portal showing red box around Cost Analysis button."::: Create a filter to scope data to your Azure Machine Learning workspace resource: 1. In the second filter dropdown, select your Azure Machine Learning workspace. 
- [![Managed online endpoint cost analysis: screenshot of the Cost Analysis view showing a red box around the "Add filter" button at the top right.](./media/how-to-view-online-endpoints-costs/online-endpoints-cost-analysis-add-filter.png)](./media/how-to-view-online-endpoints-costs/online-endpoints-cost-analysis-add-filter.png#lightbox) + :::image type="content" source="./media/how-to-view-online-endpoints-costs/online-endpoints-cost-analysis-add-filter.png" lightbox="./media/how-to-view-online-endpoints-costs/online-endpoints-cost-analysis-add-filter.png" alt-text="Screenshot of the Cost Analysis view showing a red box around the Add filter button."::: -Create a tag filter to show your managed online endpoint and/or managed online deployment: -1. Select **Add filter** > **Tag** > **azuremlendpoint**: "\<your endpoint name>" -1. Select **Add filter** > **Tag** > **azuremldeployment**: "\<your deployment name>". +Create a tag filter to show your managed online endpoint and managed online deployment: - > [!NOTE] - > Dollar values in this image are fictitious and do not reflect actual costs. +1. Select **Add filter** > **Tag** > **azuremlendpoint**: *\<your endpoint name>*. - [![Managed online endpoint cost analysis: screenshot of the Cost Analysis view showing a red box around the "Tag" buttons in the top right.](./media/how-to-view-online-endpoints-costs/online-endpoints-cost-analysis-select-endpoint-deployment.png)](./media/how-to-view-online-endpoints-costs/online-endpoints-cost-analysis-select-endpoint-deployment.png#lightbox) +1. Select **Add filter** > **Tag** > **azuremldeployment**: *\<your deployment name>*. ++ > [!NOTE] + > Dollar values in this image are fictitious and do not reflect actual costs. 
++ :::image type="content" source="./media/how-to-view-online-endpoints-costs/online-endpoints-cost-analysis-select-endpoint-deployment.png" lightbox="./media/how-to-view-online-endpoints-costs/online-endpoints-cost-analysis-select-endpoint-deployment.png" alt-text="Screenshot of the Cost Analysis view showing a red box around the Tag buttons."::: > [!TIP]-> - Managed online endpoint uses VMs for the deployments. If you submitted request to create an online deployment and it failed, it may have passed the stage when compute is created. In that case, the failed deployment would incur charges. If you finished debugging or investigation for the failure, you may delete the failed deployments to save the cost. +> Managed online endpoints use virtual machines (VMs) for the deployments. If you submitted a request to create an online deployment and it failed, it might have passed the stage when compute is created. In that case, the failed deployment would incur charges. If you finished debugging or investigating the failure, you can delete the failed deployments to save costs. ++## Related content -## Next steps -- [What are endpoints?](concept-endpoints.md)-- Learn how to [monitor your managed online endpoint](./how-to-monitor-online-endpoints.md).-- [How to deploy an ML model with an online endpoint](how-to-deploy-online-endpoints.md)-- [How to manage and optimize cost for Azure Machine Learning](how-to-manage-optimize-cost.md)+- [Endpoints for inference in production](concept-endpoints.md) +- [Monitor online endpoints](./how-to-monitor-online-endpoints.md) +- [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md) +- [Manage and optimize Azure Machine Learning costs](how-to-manage-optimize-cost.md) |
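The two portal tag filters above (`azuremlendpoint` and `azuremldeployment`) can also be expressed programmatically. The sketch below builds a filter payload in the shape used by the Cost Management Query REST API; the helper names and the exact schema are assumptions for illustration, so verify against the API reference before relying on it:

```python
import json

# Hedged sketch (not from the article): build a Cost Management Query API
# style filter that scopes costs to one endpoint and one deployment by tag.
# tag_filter and endpoint_cost_filter are hypothetical helper names.

def tag_filter(tag_name: str, value: str) -> dict:
    """Build a single tag comparison expression."""
    return {"tags": {"name": tag_name, "operator": "In", "values": [value]}}

def endpoint_cost_filter(endpoint: str, deployment: str) -> dict:
    """Combine the endpoint and deployment tag filters with a logical AND."""
    return {"and": [tag_filter("azuremlendpoint", endpoint),
                    tag_filter("azuremldeployment", deployment)]}

payload = endpoint_cost_filter("my-endpoint", "blue")
print(json.dumps(payload, indent=2))
```

The same payload could be posted as the `dataset.filter` of a query definition; building it from the two tag names keeps scripted cost reports aligned with the portal filters.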
machine-learning | Reference Yaml Job Command | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-command.md | |
machine-learning | Tutorial Auto Train Image Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md | You write code using the Python SDK in this tutorial and learn the following tas * Python 3.6 or 3.7 is supported for this feature -* Download and unzip the [**odFridgeObjects.zip*](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb) section of the notebook. +* Download and unzip the [*odFridgeObjects.zip*](https://automlsamplenotebookdata.blob.core.windows.net/image-object-detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb) section of the notebook. * Use a compute instance to follow this tutorial without further installation. 
(See how to [create a compute instance](./quickstart-create-resources.md#create-a-compute-instance).) Or install the CLI/SDK to use your own local environment. |
managed-ccf | Application Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/application-scenarios.md | A recommended approach is to deploy the LOB application on an Azure Confidential Due to several recent breaches in the supply chain industry, organizations are exploring ways to increase visibility and auditability into their manufacturing process. On the other hand, consumer awareness of unfair manufacturing processes and mistreatment of the workforce has increased. In this example, we describe a scenario that tracks the life of coffee beans from the farm to the cup. Fabrikam is a coffee bean roaster and retailer. It hosts an existing LOB web application that is used by different personas like farmers, distributors, Fabrikam's procurement team and the end consumers. To improve security and auditability, Fabrikam deploys the web application to an Azure Confidential VM and uses decentralized RBAC managed in Managed CCF by a consortium of members. -A sample [decentralized RBAC application](https://github.com/microsoft/ccf-app-samples/tree/main/decentralize-rbac-app) is published in GitHub for reference. +A sample [decentralized RBAC application](https://github.com/microsoft/ccf-app-samples/tree/main/decentralized-rbac-app) is published in GitHub for reference. ## Data for Purpose |
migrate | How To Create A Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-a-group.md | If you want to create a group manually outside of creating an assessment, do the - If you haven't yet added the Azure Migrate: Discovery and assessment tool, click to add it. [Learn more](how-to-assess.md). - If you haven't yet created an Azure Migrate project, [learn more](./create-manage-projects.md). - ![Select groups](./media/how-to-create-a-group/select-groups.png) - 2. Click the **Group** icon. 3. In **Create group**, specify a group name, and in **Appliance name**, select the Azure Migrate appliance you're using for server discovery. 4. From the server list, select the servers you want to add to the group > **Create**. - ![Create group](./media/how-to-create-a-group/create-group.png) - You can now use this group when you [create an Azure VM assessment](how-to-create-assessment.md) or [an Azure VMware Solution (AVS) assessment](how-to-create-azure-vmware-solution-assessment.md) or [an Azure SQL assessment](how-to-create-azure-sql-assessment.md) or [an Azure App Service assessment](how-to-create-azure-app-service-assessment.md). ## Refine a group with dependency mapping Dependency mapping helps you to visualize dependencies across servers. You typic If you've already [set up dependency mapping](how-to-create-group-machine-dependencies.md), and want to refine an existing group, do the following: -1. In the **Servers** tab, in **Azure Migrate: Discovery and assessment** tile, click **Groups**. -2. Click the group you want to refine. +1. In the **Servers, databases and web apps** tab, in **Azure Migrate: Discovery and assessment** tile, select **Groups**. +2. Select the group you want to refine. - If you haven't yet set up dependency mapping, the **Dependencies** column will show a **Requires installation** status. For each server for which you want to visualize dependencies, click **Requires installation**. 
Install a couple of agents on each server before you can map the server dependencies. [Learn more](how-to-create-group-machine-dependencies.md). ![Add dependency mapping](./media/how-to-create-a-group/add-dependency-mapping.png) |
migrate | How To Create Group Machine Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-group-machine-dependencies.md | This article describes how to set up agent-based dependency analysis in Azure Mi ## Associate a workspace -1. After you've discovered servers for assessment, in **Servers** > **Azure Migrate: Discovery and assessment**, click **Overview**. +1. After you've discovered servers for assessment, in **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Overview**. 2. In **Azure Migrate: Discovery and assessment**, click **Essentials**. 3. In **OMS Workspace**, click **Requires configuration**. |
migrate | How To Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-migrate.md | This article describes how to add migration tools in [Azure Migrate](./migrate-s 1. In the Azure Migrate project, select **Overview**. 2. Select the migration scenario you want to use: - - To migrate machines and workloads to Azure, select **Assess and migrate servers**. + - To migrate machines and workloads to Azure, select **Discover, assess and migrate**. - To migrate on-premises databases, select **Assess and migrate databases**. - To migrate on-premises web apps, select **Explore more** > **Web Apps**. - To migrate data to Azure using Data box, select **Explore more** > **Data box**. - ![Options for selecting a migrate scenario](./media/how-to-migrate/migrate-scenario.png) -- ## Select a migration tool 1. Add a tool: - - If you created an Azure Migrate project using the **Assess and migrate servers** option in the portal, the Migration and modernization tool is automatically added to the project. To add additional migration tools, in **Servers**, next to **Migration tools**, select **Add more tools**. - - ![Button to add additional migration tools](./media/how-to-migrate/add-migration-tools.png) -- - If you created a project using a different option, and don't yet have any migration tools, in **Servers** > **Migration tools**, select **Click here**. -- ![Button to add first migration tools](./media/how-to-migrate/no-migration-tool.png) + - If you created an Azure Migrate project using the **Discover, assess and migrate** option in the portal, the Migration and modernization tool is automatically added to the project. To add additional migration tools, in **Servers, databases and web apps** > **Migration and modernization**, select **Click here**. 2. In **Azure Migrate** > **Add tools**, select the tools you want to add. Then select **Add tool**. 
- ![Select assessment tools from list](./media/how-to-migrate/select-migration-tool.png) -- ## Select a database migration tool If you created an Azure Migrate project using the **Assess and migrate databases** option in the portal, the Database Migration tool is automatically added to the project. 1. If the Database Migration tool isn't in the project, in **Databases** > **Assessment tools**, select **Click here**.- - ![Add database migration tools](./media/how-to-migrate/no-database-migration-tool.png) - 2. In **Azure Migrate** > **Add tools**, select the Database Migration tool. Then select **Add tool**. - ![Select the database migration tool from list](./media/how-to-migrate/select-database-migration-tool.png) -- - ## Select a web app migration tool -If you created an Azure Migrate project using the **Explore more** > **WebApps** option in the portal, the Web app migration tool is automatically added to the project. +If you created an Azure Migrate project using the **Explore more** > **Web apps** option in the portal, the Web app migration tool is automatically added to the project. 1. If the Web app migration tool isn't in the project, in **Web apps** > **Assessment tools**, select **Click here**.-- ![Add web app migration tools](./media/how-to-migrate/no-web-app-migration-tool.png) 2. In **Azure Migrate** > **Add tools**, select the Web App Migration tool. Then select **Add tool**. - ![Select webapp assessment tools from list](./media/how-to-migrate/select-web-app-migration-tool.png) - ## Order an Azure Data Box |
migrate | Troubleshoot Discovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-discovery.md | If the servers don't appear in the portal, wait for a few minutes because it tak If the state still doesn't change: -- Select **Refresh** on the **Servers** tab to see the count of the discovered servers in Azure Migrate: Discovery and assessment and Migration and modernization.+- Select **Refresh** on the **Servers, databases and web apps** tab to see the count of the discovered servers in Azure Migrate: Discovery and assessment and Migration and modernization. If the preceding step doesn't work and you're discovering VMware servers: |
migrate | Tutorial Assess Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-gcp.md | Run an assessment as follows: ![Location of Assess and migrate servers button](./media/tutorial-assess-vmware-azure-vm/assess.png) 2. In **Azure Migrate: Discovery and assessment**, select **Assess**.-- ![Location of the Assess button](./media/tutorial-assess-vmware-azure-vm/assess-servers.png) - 3. In **Assess servers** > **Assessment type**, select **Azure VM**. 4. In **Discovery source**: Run an assessment as follows: 1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment. -1. After the assessment is created, view it in **Servers** > **Azure Migrate: Discovery and assessment** > **Assessments**. +1. After the assessment is created, view it in **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** > **Assessments**. 1. Select **Export assessment** to download it as an Excel file. > [!NOTE] |
migrate | Tutorial Assess Physical | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-physical.md | Run an assessment as follows: ![Location of Assess and migrate servers button.](./media/tutorial-assess-vmware-azure-vm/assess.png) 2. In **Azure Migrate: Discovery and assessment**, select **Assess** > **Azure VM**.-- ![Location of the Assess button](./media/tutorial-assess-vmware-azure-vm/assess-servers.png) - 3. In **Assess servers** > **Assessment type**, select **Azure VM**. 4. In **Discovery source**: Run an assessment as follows: 1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment. -1. After the assessment is created, view it in **Servers** > **Azure Migrate: Discovery and assessment** > **Assessments**. +1. After the assessment is created, view it in **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment** > **Assessments**. 1. Select **Export assessment**, to download it as an Excel file. |
migrate | Tutorial Discover Import | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md | Set up a new Azure Migrate project if you don't have one. The **Azure Migrate: Discovery and assessment** tool is added by default to the new project. -![Page showing Server Assessment tool added by default](./media/tutorial-discover-import/added-tool.png) - ## Prepare the CSV Download the CSV template and add server information to it. ### Download the template -1. In **Migration goals** > **Servers** > **Azure Migrate: Discovery and assessment**, select **Discover**. +1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**. 2. In **Discover machines**, select **Import using CSV**. 3. Select **Download** to download the CSV template. Alternatively, you can [download it directly](https://go.microsoft.com/fwlink/?linkid=2109031). For example, to specify all fields for a second disk, add these columns: After adding information to the CSV template, import the CSV file. -1. In **Migration goals** > **Servers** > **Azure Migrate: Discovery and assessment**, select **Discover**. +1. In **Migration goals** > **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select **Discover**. 1. In **Discover machines**, select **Import using CSV** 1. Upload the .csv file and select **Import**. 3. The import status is shown. |
migrate | Tutorial Migrate Physical Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md | When delta replication begins, you can run a test migration for the VMs before y To do a test migration: -1. In **Migration goals**, select **Servers** > **Migration and modernization**, select **Replicated servers** under **Replications**. +1. In **Migration goals**, select **Servers, databases and web apps** > **Migration and modernization**, and then select **Replicated servers** under **Replications**. 1. In the **Replicating machines** tab, right-click the VM to test and select **Test migrate**. |
migrate | Tutorial Migrate Vmware Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/tutorial-migrate-vmware-agent.md | Download the template as follows: 1. In the Azure Migrate project, select **Servers, databases and web apps** under **Migration goals**. 2. In **Servers, databases and web apps** > **Migration and modernization**, click **Discover**.-- ![Discover VMs](../media/tutorial-migrate-vmware-agent/migrate-discover.png) - 3. In **Discover machines** > **Are your machines virtualized?**, click **Yes, with VMware vSphere hypervisor**. 4. In **How do you want to migrate?**, select **Using agent-based replication**. 5. In **Target region**, select the Azure region to which you want to migrate the machines. 6. Select **Confirm that the target region for migration is region-name**. 7. Click **Create resources**. This creates an Azure Site Recovery vault in the background. You can't change the target region for this project after clicking this button, and all subsequent migrations are to this region. - ![Create Recovery Services vault](../media/tutorial-migrate-vmware-agent/create-resources.png) - > [!NOTE] > If you selected private endpoint as the connectivity method for the Azure Migrate project when it was created, the Recovery Services vault will also be configured for private endpoint connectivity. Ensure that the private endpoints are reachable from the replication appliance: [**Learn more**](../troubleshoot-network-connectivity.md) 8. In **Do you want to install a new replication appliance?**, select **Install a replication appliance**. 9. Click **Download**. This downloads an OVF template.- ![Download OVA](../media/tutorial-migrate-vmware-agent/download-ova.png) 10. Note the name of the resource group and the Recovery Services vault. You need these during appliance deployment. Select VMs for migration. 1. 
In the Azure Migrate project > **Servers, databases and web apps** > **Migration and modernization**, click **Replicate**. - :::image type="content" source="../media/tutorial-migrate-vmware-agent/select-replicate.png" alt-text="Screenshot of the Servers screen in Azure Migrate. The Replicate button is selected in the Migration and modernization tool under Migration tools."::: - 2. In **Replicate** > **Source settings** > **Are your machines virtualized?**, select **Yes, with VMware vSphere**. 3. In **On-premises appliance**, select the name of the Azure Migrate appliance that you set up. 4. In **vCenter server**, specify the name of the vCenter server managing the VMs, or the vSphere server on which the VMs are hosted. 5. In **Process Server**, select the name of the replication appliance. 6. In **Guest credentials**, specify the VM admin account that will be used for push installation of the Mobility service. Then click **Next: Virtual machines**. - :::image type="content" source="../media/tutorial-migrate-vmware-agent/source-settings.png" alt-text="Screenshot of the Source settings tab in the Replicate screen. The Guest credentials field is highlighted and the value is set to VM-admin-account."::: - 7. In **Virtual Machines**, select the machines that you want to replicate. - If you've run an assessment for the VMs, you can apply VM sizing and disk type (premium/standard) recommendations from the assessment results. To do this, in **Import migration settings from an Azure Migrate assessment?**, select the **Yes** option. Select VMs for migration. - No infrastructure redundancy required option if you don't need either of these availability configurations for the migrated machines. 9. Check each VM you want to migrate. Then click **Next: Target settings**. - :::image type="content" source="../media/tutorial-migrate-vmware-agent/select-vms-inline.png" alt-text="Screenshot on selecting VMs."
lightbox="../media/tutorial-migrate-vmware-agent/select-vms-expanded.png"::: - 10. In **Target settings**, select the subscription, and target region to which you'll migrate, and specify the resource group in which the Azure VMs will reside after migration. 11. In **Virtual Network**, select the Azure VNet/subnet to which the Azure VMs will be joined after migration. 12. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the dropdown if you'd like to specify a different storage account to use as the cache storage account for replication. <br/> Select VMs for migration. - Select **No** if you don't want to apply Azure Hybrid Benefit. Then click **Next**. - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then click **Next**. - :::image type="content" source="../media/tutorial-migrate-vmware/target-settings.png" alt-text="Screenshot on target settings."::: - 16. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-vmware-migration.md#azure-vm-requirements). - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**. Select VMs for migration. 17. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**. - You can exclude disks from replication. 
- If you exclude disks, they won't be present on the Azure VM after migration.-- :::image type="content" source="../media/tutorial-migrate-vmware-agent/disks-inline.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box." lightbox="../media/tutorial-migrate-vmware-agent/disks-expanded.png"::: - You can exclude disks if the mobility agent is already installed on that server. [Learn more](../../site-recovery/exclude-disks-replication.md#exclude-limitations). 18. In **Tags**, choose to add tags to your Virtual machines, Disks, and NICs. - :::image type="content" source="../media/tutorial-migrate-vmware/tags-inline.png" alt-text="Screenshot shows the tags tab of the Replicate dialog box." lightbox="../media/tutorial-migrate-vmware/tags-expanded.png"::: - 19. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers. > [!NOTE] Select VMs for migration. 1. Track job status in the portal notifications. - ![Track job](../media/tutorial-migrate-vmware-agent/jobs.png) - 2. To monitor replication status, click **Replicating servers** in **Migration and modernization**. - ![Monitor replication](../media/tutorial-migrate-vmware-agent/replicate-servers.png) - Replication occurs as follows: - When the Start Replication job finishes successfully, the machines begin their initial replication to Azure. - After initial replication finishes, delta replication begins. Incremental changes to on-premises disks are periodically replicated to the replica disks in Azure. When delta replication begins, you can run a test migration for the VMs, before Do a test migration as follows: -1. In **Migration goals** > **Servers** > **Migration and modernization**, select **Test migrated servers**. -- :::image type="content" source="../media/tutorial-migrate-vmware-agent/test-migrated-servers.png" alt-text="Screenshot of Test migrated servers."::: +1. 
In **Migration goals** > **Servers, databases and web apps** > **Migration and modernization**, select **Test migrated servers**. 2. Right-click the VM to test, and click **Test migrate**. - :::image type="content" source="../media/tutorial-migrate-vmware-agent/test-migrate-inline.png" alt-text="Screenshot of Test migration." lightbox="../media/tutorial-migrate-vmware-agent/test-migrate-expanded.png"::: - 3. In **Test Migration**, select the Azure VNet in which the Azure VM will be located after the migration. We recommend you use a non-production VNet. 4. The **Test migration** job starts. Monitor the job in the portal notifications. 5. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has a suffix **-Test**. 6. After the test is done, right-click the Azure VM in **Replicating machines**, and click **Clean up test migration**. - :::image type="content" source="../media/tutorial-migrate-vmware-agent/clean-up-inline.png" alt-text="Screenshot of Clean up migration." lightbox="../media/tutorial-migrate-vmware-agent/clean-up-expanded.png"::: - > [!NOTE] > You can now register your servers running SQL server with SQL VM RP to take advantage of automated patching, automated backup and simplified license management using SQL IaaS Agent Extension. >- Select **Manage** > **Replicating servers** > **Machine containing SQL server** > **Compute and Network** and select **yes** to register with SQL VM RP. Do a test migration as follows: After you've verified that the test migration works as expected, you can migrate the on-premises machines. 1. In the Azure Migrate project > **Servers, databases and web apps** > **Migration and modernization**, select **Replicating servers**.-- ![Replicating servers](../media/tutorial-migrate-vmware-agent/replicate-servers.png) - 2. In **Replicating machines**, right-click the VM > **Migrate**. 3. 
In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **Yes** > **OK**. - By default Azure Migrate shuts down the on-premises VM to ensure minimum data loss. After you've verified that the test migration works as expected, you can migrate - Deploy [Azure Disk Encryption](/azure/virtual-machines/disk-encryption-overview) to help secure disks, and keep data safe from theft and unauthorized access. - Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). - For monitoring and management:- - Consider deploying [Azure Cost Management](../../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending. + - Consider deploying [Microsoft Cost Management](../../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending. |
modeling-simulation-workbench | Concept Chamber | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-chamber.md | The right-sizing feature reduces the Azure spend by identifying idle and underut Learn more about reducing service costs using [Azure Advisor](/azure/advisor/advisor-cost-recommendations#optimize-spend-for-mariadb-mysql-and-postgresql-servers-by-right-sizing) and [right-size VMs best practices](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs#best-practice-right-size-vms). -## Next steps +## Related content - [Connector](./concept-connector.md) |
modeling-simulation-workbench | Concept Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-connector.md | For organizations who have an Azure network setup to manage access for their emp For those organizations who don't have an Azure network setup, or prefer to use the public network, they can configure their connector to allow access to the chamber via allowlisted Public IP addresses. The connector object allows the allowed IP list to be configured at creation time or added or removed dynamically after the connector object is created. -## Next steps +## Related content - [Data pipeline](./concept-data-pipeline.md) |
modeling-simulation-workbench | Concept Data Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-data-pipeline.md | Users with access to the chamber can export data from the chamber via the data p > [!NOTE] > Larger files take longer to be available to download after being approved and to download using AzCopy. Check the expiration on the download URI and request a new one if the window has expired. -## Next steps +## Related content - [License service](./concept-license-service.md) |
modeling-simulation-workbench | Concept License Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-license-service.md | For silicon EDA, our service automation deploys license servers for each of the This flow is extendible and can also include other software vendors across industry verticals." -## Next steps +## Related content -- [Manage users in Azure Modeling and Simulation Workbench](./how-to-guide-manage-users.md)+- Learn more about the benefits and key features of using [Shared storage](./shared-storage.md). |
modeling-simulation-workbench | Concept User Personas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-user-personas.md | The Project Manager, also known as the *Chamber Admin*, is responsible for insta The Design Engineer is responsible for execution of the workflows and simulations leading up to the final design approval. This role is referred to as the *Chamber User*. Chamber Users have a lower level of access to the environment, but can deploy workloads, execute scripts and schedulers based on their access permissions to chamber storages. They can also use the [data pipeline](./concept-data-pipeline.md), to bring data into the chamber and request data to be exported from chamber. -## Next steps +## Related content - [Chamber](./concept-chamber.md) |
modeling-simulation-workbench | Concept Workbench | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/concept-workbench.md | The Azure virtual network enables over-provisioned network resources with high b - Remote desktop service - As robust security is mandatory to protect IP within and outside chambers, remote desktop access needs to be secured, with custom restrictions on data transfer through the sessions. Customer IT admins can enable multifactor authentication through [Microsoft Entra ID](/azure/active-directory/) and provision role assignments to Modeling and Simulation Workbench users. -## Next steps +## Related content - [User personas](./concept-user-personas.md) |
modeling-simulation-workbench | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/disaster-recovery.md | + + Title: Disaster recovery for Modeling and Simulation Workbench +description: This article provides an overview of disaster recovery for Azure Modeling and Simulation Workbench workbench component. ++++ Last updated : 08/21/2024+++# Disaster recovery for Modeling and Simulation Workbench ++This article explains how to configure a disaster recovery environment for Azure Modeling and Simulation Workbench. Azure data center outages are rare but can last anywhere from a few minutes to hours. Data center outages can disrupt the environments hosted in that data center. By following the steps in this article, Azure Modeling and Simulation Workbench customers can continue to operate in the cloud in the event of a data center outage in the primary region hosting your workbench instance. ++Planning for disaster recovery involves identifying expected recovery point objectives (RPO) and recovery time objectives (RTO) for your instance. Based upon your risk tolerance and expected RPO, follow these instructions at an interval appropriate for your business needs. ++A typical disaster recovery workflow starts with a failure of a critical service in your primary region. As the issue gets investigated, Azure publishes an expected time for the primary region to recover. If this timeframe is not acceptable for business continuity, and the problem does not impact your secondary region, you would start the process to fail over to the secondary region. ++## Achieving business continuity for Azure Modeling and Simulation Workbench +To be prepared for a data center outage, you can have a Modeling and Simulation Workbench instance provisioned in a secondary region. 
++These Workbench resources can be configured to match the resources that exist in the primary Azure Modeling and Simulation workbench instance. Users in the workbench instance environment can be provisioned ahead of time, or when you switch to the secondary region. Chamber and connector resources can be put in a stopped state post-deployment to invoke idle billing meters when not being used actively. ++Alternatively, if you don't want to have an Azure Modeling and Simulation Workbench instance provisioned in the secondary region until an outage impacts your primary region, follow the provided steps in the Quickstart, but stop before creating the Azure Modeling and Simulation Workbench instance in the secondary region. That step can be executed when you choose to create the workbench resources in the secondary region as a failover. ++## Prerequisites ++- Ensure that the services and features that your account uses are supported in the target secondary region. ++## Verify Entra ID tenant ++The workspace source and destination can be in the same subscription. If the source and destination for the workbench are different subscriptions, the subscriptions must exist within the same Entra ID tenant. Use Azure PowerShell to verify that both subscriptions have the same tenant ID. ++```powershell +(Get-AzSubscription -SubscriptionId <your-source-subscription>).TenantId ++(Get-AzSubscription -SubscriptionId <your-destination-subscription>).TenantId +``` ++## Prepare ++To be prepared to work in a backup region, you would need to do some preparation ahead of the outage. ++First, identify your backup region. ++A list of supported regions can be found on the [Azure product availability roadmap](https://global.azure.com/product-roadmap/pam/roadmap) by searching for the service name: **Azure Modeling and Simulation Workbench**. + +Then, create a backup of your Azure Key Vault and keys used by Azure Modeling and Simulation in Key Vault including: ++1. Application Client Id key +2. 
Application Secret key ++## Configure the new instance ++In the event of your primary region failure, and decision to work in a backup region, you would create a Modeling and Simulation Workbench instance in your backup region. ++1. Register to the Azure Modeling and Simulation Workbench Resource Provider as described in [Create an Azure Modeling and Simulation Workbench](/azure/modeling-simulation-workbench/quickstart-create-portal#register-azure-modeling-and-simulation-workbench-resource-provider). ++1. Create an Azure Modeling and Simulation Workbench using this section of the Quickstart. ++1. If desired, upload data into the new backup instance following Upload Files section of instructions. ++You can now do your work in the new workbench instance created in the backup region. +++## Cleanup ++Once your primary region is up and operating, and you no longer need your backup instance, you can delete it. ++## Related content ++- [Backup and disaster recovery for Azure applications](/azure/reliability/cross-region-replication-azure) ++- [Azure status](https://azure.status.microsoft/status) |
modeling-simulation-workbench | Shared Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/shared-storage.md | + + Title: Shared storage for Modeling and Simulation Workbench +description: This article provides an overview of shared storage for Azure Modeling and Simulation Workbench workbench component. ++++ Last updated : 08/21/2024+++# Shared storage for Modeling and Simulation Workbench ++To enable cross team and/or cross-organization collaboration in a secure manner within the workbench, a shared storage resource allows for selective data sharing between collaborating parties. It's an Azure NetApp Files based storage volume and is available to deploy in multiples of 4 TBs. Workbench owners can create multiple shared storage instances on demand and dynamically link them to existing chambers to facilitate secure collaboration. ++Users who are provisioned to a specific chamber can access all shared storage volumes linked to that chamber. Once users get deprovisioned from a chamber or that chamber gets deleted, they lose access to any linked shared storage volumes. ++## Key features of shared storage ++**Performance**: A shared storage resource within the workbench is high-performance Azure NetApp Files based, targeting complex engineering workloads. It isn't limited to being used as a data transfer mechanism. The resource can also be used to run simulations. ++**Scalability**: Users can adjust the storage capacity and performance tier according to their needs, just like chamber private storage. ++**Management**: Workbench Owners can manage storage capacity, resize storage, and change performance tiers through the Azure portal. ++## Related content ++- [Business continuity: Disaster recovery](./disaster-recovery.md) |
openshift | Azure Redhat Openshift Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/azure-redhat-openshift-release-notes.md | Azure Red Hat OpenShift receives improvements on an ongoing basis. To stay up to ## Updates - August 2024 -You can now create up to 20 IP addresses per Azure Red Hat OpenShift cluster load balancer. This feature was previously in preview but is now generally available. See [Configure multiple IP addresses per cluster load balancer](howto-multiple-ips.md) for details. Azure Red Hat OpenShift 4.x has a 250 pod-per-node limit and a 250 compute node limit. For instructions on adding large clusters, see [Deploy a large Azure Red Hat OpenShift cluster](howto-large-clusters.md). +- You can now create up to 20 IP addresses per Azure Red Hat OpenShift cluster load balancer. This feature was previously in preview but is now generally available. See [Configure multiple IP addresses per cluster load balancer](howto-multiple-ips.md) for details. Azure Red Hat OpenShift 4.x has a 250 pod-per-node limit and a 250 compute node limit. For instructions on adding large clusters, see [Deploy a large Azure Red Hat OpenShift cluster](howto-large-clusters.md). -There's a change in the order of actions performed by Site Reliability Engineers of Azure RedHat OpenShift. To maintain the health of a cluster, a timely action is necessary if control plane resources are over-utilized. Now the control plane is resized proactively to maintain cluster health. After the resize of the control plane, a notification is sent out to you with the details of the changes made to the control plane. Make sure you have the quota available in your subscription for Site Reliability Engineers to perform the cluster resize action. +- There's a change in the order of actions performed by Site Reliability Engineers of Azure Red Hat OpenShift. 
To maintain the health of a cluster, a timely action is necessary if control plane resources are over-utilized. Now the control plane is resized proactively to maintain cluster health. After the resize of the control plane, a notification is sent out to you with the details of the changes made to the control plane. Make sure you have the quota available in your subscription for Site Reliability Engineers to perform the cluster resize action. ## Version 4.14 - May 2024 |
programmable-connectivity | Azure Programmable Connectivity Create Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/programmable-connectivity/azure-programmable-connectivity-create-gateway.md | Title: Create an Azure Programmable Connectivity Gateway -description: In this guide, learn how to create an APC gateway. +description: In this guide, learn how to create an APC Gateway. Last updated 07/22/2024 -# Quickstart: Create an APC gateway +# Quickstart: Create an APC Gateway In this quickstart, you learn how to create an Azure Programmable Connectivity (APC) gateway and subscribe to API plans in the Azure portal. > [!NOTE]-> Deleting and modifying existing APC gateways is not supported during the preview. Please open a support ticket in the Azure Portal if you need to delete an APC Gateway. +> Deleting and modifying existing APC Gateways is not supported during the preview. Please open a support ticket in the Azure Portal if you need to delete an APC Gateway. +> ++> [!NOTE] +> Moving an APC Gateway to a different resource group is not supported. If you need to move an APC Gateway to a different resource group, you must delete it and recreate it. > ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.+- Register your Azure subscription with the provider `Microsoft.ProgrammableConnectivity`, following [these instructions](/azure/azure-resource-manager/management/resource-providers-and-types) ### Sign in to the Azure portal Sign in to the [Azure portal](https://portal.azure.com). -### Create a new APC gateway +### Create a new APC Gateway 1. In the Azure portal, Search for **APC Gateways** and then select **Create**. :::image type="content" source="media/search.jpg" alt-text="Screenshot of Azure portal showing the search box." lightbox="media/search.jpg"::: -1. Select the **Subscription**, **Resource Group** and **Region** for the gateway. 
+1. Select the **Subscription**, **Resource Group**, and **Region** for the gateway. :::image type="content" source="media/create.jpg" alt-text="Screenshot of the create gateway page in Azure portal." lightbox="media/create.jpg"::: Sign in to the [Azure portal](https://portal.azure.com). ### Provide application details -In order to use the operators API, you must provide extra details, which will be shared with the operator. +In order to use the operators' APIs, you must provide extra details, which are shared with the operators. -1. Fill out the Application name, Application description, Legal entity, Tax number and the privacy manger's email address in the text boxes. +1. Fill out the Application name, Application description, Legal entity, Tax number, and the privacy manager's email address in the text boxes. :::image type="content" source="media/app-details.jpg" alt-text="Screenshot of the application details page in the Azure portal." lightbox="media/app-details.jpg"::: On the **Terms and Conditions** page. :::image type="content" source="media/terms.jpg" alt-text="Screenshot of the terms and conditions page in the Azure portal." lightbox="media/terms.jpg"::: -1. Repeat the above step for each line. +1. Repeat for each line. 1. Click **Next**. ### Verify details and create On the **Terms and Conditions** page. Once you see the **Validation passed** message, click **Create**. :::image type="content" source="media/verify-create.jpg" alt-text="Screenshot of the verify and create page in Azure portal." lightbox="media/verify-create.jpg":::++### Wait for approval ++On clicking **Create** your APC Gateway and one or more Operator API Connections are created. Each Operator API Connection represents a connection between your APC Gateway and a particular API at a particular operator. ++Operator API Connections don't finish deploying until the relevant operator has approved your onboarding request. 
For some operators, approval is a manual process and may take several days. ++After an operator approves your onboarding request for an Operator API Connection, you are able to use APC with that Operator API Connection. This is the case even if your APC Gateway has other Operator API Connections that are not yet approved, with that operator or with another operator. |
programmable-connectivity | Azure Programmable Connectivity Using Network Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/programmable-connectivity/azure-programmable-connectivity-using-network-apis.md | Create an APC Gateway, following instructions in [Create an APC Gateway](azure-p - Obtain the Resource ID of your APC Gateway. This can be found by navigating to the APC Gateway in the Azure portal, clicking `JSON View` in the top right, and copying the value of `Resource ID`. Note this as `APC_IDENTIFIER`. - Obtain the URL of your APC Gateway. This can be found by navigating to the APC Gateway in the Azure portal, and obtaining the `Gateway base URL` under Properties. Note this as `APC_URL`.--## Obtain an authentication token --1. Follow the instructions at [How to create a Service Principal](/entra/identity-platform/howto-create-service-principal-portal) to create an App Registration that can be used to access your APC Gateway. - - For the step "Assign a role to the application", go to the APC Gateway in the Azure portal and follow the instructions from `3. Select Access Control (IAM)` onwards. Assign the new App registration the `Azure Programmable Connectivity Gateway Dataplane User` role. - - At the step "Set up authentication", select "Option 3: Create a new client secret". Note the value of the secret as `CLIENT_SECRET`, and store it securely (for example in an Azure Key Vault). - - After you have created the App registration, copy the value of Client ID from the Overview page, and note it as `CLIENT_ID`. -2. Navigate to "Tenant Properties" in the Azure portal. Copy the value of Tenant ID, and note it as `TENANT`. -3. Obtain a token from your App Registration. This can be done using an HTTP request, following the instructions in the [documentation](/entra/identity-platform/v2-oauth2-client-creds-grant-flow#first-case-access-token-request-with-a-shared-secret). 
Alternatively, you can use an SDK (such as those for [Python](/entra/msal/python/), [.NET](/entra/msal/dotnet/), and [Java](/entra/msal/java/)). - - When asked for `client_id`, `client_secret`, and `tenant`, use the values obtained in this process. Use `https://management.azure.com//.default` as the scope. -4. Note the value of the token you have obtained as `APC_AUTH_TOKEN`. +- APC uses [Azure RBAC](/azure/role-based-access-control/overview) to control access. Choose the identity that you are going to use to access APC. This can be your own user identity, a [Service Principal](/entra/identity-platform/app-objects-and-service-principals), or a [Managed Identity](/entra/identity/managed-identities-azure-resources/overview). Add the role assignment `Azure Programmable Connectivity Gateway Dataplane User` for your chosen identity over a scope that includes your APC Gateway (e.g. the APC Gateway itself, the Resource Group that contains it, or the Subscription that contains it). For more details, see the following instructions: + - [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal) + - [Assign Azure roles using Azure CLI](/azure/role-based-access-control/role-assignments-cli) + - [Assign Azure roles using Azure PowerShell](/azure/role-based-access-control/role-assignments-powershell) ## Use an API Create an APC Gateway, following instructions in [Create an APC Gateway](azure-p - Phone number: a phone number in E.164 format (starting with country code), optionally prefixed with '+'. 
- Hashed phone number: the SHA-256 hash, in hexadecimal representation, of a phone number -#### Headers --All requests must contain the following headers: --- `Authorization`: This must have the value of `<APC_AUTH_TOKEN>` obtained in [Obtain an authentication token](#obtain-an-authentication-token).-- `apc-gateway-id`: This must have the value of `<APC_IDENTIFIER>` obtained in [Prerequisites](#prerequisites).--Requests may also contain the following optional header: --- `x-ms-client-request-id`: This is a unique ID to identify the specific request. This is useful for diagnosing and fixing errors.- #### Network identifier Each request body contains the field `networkIdentifier`. This object contains the fields `identifierType` and `identifier`. These values are used to identify which Network a request should be routed to. APC can identify the correct Network in one of three ways: | Operator | NetworkCode | | -- | -- |-| Claro Brazil | Claro_Brazil | -| Telefonica Brazil | Telefonica_Brazil | -| TIM Brazil | Tim_Brazil | -| Orange Spain | Orange_Spain | | Telefonica Spain | Telefonica_Spain | -### Retrieve the last time that a SIM card was changed +### Contact APC using an SDK -Make a POST request to the endpoint `https://<APC_URL>/sim-swap/sim-swap:verify`. +You can use a .NET SDK to contact APC. For more information and usage samples, see the [documentation](/dotnet/api/overview/azure/communication.programmableconnectivity-readme). ++If you use the .NET SDK, it is recommended that you also use the [.NET Entra Authentication SDK](/entra/msal/dotnet/). ++### Contact APC using HTTP requests ++#### Obtain an auth token ++To contact APC using HTTP requests, you must obtain an auth token, and set the header `Authorization` to the value of this token. There are numerous ways to do this. Some examples are shown below. 
++# [Obtain a token for a Service Principal](#tab/service-principal) ++To obtain a token for a Service Principal, follow instructions in the [Entra documentation](/entra/identity-platform/v2-oauth2-client-creds-grant-flow#get-a-token) -It must contain all common headers specified in [Headers](#headers). +# [Obtain a token for your own user identity](#tab/user-identity) ++Obtain a token for your own user identity using the Azure CLI command [get-access-token](/cli/azure/account?view=azure-cli-latest#az-account-get-access-token&preserve-view=true) or the PowerShell command [Get-AzAccessToken](/powershell/module/az.accounts/get-azaccesstoken). Set the `--resource` parameter (in Azure CLI) or the `-ResourceUrl` parameter (in PowerShell) to `https://management.azure.com/`. ++++#### Set other headers ++All requests must contain the following header: ++- `apc-gateway-id`: This must have the value of `<APC_IDENTIFIER>` obtained in [Prerequisites](#prerequisites). ++Requests may also contain the following optional header: ++- `x-ms-client-request-id`: This is a unique ID to identify the specific request. This is useful for diagnosing and fixing errors. ++#### Retrieve the last time that a SIM card was changed ++Make a POST request to the endpoint `https://<APC_URL>/sim-swap/sim-swap:retrieve`. ++It must contain a [bearer token](#obtain-an-auth-token) and [other required headers](#set-other-headers). The body of the request must take the following form. Replace the example values with real values. The response is of the form: `date` contains the timestamp of the most recent SIM swap in the `date-time` format defined in [RFC 3339](https://datatracker.ietf.org/doc/html/rfc3339#section-5.6). `date` may be `null`: this means that the SIM has never been swapped, or has not been swapped within the timeframe that the Operator maintains data for. 
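The request described above can be sketched in a few lines of Python. This is an illustration only: the `phoneNumber` field name and the exact body schema are assumptions (the article doesn't show the full body here), while `networkIdentifier`, `identifierType`, `identifier`, and the `NetworkCode` value follow the names this article uses. Check the APC API reference for the authoritative schema.

```python
# Hedged sketch of the sim-swap:retrieve request shape described above.
# Field names marked below are assumptions, not confirmed API schema.

def build_sim_swap_retrieve(apc_url, apc_identifier, token, phone_number, network_code):
    """Return (url, headers, body) for the sim-swap:retrieve POST."""
    url = f"https://{apc_url}/sim-swap/sim-swap:retrieve"
    headers = {
        "Authorization": f"Bearer {token}",   # auth token obtained as described above
        "apc-gateway-id": apc_identifier,     # Resource ID of the APC Gateway
        "Content-Type": "application/json",
    }
    body = {
        "phoneNumber": phone_number,          # ASSUMED field name; E.164, optional '+'
        "networkIdentifier": {                # routes the request to the right Network
            "identifierType": "NetworkCode",
            "identifier": network_code,       # e.g. "Telefonica_Spain"
        },
    }
    return url, headers, body

url, headers, body = build_sim_swap_retrieve(
    "example-gateway.net", "<APC_IDENTIFIER>", "<APC_AUTH_TOKEN>",
    "+34600000000", "Telefonica_Spain")
```

These values could then be passed to any HTTP client, for example `requests.post(url, headers=headers, json=body)`.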
-### Verify that a SIM card has been swapped in a certain time frame +#### Verify that a SIM card has been swapped in a certain time frame -Make a POST request to the endpoint `https://<APC_URL>/sim-swap/sim-swap:retrieve`. +Make a POST request to the endpoint `https://<APC_URL>/sim-swap/sim-swap:verify`. -It must contain all common headers specified in [Headers](#headers). +It must contain a [bearer token](#obtain-an-auth-token) and [other required headers](#set-other-headers). The body of the request must take the following form. Replace the example values with real values. The response is of the form: `verificationResult` is a boolean, which is true if the SIM has been swapped in the specified time period, and false otherwise. -### Verify the location of a device +#### Verify the location of a device Make a POST request to the endpoint `https://<APC_URL>/device-location/location:verify`. -It must contain all common headers specified in [Headers](#headers). +It must contain a [bearer token](#obtain-an-auth-token) and [other required headers](#set-other-headers). The body of the request must take one of the following forms, which vary based on the format used to identify the device. Replace the example values with real values. The response is of the form: `verificationResult` is a boolean, which is true if the device is within a certain distance (given by `accuracy`) of the given location, and false otherwise. -### Verify the number of a device +#### Verify the number of a device Number verification is different from other APIs, as it requires interaction with a frontend application (i.e. an application running on a device) to verify the number of that device, as part of a flow referred to as "frontend authorization". This means two separate calls to APC must be made: the first to trigger frontend authorization, and the second to request the desired information. 
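The hashed phone number input format mentioned earlier is defined in this article as the SHA-256 hash, in hexadecimal representation, of a phone number. A minimal Python sketch follows; whether the leading '+' should be included before hashing is an assumption to confirm against the API reference.

```python
import hashlib

def hash_phone_number(phone_number: str) -> str:
    # SHA-256 of the phone number, hex-encoded -- the "hashed phone number"
    # input format this article describes. Including the leading '+' before
    # hashing is an assumption; confirm against the API reference.
    return hashlib.sha256(phone_number.encode("utf-8")).hexdigest()

digest = hash_phone_number("+34600000000")  # 64 lowercase hex characters
```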
To use number verification functionality, you must expose an endpoint on the backend of your application that is accessible from your application's frontend. This endpoint is used to pass the result of frontend authorization to the backend of your application. Note the full URL to this endpoint as `REDIRECT_URI`. -#### Call 1 +##### Frontend authorization call Make a POST request to the endpoint `https://<APC_URL>/number-verification/number:verify`. -It must contain all common headers specified in [Headers](#headers). +It must contain a [bearer token](#obtain-an-auth-token) and [other required headers](#set-other-headers). The body of the request must take one of the following forms. Replace the example values with real values. At the end of the authorization flow, the Network returns a 302 redirect. This r The frontend of your application must follow this `redirectUri`. This delivers the `apcCode` to your application's backend. -#### Call 2 +##### Number Verification call At the end of the frontend authorization call, your frontend made a request to the endpoint exposed at `redirectUri` with a parameter `apcCode`. Your backend must obtain the value of `apcCode` and use it in the second call to APC. Make a POST request to the endpoint `https://<APC_URL>/number-verification/number:verify`. -It must contain all common headers specified in [Headers](#headers). +It must contain a [bearer token](#obtain-an-auth-token) and [other required headers](#set-other-headers). The body of the request must take the following form. Replace the value of `apcCode` with the value obtained as a result of the authorization flow. The response is of the form: `verificationResult` is a boolean, which is true if the device has the number (or hashed number) specified in the frontend authorization call, and false otherwise. -### Obtain the Network of a device +#### Obtain the network of a device Make a POST request to the endpoint `https://<APC_URL>/device-network/network:retrieve`. 
-It must contain all common headers specified in [Headers](#headers). +It must contain a [bearer token](#obtain-an-auth-token) and [other required headers](#set-other-headers). The body of the request must take the following form. Replace the example values with real values. |
reliability | Reliability Cosmos Db Nosql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-cosmos-db-nosql.md | |
reliability | Reliability Postgresql Flexible Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-postgresql-flexible-server.md | |
reliability | Recommend Cosmos Db Nosql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/resiliency-recommendations/recommend-cosmos-db-nosql.md | |
search | Search Create Service Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md | Service name requirements: > [!TIP] > If you have multiple search services, it helps to include the region (or location) in the service name as a naming convention. A name like `mysearchservice-westus` can save you a trip to the properties page when deciding how to combine or attach resources. -## Choose a tier --Azure AI Search is offered in [multiple pricing tiers](https://azure.microsoft.com/pricing/details/search/): Free, Basic, Standard, or Storage Optimized. Each tier has its own [capacity and limits](search-limits-quotas-capacity.md). There are also several features that are tier-dependent. --Review the [tier descriptions](search-sku-tier.md) for computing characteristics and [feature availability](search-sku-tier.md#feature-availability-by-tier). --Basic and Standard are the most common choices for production workloads, but many customers start with the Free service. Among the billable tiers, key differences are partition size and speed, and limits on the number of objects you can create. ---Search services created after April 3, 2024 have larger partitions and higher vector quotas. --Remember, a pricing tier can't be changed once the service is created. If you need a higher or lower tier, you should re-create the service. - ## Choose a region > [!IMPORTANT] If you use multiple Azure services, putting all of them in the same region minim Generally, choose a region near you, unless the following considerations apply: -+ Your nearest region is capacity constrained. West Europe is at capacity and unavailable for new instances. Other regions are [at capacity for specific tiers](search-sku-tier.md#region-availability-by-tier). One advantage to using the Azure portal for resource set up is that it provides only those regions and tiers that are available. You can't select regions or tiers that are unavailable. 
++ Your nearest region is capacity constrained. West Europe is at capacity and unavailable for new instances. Other regions are [at capacity for specific tiers](search-sku-tier.md#region-availability-by-tier). One advantage to using the Azure portal for resource setup is that it provides only those regions and tiers that are available. You can't select regions or tiers that are unavailable. + You want to use integrated data chunking and vectorization or built-in skills for AI enrichment. Azure OpenAI and Azure AI services multiservice accounts must be in the same region as Azure AI Search for integration purposes. [Choose a region](search-region-support.md) that provides all necessary resources. Here's a checklist for choosing a region: 1. Is Azure AI Search available in a nearby region? Check the [supported regions list](search-region-support.md). Capacity-constrained regions are indicated in the footnotes. -1. Do you know which tier you want to use? Check [region availability by tier](search-sku-tier.md#region-availability-by-tier) to determine if you can create a search service at the desired tier in your region of choice. +1. Do you know which tier you want to use? Tiers are covered in the next step. Check [region availability by tier](search-sku-tier.md#region-availability-by-tier) to determine if you can create a search service at the desired tier in your region of choice. 1. Do you need [AI enrichment](cognitive-search-concept-intro.md) or [integrated data chunking and vectorization](vector-search-integrated-vectorization.md)? Verify that Azure OpenAI and Azure AI services are [offered in the same region](search-region-support.md) as Azure AI Search. Here's a checklist for choosing a region: 1. Do you have business continuity and disaster recovery (BCDR) requirements? 
Such requirements dictate creating multiple search services in [regional pairs](../availability-zones/cross-region-replication-azure.md#azure-paired-regions) in [availability zones](search-reliability.md#availability-zones). For example, if you're operating in North America, you might choose East US and West US, or North Central US and South Central US, for each search service. +## Choose a tier ++Azure AI Search is offered in [multiple pricing tiers](https://azure.microsoft.com/pricing/details/search/): Free, Basic, Standard, or Storage Optimized. Each tier has its own [capacity and limits](search-limits-quotas-capacity.md). There are also several features that are tier-dependent. ++Review the [tier descriptions](search-sku-tier.md) for computing characteristics and [feature availability](search-sku-tier.md#feature-availability-by-tier). ++Basic and Standard are the most common choices for production workloads, but many customers start with the Free service. Among the billable tiers, key differences are partition size and speed, and limits on the number of objects you can create. +++Search services created after April 3, 2024 have larger partitions and higher vector quotas. ++Currently, some regions are tier-constrained. For more information, see [region availability by tier](search-sku-tier.md#region-availability-by-tier). ++Remember, a pricing tier can't be changed once the service is created. If you need a higher or lower tier, you should re-create the service. + ## Create your service After you've provided the necessary inputs, go ahead and create the service. |
sentinel | Microsoft Sentinel Defender Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-sentinel-defender-portal.md | description: Learn about changes in the Microsoft Defender portal with the integ Previously updated : 08/13/2024 Last updated : 08/16/2024 appliesto: - Microsoft Sentinel in the Microsoft Defender portal +- Blog post: [Frequently asked questions about the unified security operations platform](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/frequently-asked-questions-about-the-unified-security-operations/ba-p/4212048) - [Connect Microsoft Sentinel to Microsoft Defender XDR](/microsoft-365/security/defender/microsoft-sentinel-onboard) ## New and improved capabilities |
sentinel | Unified Connector Custom Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/unified-connector-custom-device.md | Each application section contains the following information: - The outline of the procedure required to ingest data manually, without using the connector. For the details of this procedure, see [Collect logs from text files with the Azure Monitor Agent and ingest to Microsoft Sentinel](connect-custom-logs-ama.md). - Specific instructions for configuring the originating applications or devices themselves, and/or links to the instructions on the providers' web sites. These steps must be taken whether using the connector or not. -**The following devices' instructions are provided here:** --- [Apache HTTP Server](#apache-http-server)-- [Apache Tomcat](#apache-tomcat)-- [Cisco Meraki](#cisco-meraki) (appliance)-- [Jboss Enterprise Application Platform](#jboss-enterprise-application-platform)-- [JuniperIDP](#juniperidp) (appliance)-- [MarkLogic Audit](#marklogic-audit)-- [MongoDB Audit](#mongodb-audit)-- [NGINX HTTP Server](#nginx-http-server)-- [Oracle WebLogic Server](#oracle-weblogic-server)-- [PostgreSQL Events](#postgresql-events)-- [SecurityBridge Threat Detection for SAP](#securitybridge-threat-detection-for-sap)-- [SquidProxy](#squidproxy)-- [Ubiquiti UniFi](#ubiquiti-unifi) (appliance)-- [VMware vCenter](#vmware-vcenter) (appliance)-- [Zscaler Private Access (ZPA)](#zscaler-private-access-zpa) (appliance)--### Apache HTTP Server +## Apache HTTP Server Follow these steps to ingest log messages from Apache HTTP Server: Follow these steps to ingest log messages from Apache HTTP Server: [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### Apache Tomcat +## Apache Tomcat Follow these steps to ingest log messages from Apache Tomcat: Follow these steps to ingest log messages from 
Apache Tomcat: [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### Cisco Meraki +## Cisco Meraki Follow these steps to ingest log messages from Cisco Meraki: Follow these steps to ingest log messages from Cisco Meraki: [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### JBoss Enterprise Application Platform +## JBoss Enterprise Application Platform Follow these steps to ingest log messages from JBoss Enterprise Application Platform: Follow these steps to ingest log messages from JBoss Enterprise Application Plat [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### JuniperIDP +## JuniperIDP Follow these steps to ingest log messages from JuniperIDP: Follow these steps to ingest log messages from JuniperIDP: [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### MarkLogic Audit +## MarkLogic Audit Follow these steps to ingest log messages from MarkLogic Audit: Follow these steps to ingest log messages from MarkLogic Audit: [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### MongoDB Audit +## MongoDB Audit Follow these steps to ingest log messages from MongoDB Audit: Follow these steps to ingest log messages from MongoDB Audit: [Back to list](#specific-instructions-per-application-type) | [Back to 
top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### NGINX HTTP Server +## NGINX HTTP Server Follow these steps to ingest log messages from NGINX HTTP Server: Follow these steps to ingest log messages from NGINX HTTP Server: [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### Oracle WebLogic Server +## Oracle WebLogic Server Follow these steps to ingest log messages from Oracle WebLogic Server: Follow these steps to ingest log messages from Oracle WebLogic Server: [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### PostgreSQL Events +## PostgreSQL Events Follow these steps to ingest log messages from PostgreSQL Events: Follow these steps to ingest log messages from PostgreSQL Events: [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### SecurityBridge Threat Detection for SAP +## SecurityBridge Threat Detection for SAP Follow these steps to ingest log messages from SecurityBridge Threat Detection for SAP: Follow these steps to ingest log messages from SecurityBridge Threat Detection f [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### SquidProxy +## SquidProxy Follow these steps to ingest log messages from SquidProxy: Follow these steps to ingest log messages from SquidProxy: [Back to list](#specific-instructions-per-application-type) | [Back to 
top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### Ubiquiti UniFi +## Ubiquiti UniFi Follow these steps to ingest log messages from Ubiquiti UniFi: Follow these steps to ingest log messages from Ubiquiti UniFi: [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### VMware vCenter +## VMware vCenter Follow these steps to ingest log messages from VMware vCenter: Follow these steps to ingest log messages from VMware vCenter: [Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications) -### Zscaler Private Access (ZPA) +## Zscaler Private Access (ZPA) Follow these steps to ingest log messages from Zscaler Private Access (ZPA): |
storage | Storage Blob Container Create Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-java.md | The following example creates a new `BlobContainerClient` object with the contai To learn more about creating a container using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerCreate.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for creating a container use the following REST API operation: - [Create Container](/rest/api/storageservices/create-container) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerCreate.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)]+ |
storage | Storage Blob Container Delete Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-java.md | The following example finds a deleted container, gets the version of that delete To learn more about deleting a container using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerDelete.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for deleting or restoring a container use the following REST API operations: The Azure SDK for Java contains libraries that build on top of the Azure REST AP - [Delete Container](/rest/api/storageservices/delete-container) (REST API) - [Restore Container](/rest/api/storageservices/restore-container) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerDelete.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] ### See also - [Soft delete for containers](soft-delete-container-overview.md) - [Enable and manage soft delete for containers](soft-delete-container-enable.md)+ |
storage | Storage Blob Container Lease Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-java.md | The following example breaks the lease on a container: To learn more about leasing a container using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerLease.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for leasing a container use the following REST API operation: - [Lease Container](/rest/api/storageservices/lease-container) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerLease.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] ## See also - [Managing Concurrency in Blob storage](concurrency-manage.md)+ |
storage | Storage Blob Container Properties Metadata Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-java.md | The following example reads in metadata values: To learn more about setting and retrieving container properties and metadata using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerPropertiesMetadata.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for setting and retrieving properties and metadata use the following REST API operations: The Azure SDK for Java contains libraries that build on top of the Azure REST AP The `getProperties` method retrieves container properties and metadata by calling both the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation and the [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) operation. -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerPropertiesMetadata.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)]+ |
storage | Storage Blob Container User Delegation Sas Create Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-java.md | The following code example shows how to use the user delegation SAS created in t To learn more about creating a user delegation SAS using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobSAS.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library method for getting a user delegation key uses the following REST API operation: - [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobSAS.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] ### See also - [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md) - [Create a user delegation SAS](/rest/api/storageservices/create-user-delegation-sas)+ |
storage | Storage Blob Containers List Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-java.md | You can also return a smaller set of results, by specifying the size of the page To learn more about listing containers using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerList.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for listing containers use the following REST API operation: - [List Containers](/rest/api/storageservices/list-containers2) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-containers/src/main/java/com/blobs/devguide/containers/ContainerList.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] ## See also - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources)+ |
storage | Storage Blob Copy Async Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-java.md | This method wraps the [Abort Copy Blob](/rest/api/storageservices/abort-copy-blo To learn more about copying blobs using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods covered in this article use the following REST API operations: The Azure SDK for Java contains libraries that build on top of the Azure REST AP - [Copy Blob](/rest/api/storageservices/copy-blob) (REST API) - [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)]+ |
storage | Storage Blob Copy Url Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-java.md | The Azure SDK for Java contains libraries that build on top of the Azure REST AP - [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobCopy.java) [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)]+ |
storage | Storage Blob Delete Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-java.md | This method restores the content and metadata of a soft-deleted blob and any ass To learn more about how to delete blobs and restore deleted blobs using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDelete.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for deleting blobs and restoring deleted blobs use the following REST API operations: The Azure SDK for Java contains libraries that build on top of the Azure REST AP - [Delete Blob](/rest/api/storageservices/delete-blob) (REST API) - [Undelete Blob](/rest/api/storageservices/undelete-blob) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDelete.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] ### See also - [Soft delete for blobs](soft-delete-blob-overview.md) - [Blob versioning](versioning-overview.md)+ |
storage | Storage Blob Download Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-java.md | To learn more about tuning data transfer options, see [Performance tuning for up To learn more about how to download blobs using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDownload.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for downloading blobs use the following REST API operation: - [Get Blob](/rest/api/storageservices/get-blob) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDownload.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)]+ |
storage | Storage Blob Java Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md | Blob client library information: ## Authorize access and connect to Blob Storage -To connect an application to Blob Storage, create an instance of the [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) class. You can also use the [BlobServiceAsyncClient](/java/api/com.azure.storage.blob.blobserviceasyncclient) class for [asynchronous programming](/azure/developer/java/sdk/async-programming). This object is your starting point to interact with data resources at the storage account level. You can use it to operate on the storage account and its containers. You can also use the service client to create container clients or blob clients, depending on the resource you need to work with. +To connect an app to Blob Storage, create an instance of the [BlobServiceClient](/java/api/com.azure.storage.blob.blobserviceclient) class. You can also use the [BlobServiceAsyncClient](/java/api/com.azure.storage.blob.blobserviceasyncclient) class for [asynchronous programming](/azure/developer/java/sdk/async-programming). This object is your starting point to interact with data resources at the storage account level. You can use it to operate on the storage account and its containers. You can also use the service client to create container clients or blob clients, depending on the resource you need to work with. To learn more about creating and managing client objects, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md). You can authorize a `BlobServiceClient` object by using a Microsoft Entra author ## [Microsoft Entra ID (recommended)](#tab/azure-ad) -To authorize with Microsoft Entra ID, you'll need to use a [security principal](../../active-directory/develop/app-objects-and-service-principals.md). 
Which type of security principal you need depends on where your application runs. Use the following table as a guide: +To authorize with Microsoft Entra ID, you'll need to use a [security principal](../../active-directory/develop/app-objects-and-service-principals.md). Which type of security principal you need depends on where your app runs. Use the following table as a guide: -| Where the application runs | Security principal | Guidance | +| Where the app runs | Security principal | Guidance | | | | | | Local machine (developing and testing) | Service principal | To learn how to register the app, set up a Microsoft Entra group, assign roles, and configure environment variables, see [Authorize access using developer service principals](/dotnet/azure/sdk/authentication-local-development-service-principal?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json). | | Local machine (developing and testing) | User identity | To learn how to set up a Microsoft Entra group, assign roles, and sign in to Azure, see [Authorize access using developer credentials](/dotnet/azure/sdk/authentication-local-development-dev-accounts?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json). | For information about how to obtain account keys and best practice guidelines fo -## Build your application +## Build your app -As you build applications to work with data resources in Azure Blob Storage, your code primarily interacts with three resource types: storage accounts, containers, and blobs. To learn more about these resource types, how they relate to one another, and how apps interact with resources, see [Understand how apps interact with Blob Storage data resources](storage-blob-object-model.md). +As you build apps to work with data resources in Azure Blob Storage, your code primarily interacts with three resource types: storage accounts, containers, and blobs. 
To learn more about these resource types, how they relate to one another, and how apps interact with resources, see [Understand how apps interact with Blob Storage data resources](storage-blob-object-model.md). -The following guides show you how to work with data resources and perform specific actions using the Azure Storage client library for Java: +The following guides show you how to access data and perform specific actions using the Azure Storage client library for Java: | Guide | Description | |--||-| [Create a container](storage-blob-container-create-java.md) | Create containers. | -| [Delete and restore containers](storage-blob-container-delete-java.md) | Delete containers, and if soft-delete is enabled, restore deleted containers. | -| [List containers](storage-blob-containers-list-java.md) | List containers in an account and the various options available to customize a listing. | -| [Manage properties and metadata (containers)](storage-blob-container-properties-metadata-java.md) | Get and set properties and metadata for containers. | -| [Create and manage container leases](storage-blob-container-lease-java.md) | Establish and manage a lock on a container. | -| [Create and manage blob leases](storage-blob-lease-java.md) | Establish and manage a lock on a blob. | -| [Upload blobs](storage-blob-upload-java.md) | Learn how to upload blobs by using strings, streams, file paths, and other methods. | -| [Download blobs](storage-blob-download-java.md) | Download blobs by using strings, streams, and file paths. | +| [Configure a retry policy](storage-retry-policy-java.md) | Implement retry policies for client operations. | | [Copy blobs](storage-blob-copy-java.md) | Copy a blob from one location to another. |-| [List blobs](storage-blobs-list-java.md) | List blobs in different ways. | +| [Create a container](storage-blob-container-create-java.md) | Create blob containers. 
| +| [Create a user delegation SAS (blobs)](storage-blob-user-delegation-sas-create-java.md) | Create a user delegation SAS for a blob. | +| [Create a user delegation SAS (containers)](storage-blob-container-user-delegation-sas-create-java.md) | Create a user delegation SAS for a container. | +| [Create and manage blob leases](storage-blob-lease-java.md) | Establish and manage a lock on a blob. | +| [Create and manage container leases](storage-blob-container-lease-java.md) | Establish and manage a lock on a container. | | [Delete and restore](storage-blob-delete-java.md) | Delete blobs, and if soft-delete is enabled, restore deleted blobs. |+| [Delete and restore containers](storage-blob-container-delete-java.md) | Delete containers, and if soft-delete is enabled, restore deleted containers. | +| [Download blobs](storage-blob-download-java.md) | Download blobs by using strings, streams, and file paths. | | [Find blobs using tags](storage-blob-tags-java.md) | Set and retrieve tags as well as use tags to find blobs. |+| [List blobs](storage-blobs-list-java.md) | List blobs in different ways. | +| [List containers](storage-blob-containers-list-java.md) | List containers in an account and the various options available to customize a listing. | | [Manage properties and metadata (blobs)](storage-blob-properties-metadata-java.md) | Get and set properties and metadata for blobs. |+| [Manage properties and metadata (containers)](storage-blob-container-properties-metadata-java.md) | Get and set properties and metadata for containers. | +| [Performance tuning for data transfers](storage-blobs-tune-upload-download-java.md) | Optimize performance for data transfer operations. | | [Set or change a blob's access tier](storage-blob-use-access-tier-java.md) | Set or change the access tier for a block blob. |+| [Upload blobs](storage-blob-upload-java.md) | Learn how to upload blobs by using strings, streams, file paths, and other methods. | |
storage | Storage Blob Lease Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-java.md | The following example breaks the lease on a blob: To learn more about managing blob leases using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobLease.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for managing blob leases use the following REST API operation: - [Lease Blob](/rest/api/storageservices/lease-blob) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobLease.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] ### See also - [Managing Concurrency in Blob storage](concurrency-manage.md)+ |
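The lease operations covered in the article above boil down to an exclusive, time-limited lock identified by a lease ID. As an illustration of those semantics only — not the Azure SDK API; the `LeaseSketch` class and its method names are invented for this sketch — an in-memory model might look like:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only: models blob-lease semantics (an exclusive,
// time-limited lock identified by a lease ID) without the Azure SDK.
class LeaseSketch {
    private record Lease(String id, Instant expiresAt) {}
    private final ConcurrentHashMap<String, Lease> leases = new ConcurrentHashMap<>();

    // Acquire a lease on a blob name; fails if an unexpired lease exists.
    public Optional<String> acquire(String blobName, Duration duration) {
        String id = UUID.randomUUID().toString();
        Lease winner = leases.compute(blobName, (k, current) ->
            (current == null || Instant.now().isAfter(current.expiresAt()))
                ? new Lease(id, Instant.now().plus(duration))
                : current);
        return winner.id().equals(id) ? Optional.of(id) : Optional.empty();
    }

    // Releasing requires the matching lease ID, mirroring the service's rule.
    public boolean release(String blobName, String leaseId) {
        Lease current = leases.get(blobName);
        if (current != null && current.id().equals(leaseId)) {
            leases.remove(blobName);
            return true;
        }
        return false;
    }
}
```

The real service layers renew, change, and break operations on top of this same acquire/release core.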
storage | Storage Blob Properties Metadata Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-java.md | The following code example reads metadata on a blob and prints each key/value pa To learn more about how to manage system properties and user-defined metadata using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobPropertiesMetadataTags.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for managing system properties and user-defined metadata use the following REST API operations: The Azure SDK for Java contains libraries that build on top of the Azure REST AP - [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) (REST API) - [Get Blob Metadata](/rest/api/storageservices/get-blob-metadata) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobPropertiesMetadataTags.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)]+ |
storage | Storage Blob Tags Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md | The following example finds all blobs tagged as an image: To learn more about how to use index tags to manage and find data using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobPropertiesMetadataTags.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for managing and using blob index tags use the following REST API operations: The Azure SDK for Java contains libraries that build on top of the Azure REST AP - [Set Blob Tags](/rest/api/storageservices/set-blob-tags) (REST API) - [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobPropertiesMetadataTags.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] ### See also - [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) - [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)+ |
storage | Storage Blob Upload Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md | You can have greater control over how to divide uploads into blocks by manually To learn more about uploading blobs using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobUpload.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for uploading blobs use the following REST API operations: The Azure SDK for Java contains libraries that build on top of the Azure REST AP - [Put Blob](/rest/api/storageservices/put-blob) (REST API) - [Put Block](/rest/api/storageservices/put-block) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobUpload.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] ### See also - [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) - [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)+ |
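The block-upload guide above stages chunks with Put Block and then commits an ordered block list. A hedged, SDK-free sketch of just the chunking and block-ID bookkeeping (`BlockSplitter` and its names are invented for illustration; the service requires block IDs to be Base64 strings of equal length within a blob, which the zero-padded counter below guarantees):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Base64;
import java.util.List;

// Sketch of the staged-block pattern behind Put Block / Put Block List:
// the payload is split into fixed-size chunks, each tagged with a
// same-length Base64 block ID; the ID list is then committed in order.
// This models the bookkeeping only; no Azure SDK types are used.
class BlockSplitter {
    record Block(String id, byte[] data) {}

    static List<Block> split(byte[] payload, int blockSize) {
        List<Block> blocks = new ArrayList<>();
        for (int offset = 0, n = 0; offset < payload.length; offset += blockSize, n++) {
            int end = Math.min(offset + blockSize, payload.length);
            // Zero-pad the counter before encoding so every ID has equal length.
            String id = Base64.getEncoder()
                .encodeToString(String.format("%06d", n).getBytes(StandardCharsets.UTF_8));
            blocks.add(new Block(id, Arrays.copyOfRange(payload, offset, end)));
        }
        return blocks;
    }
}
```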
storage | Storage Blob Use Access Tier Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-java.md | To learn more about copying a blob with Java, see [Copy a blob with Java](storag To learn more about setting access tiers using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobAccessTier.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for setting access tiers use the following REST API operation: The Azure SDK for Java contains libraries that build on top of the Azure REST AP [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobAccessTier.java)- ### See also - [Access tiers best practices](access-tiers-best-practices.md) - [Blob rehydration from the archive tier](archive-rehydrate-overview.md)+ |
storage | Storage Blob User Delegation Sas Create Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-java.md | The following code example shows how to use the user delegation SAS created in t To learn more about creating a user delegation SAS using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobSAS.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library method for getting a user delegation key uses the following REST API operation: - [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobSAS.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] ### See also - [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md) - [Create a user delegation SAS](/rest/api/storageservices/create-user-delegation-sas)+ |
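A SAS token, including the user delegation variant covered above, is at heart a set of query parameters plus an HMAC-SHA256 signature over a canonical string-to-sign; a user delegation SAS signs with a Microsoft Entra delegation key rather than the account key. The sketch below shows only the generic signing step with an invented, simplified string-to-sign — it is not the service's actual canonical format:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch: HMAC-SHA256 over a string-to-sign, Base64-encoded,
// which is the general shape of a SAS signature. The string-to-sign used
// in tests below is simplified and not the service's exact format.
class SasSketch {
    static String sign(String stringToSign, byte[] key) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return Base64.getEncoder()
                .encodeToString(mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));
        } catch (java.security.GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```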
storage | Storage Blobs List Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-java.md | Blob name: folderA/folderB/file3.txt To learn more about how to list blobs using the Azure Blob Storage client library for Java, see the following resources. +### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobList.java) + ### REST API operations The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for listing blobs use the following REST API operation: - [List Blobs](/rest/api/storageservices/list-blobs) (REST API) -### Code samples --- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobList.java)- [!INCLUDE [storage-dev-guide-resources-java](../../../includes/storage-dev-guides/storage-dev-guide-resources-java.md)] ### See also - [Enumerating Blob Resources](/rest/api/storageservices/enumerating-blob-resources) - [Blob versioning](versioning-overview.md)+ |
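The flat versus hierarchical listing the article above covers comes down to this: names like `folderA/folderB/file3.txt` are flat keys, and virtual directories are derived by cutting each name at the first delimiter after the prefix. An illustrative stdlib-only sketch of that idea (not the SDK's API):

```java
import java.util.List;
import java.util.TreeSet;

// Sketch of hierarchical ("by directory") listing over a flat namespace:
// names containing a further delimiter past the prefix surface as virtual
// directories, the same idea the service's delimiter listing uses.
class ListSketch {
    static TreeSet<String> listAtPrefix(List<String> blobNames, String prefix) {
        TreeSet<String> results = new TreeSet<>();
        for (String name : blobNames) {
            if (!name.startsWith(prefix)) continue;
            String rest = name.substring(prefix.length());
            int slash = rest.indexOf('/');
            // A name with a further delimiter shows up as a virtual directory.
            results.add(slash < 0 ? rest : rest.substring(0, slash + 1));
        }
        return results;
    }
}
```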
storage | Storage Blobs Tune Upload Download Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-java.md | During a download, the Storage client libraries split a given download request i ## Next steps +- This article is part of the Blob Storage developer guide for Java. See the full list of developer guide articles at [Build your app](storage-blob-java-get-started.md#build-your-app). - To understand more about factors that can influence performance for Azure Storage operations, see [Latency in Blob storage](storage-blobs-latency.md). - To see a list of design considerations to optimize performance for apps using Blob storage, see [Performance and scalability checklist for Blob storage](storage-performance-checklist.md). |
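The transfer tuning described above centers on splitting a large request into ranges bounded by a configured block size. A minimal sketch of that planning step, with invented names (`ChunkPlanner` is not an SDK type):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of how a ranged transfer is planned: a total size is divided
// into (offset, length) chunks no larger than a configured block size,
// which is the shape of the tuning knobs the article discusses.
class ChunkPlanner {
    record Range(long offset, long length) {}

    static List<Range> plan(long totalSize, long maxChunk) {
        List<Range> ranges = new ArrayList<>();
        for (long offset = 0; offset < totalSize; offset += maxChunk) {
            // The final chunk may be shorter than maxChunk.
            ranges.add(new Range(offset, Math.min(maxChunk, totalSize - offset)));
        }
        return ranges;
    }
}
```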
storage | Storage Retry Policy Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy-java.md | BlobServiceClient client = new BlobServiceClientBuilder() In this example, each service request issued from the `BlobServiceClient` object uses the retry options as defined in the `RequestRetryOptions` instance. This policy applies to client requests. You can configure various retry strategies for service clients based on the needs of your app. -## Related content +## Next steps +- This article is part of the Blob Storage developer guide for Java. See the full list of developer guide articles at [Build your app](storage-blob-java-get-started.md#build-your-app). - For architectural guidance and general best practices for retry policies, see [Transient fault handling](/azure/architecture/best-practices/transient-faults). - For guidance on implementing a retry pattern for transient failures, see [Retry pattern](/azure/architecture/patterns/retry). |
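Retry options like the `RequestRetryOptions` mentioned above typically parameterize an exponential backoff: the delay doubles per attempt up to a cap, often with jitter so many clients don't retry in lockstep. A stdlib-only sketch of that calculation (illustrative, not the SDK's internal algorithm):

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of exponential backoff with an optional full-jitter variant:
// delay = min(max, base * 2^attempt), then optionally drawn uniformly
// from [0, delay] to spread retries out.
class BackoffSketch {
    static long delayMillis(int attempt, long baseMillis, long maxMillis, boolean jitter) {
        long delay = Math.min(maxMillis, baseMillis << Math.min(attempt, 30));
        return jitter ? ThreadLocalRandom.current().nextLong(delay + 1) : delay;
    }
}
```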
synapse-analytics | Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/known-issues.md | To learn more about Azure Synapse Analytics, see the [Azure Synapse Analytics Ov |Azure Synapse Workspace|[Known issue incorporating square brackets [] in the value of Tags](#known-issue-incorporating-square-brackets--in-the-value-of-tags)|Has workaround| |Azure Synapse Workspace|[Deployment Failures in Synapse Workspace using Synapse-workspace-deployment v1.8.0 in GitHub actions with ARM templates](#deployment-failures-in-synapse-workspace-using-synapse-workspace-deployment-v180-in-github-actions-with-arm-templates)|Has workaround| |Azure Synapse Workspace|[No `GET` API operation dedicated to the `Microsoft.Synapse/workspaces/trustedServiceBypassEnabled` setting](#no-get-api-operation-dedicated-to-the-microsoftsynapseworkspacestrustedservicebypassenabled-setting)|Has workaround|+|Azure Synapse Apache Spark pool|[Query failure with a LIKE clause using Synapse Dedicated SQL Pool Connector in Spark 3.4 runtime](#query-failure-with-a-like-clause-using-synapse-dedicated-sql-pool-connector-in-spark-34-runtime)|Has workaround| Suggested workarounds are: - Switch to Managed Identity storage authorization as described in the [storage access control](sql/develop-storage-files-storage-access-control.md?tabs=managed-identity). - Decrease the number of security groups (90 or fewer security groups results in a token of compatible length). - Increase the number of security groups to over 200 (that changes how the token is constructed; it will contain an MS Graph API URI instead of a full list of groups). 
This can be achieved by adding dummy/artificial groups by following [managed groups](sql/develop-storage-files-storage-access-control.md?tabs=managed-identity), after which you would need to add users to the newly created groups.- ++## Azure Synapse Analytics Apache Spark pool active known issues summary ++The following are known issues with Synapse Spark. ++### Query failure with a LIKE clause using Synapse Dedicated SQL Pool Connector in Spark 3.4 runtime ++Open source Apache Spark 3.4 introduced an [issue](https://issues.apache.org/jir) that can generate an invalid SQL query for Synapse SQL, causing the Synapse Spark notebook or batch job to throw an error similar to: ++`com.microsoft.spark.sqlanalytics.SQLAnalyticsConnectorException: com.microsoft.sqlserver.jdbc.SQLServerException: Parse error at line: 1, column: XXX: Incorrect syntax near ''%test%''` ++**Workaround**: The engineering team is aware of this behavior and is working on a fix. If you encounter a similar error, engage the Microsoft support team for assistance and a temporary workaround. ++ ## Recently closed known issues |Synapse Component|Issue|Status|Date Resolved| |
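The `''%test%''` fragment in the error above is characteristic of a string literal that was quoted twice: T-SQL escapes an embedded single quote by doubling it, so wrapping an already-quoted literal in quotes again produces exactly that shape. A small sketch of the quoting rule (illustrative only — this is not the connector's code):

```java
// Sketch of T-SQL string-literal quoting: wrap the value in single
// quotes and double any embedded quotes. Applying the wrap to a value
// that is already a quoted literal yields malformed syntax like ''%test%''.
class QuoteSketch {
    static String quote(String value) {
        return "'" + value.replace("'", "''") + "'";
    }
}
```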
synapse-analytics | Apache Spark 24 Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-24-runtime.md | Last updated 04/18/2022 -# Azure Synapse Runtime for Apache Spark 2.4 (deprecated) +# Azure Synapse Runtime for Apache Spark 2.4 (disabled) Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 2.4. > [!CAUTION]-> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 2.4 -> * Effective August 15, 2024, **disablement** of jobs running on Azure Synapse Runtime for Apache Spark 2.4 will be executed. **Immediately** migrate to higher runtime versions otherwise your jobs will stop executing. -> * **All Spark jobs running on Azure Synapse Runtime for Apache Spark 2.4 will be disabled as of August 15, 2024.** -* Effective September 29, 2023, Azure Synapse will discontinue official support for Spark 2.4 Runtimes. -* Post September 29, we will not be addressing any support tickets related to Spark 2.4. There will be no release pipeline in place for bug or security fixes for Spark 2.4. Utilizing Spark 2.4 post the support cutoff date is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns. -* Recognizing that certain customers may need additional time to transition to a higher runtime version, we are temporarily extending the usage option for Spark 2.4, but we will not provide any official support for it. 
-* **We strongly advise proactively upgrading workloads to a more recent version of the runtime (e.g., [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md)).** +> Disablement notification for Azure Synapse Runtime for Apache Spark 2.4 +> * Effective August 23, 2024, __Azure Synapse Runtime for Apache Spark 2.4 has been disabled.__ +* Azure Synapse Runtime for Apache Spark 2.4 has been replaced with a **more recent version of the runtime ([Azure Synapse Runtime for Apache Spark 3.4 (GA)](/azure/synapse-analytics/spark/apache-spark-34-runtime))**. +* On July 29, 2022, we announced that Azure Synapse Runtime for Apache Spark 2.4 would become deprecated as of September 29, 2023. +* Post September 29, 2023, support ended. Using Spark 2.4 past the support cutoff date is at your own risk. ++* Announcements about the 2.4 deprecation were also added within the product. The lifecycle for Synapse Spark states that for deprecated runtimes, ["Spark Pool definitions and associated metadata will remain in the Synapse workspace for a defined period after the applicable End of Support date. However, all pipelines, jobs, and notebooks will no longer be able to execute."](/azure/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability) ++For migration guidance, see the [Upgrade to Azure Synapse runtimes for Apache Spark 3.4 & previous runtimes deprecation - Microsoft Community Hub](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/upgrade-to-azure-synapse-runtimes-for-apache-spark-3-4-amp/ba-p/4177758) blog post. ## Component versions+ | Component | Version | | -- | -- | | Apache Spark | 2.4.8 | |
update-manager | Tutorial Webhooks Using Runbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/tutorial-webhooks-using-runbooks.md | In this tutorial, you learn how to: #### [Start VMs](#tab/script-on) -``` +```powershell param ( [Parameter(Mandatory=$false)] foreach($id in $jobsList) #### [Stop VMs](#tab/script-off) -``` +```powershell param ( foreach($id in $jobsList) ``` #### [Cancel a schedule](#tab/script-cancel)-``` +```powershell Invoke-AzRestMethod ` -Path "<Correlation ID from EventGrid Payload>?api-version=2023-09-01-preview" ` -Payload |
virtual-desktop | Add Session Hosts Host Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/add-session-hosts-host-pool.md | For a general idea of what's required, such as supported operating systems, virt - At least one Windows OS image available on the cluster. For more information, see how to [create VM images by using Azure Marketplace images](/azure-stack/hci/manage/virtual-machine-image-azure-marketplace), [use images in an Azure Storage account](/azure-stack/hci/manage/virtual-machine-image-storage-account), and [use images in a local share](/azure-stack/hci/manage/virtual-machine-image-local-share). - - The [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) on Azure Stack HCI VMs created outside the Azure Virtual Desktop service, such as with an automated pipeline. The virtual machines use the agent to communicate with [Azure Instance Metadata Service](../virtual-machines/instance-metadata-service.md), which is a [required endpoint for Azure Virtual Desktop](../virtual-desktop/required-fqdn-endpoint.md). + - The [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) on Azure Stack HCI VMs created outside the Azure Virtual Desktop service, such as with an automated pipeline. The virtual machines use the agent to communicate with [Azure Instance Metadata Service](/azure/virtual-machines/instance-metadata-service), which is a [required endpoint for Azure Virtual Desktop](../virtual-desktop/required-fqdn-endpoint.md). - A logical network that you created on your Azure Stack HCI cluster. DHCP logical networks or static logical networks with automatic IP allocation are supported. For more information, see [Create logical networks for Azure Stack HCI](/azure-stack/hci/manage/create-logical-networks). |
vpn-gateway | Site To Site Vpn Private Peering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/site-to-site-vpn-private-peering.md | description: Learn how to configure site-to-site VPN connections over ExpressRou Previously updated : 07/28/2023 Last updated : 08/22/2024 This feature is only available for standard-IP based gateways. To complete this configuration, verify that you meet the following prerequisites: -* You have a functioning ExpressRoute circuit that is linked to the VNet where the VPN gateway is (or will be) created. +* You have a functioning ExpressRoute circuit that is linked to the virtual network where the VPN gateway is (or will be) created. -* You can reach resources over RFC1918 (private) IP in the VNet over the ExpressRoute circuit. +* You can reach resources over RFC1918 (private) IP in the virtual network over the ExpressRoute circuit. ## <a name="routing"></a>Routing Establishing connectivity is straightforward: ### Traffic from on-premises networks to Azure -For traffic from on-premises networks to Azure, the Azure prefixes are advertised via both the ExpressRoute private peering BGP, and the VPN BGP if BGP is configured on your VPN Gateway. The result is two network routes (paths) toward Azure from the on-premises networks: +For traffic from on-premises networks to Azure, the Azure prefixes are advertised via both the ExpressRoute private peering BGP, and the VPN BGP if BGP is configured on your VPN gateway. The result is two network routes (paths) toward Azure from the on-premises networks: • One network route over the IPsec-protected path. The same requirement applies to the traffic from Azure to on-premises networks. In both of these examples, Azure will send traffic to 10.0.1.0/24 over the VPN connection rather than directly over ExpressRoute without VPN protection. 
->[!Warning] ->If you advertise the same prefixes over both ExpressRoute and VPN connections, >Azure will use the ExpressRoute path directly without VPN protection. -> +> [!WARNING] +> If you advertise the same prefixes over both ExpressRoute and VPN connections, Azure will use the ExpressRoute path directly without VPN protection. ## <a name="portal"></a>Portal steps -1. Configure a Site-to-Site connection. For steps, see the [Site-to-site configuration](./tutorial-site-to-site-portal.md) article. Be sure to pick a gateway with a Standard Public IP. -- :::image type="content" source="media/site-to-site-vpn-private-peering/gateway.png" alt-text="Gateway Private IPs"::: +1. Configure a Site-to-Site connection. For steps, see the [Site-to-site configuration](./tutorial-site-to-site-portal.md) article. Be sure to pick a gateway with a Standard Public IP. 1. Enable Private IPs on the gateway. Select **Configuration**, then set **Gateway Private IPs** to **Enabled**. Select **Save** to save your changes.-1. On the **Overview** page, select **See More** to view the private IP address. Write down this information to use later in the configuration steps. +1. On the **Overview** page, select **See More** to view the private IP address. Write down this information to use later in the configuration steps. If you have an active-active mode VPN gateway, you'll see two private IP addresses. - :::image type="content" source="media/site-to-site-vpn-private-peering/gateway-overview.png" alt-text="Overview page" lightbox="media/site-to-site-vpn-private-peering/gateway-overview.png"::: -1. To enable **Use Azure Private IP Address** on the connection, select **Configuration**. Set **Use Azure Private IP Address** to **Enabled**, then select **Save**. + :::image type="content" source="media/site-to-site-vpn-private-peering/see-more.png" alt-text="Screenshot of the Overview page with See More selected." lightbox="media/site-to-site-vpn-private-peering/see-more.png"::: +1. 
To enable **Use Azure Private IP Address** on the connection, go to the **Configuration** page. Set **Use Azure Private IP Address** to **Enabled**, then select **Save**. - :::image type="content" source="media/site-to-site-vpn-private-peering/connection.png" alt-text="Gateway Private IPs - Enabled"::: -1. Use the private IP that you wrote down in step 3 as the remote IP on your on-premises firewall to establish the Site-to-Site tunnel over the ExpressRoute private peering. +1. Use the private IP address that you wrote down in step 3 as the remote IP on your on-premises firewall to establish the Site-to-Site tunnel over the ExpressRoute private peering. - >[!NOTE] - > Configurig BGP on your VPN Gateway is not required to achieve a VPN connection over ExpressRoute private peering. - > + > [!NOTE] + > Configuring BGP on your VPN gateway is not required to achieve a VPN connection over ExpressRoute private peering. ## <a name="powershell"></a>PowerShell steps In both of these examples, Azure will send traffic to 10.0.1.0/24 over the VPN c ``` You should see a public and a private IP address. Write down the IP address under the "TunnelIpAddresses" section of the output. You'll use this information in a later step.+ 1. Set the connection to use the private IP address by using the following PowerShell command: ```azurepowershell-interactive In both of these examples, Azure will send traffic to 10.0.1.0/24 over the VPN c Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $Connection -UseLocalAzureIpAddress $true ```+ 1. From your firewall, ping the private IP that you wrote down in step 2. It should be reachable over the ExpressRoute private peering. 1. Use this private IP as the remote IP on your on-premises firewall to establish the Site-to-Site tunnel over the ExpressRoute private peering. |
vpn-gateway | Vpn Gateway Classic Resource Manager Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-classic-resource-manager-migration.md | VPN gateways can now be migrated from the classic deployment model to [Resource > [!IMPORTANT] > [!INCLUDE [classic gateway restrictions](../../includes/vpn-gateway-classic-gateway-restrict-create.md)] -VPN gateways are migrated as part of VNet migration from classic to Resource Manager. This migration is done one VNet at a time. There aren't additional requirements in terms of tools or prerequisites to migrate. Migration steps are identical to the existing VNet migration and are documented at [IaaS resources migration page](../virtual-machines/migration-classic-resource-manager-ps.md). +VPN gateways are migrated as part of VNet migration from classic to Resource Manager. This migration is done one VNet at a time. There aren't additional requirements in terms of tools or prerequisites to migrate. Migration steps are identical to the existing VNet migration and are documented at [IaaS resources migration page](/azure/virtual-machines/migration-classic-resource-manager-ps). There isn't a data path downtime during migration and thus existing workloads continue to function without the loss of on-premises connectivity during migration. The public IP address associated with the VPN gateway doesn't change during the migration process. This implies that you won't need to reconfigure your on-premises router once the migration is completed. Since we transform VNet-to-VNet connectivity without requiring local sites, the ## Next steps -After learning about VPN gateway migration support, go to [platform-supported migration of IaaS resources from classic to Resource Manager](../virtual-machines/migration-classic-resource-manager-ps.md) to get started. 
+After learning about VPN gateway migration support, go to [platform-supported migration of IaaS resources from classic to Resource Manager](/azure/virtual-machines/migration-classic-resource-manager-ps) to get started. |
vpn-gateway | Vpn Gateway Highlyavailable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-highlyavailable.md | Here you create and set up the Azure VPN gateway in an active-active configurati All gateways and tunnels are active from the Azure side, so the traffic is spread among all 4 tunnels simultaneously, although each TCP or UDP flow will again follow the same tunnel or path from the Azure side. Even though by spreading the traffic, you may see slightly better throughput over the IPsec tunnels, the primary goal of this configuration is for high availability. And due to the statistical nature of the spreading, it's difficult to provide the measurement on how different application traffic conditions will affect the aggregate throughput. -This topology requires two local network gateways and two connections to support the pair of on-premises VPN devices, and BGP is required to allow the two connections to the same on-premises network. These requirements are the same as the [above](#activeactiveonprem). +This topology requires two local network gateways and two connections to support the pair of on-premises VPN devices, and BGP is required to allow simultaneous connectivity on the two connections to the same on-premises network. These requirements are the same as the [above](#activeactiveonprem). ## Highly Available VNet-to-VNet |
vpn-gateway | Vpn Gateway Howto Point To Site Resource Manager Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md | -This article helps you configure the necessary VPN Gateway point-to-site (P2S) server settings to let you securely connect individual clients running Windows, Linux, or macOS to an Azure virtual network (VNet). P2S VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a site-to-site (S2S) VPN when you have only a few clients that need to connect to a virtual network (VNet). P2S connections don't require a VPN device or a public-facing IP address. +This article helps you configure the necessary VPN Gateway point-to-site (P2S) server settings to let you securely connect individual clients running Windows, Linux, or macOS to an Azure virtual network (VNet). P2S VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a site-to-site (S2S) VPN when you have only a few clients that need to connect to a virtual network (VNet). ++P2S connections don't require a VPN device or a public-facing IP address. There are various configuration options available for P2S. For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md). :::image type="content" source="./media/vpn-gateway-howto-point-to-site-rm-ps/point-to-site-diagram.png" alt-text="Diagram of point-to-site connection showing how to connect from a computer to an Azure VNet." lightbox="./media/vpn-gateway-howto-point-to-site-rm-ps/point-to-site-diagram.png"::: -There are various different configuration options available for P2S. 
For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md). This article helps you create a P2S configuration that uses **certificate authentication** and the Azure portal. To create this configuration using the Azure PowerShell, see the [Configure P2S - Certificate - PowerShell](vpn-gateway-howto-point-to-site-rm-ps.md) article. For RADIUS authentication, see the [P2S RADIUS](point-to-site-how-to-radius-ps.md) article. For Microsoft Entra authentication, see the [P2S Microsoft Entra ID](openvpn-azure-ad-tenant.md) article. +The steps in this article create a P2S configuration that uses **certificate authentication** and the Azure portal. To create this configuration using Azure PowerShell, see the [Configure P2S - Certificate - PowerShell](vpn-gateway-howto-point-to-site-rm-ps.md) article. For RADIUS authentication, see the [P2S RADIUS](point-to-site-how-to-radius-ps.md) article. For Microsoft Entra authentication, see the [P2S Microsoft Entra ID](openvpn-azure-ad-tenant.md) article. [!INCLUDE [P2S basic architecture](../../includes/vpn-gateway-p2s-architecture.md)] As you can tell, planning the tunnel type and authentication type is important w * The Azure VPN Client for Linux supports the OpenVPN tunnel type. * The strongSwan client on Android and Linux can use only the IKEv2 tunnel type to connect. -### <a name="tunneltype"></a>Tunnel type +### Tunnel and authentication type + -On the **Point-to-site configuration** page, select the **Tunnel type**. For this exercise, from the dropdown, select **IKEv2 and OpenVPN(SSL)**. +1. For **Tunnel type**, select the tunnel type that you want to use. For this exercise, from the dropdown, select **IKEv2 and OpenVPN(SSL)**. +1. For **Authentication type**, select the authentication type that you want to use. For this exercise, from the dropdown, select **Azure certificate**. 
If you're interested in other authentication types, see the articles for [Microsoft Entra ID](openvpn-azure-ad-tenant.md) and [RADIUS](point-to-site-how-to-radius-ps.md). -### <a name="authenticationtype"></a>Authentication type +## <a name="publicip3"></a>Additional IP address -For this exercise, select **Azure certificate** for the authentication type. If you're interested in other authentication types, see the articles for [Microsoft Entra ID](openvpn-azure-ad-tenant.md) and [RADIUS](point-to-site-how-to-radius-ps.md). +If you have an active-active mode gateway that uses an availability zone SKU (AZ SKU), you need a third public IP address. If this setting doesn't apply to your gateway, you don't need to add an additional IP address. ## <a name="uploadfile"></a>Upload root certificate public key information In this section, you upload public root certificate data to Azure. Once the publ 1. Make sure that you exported the root certificate as a **Base-64 encoded X.509 (.CER)** file in the previous steps. You need to export the certificate in this format so you can open the certificate with a text editor. You don't need to export the private key. -1. Open the certificate with a text editor, such as Notepad. When copying the certificate data, make sure that you copy the text as one continuous line: +1. Open the certificate with a text editor, such as Notepad. When copying the certificate data, make sure that you copy the text as one continuous line: - :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/notepad-root-cert.png" alt-text="Screenshot showing root certificate information in Notepad." 
lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/notepad-root-cert-expand.png"::: -1. Navigate to your **Virtual network gateway -> Point-to-site configuration** page in the **Root certificate** section. This section is only visible if you have selected **Azure certificate** for the authentication type. + :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/certificate.png" alt-text="Screenshot of data in the certificate." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/certificate-expand.png"::: +1. Go to your **Virtual network gateway -> Point-to-site configuration** page in the **Root certificate** section. This section is only visible if you have selected **Azure certificate** for the authentication type. 1. In the **Root certificate** section, you can add up to 20 trusted root certificates. * Paste the certificate data into the **Public certificate data** field. * **Name** the certificate. - :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/root-certificate.png" alt-text="Screenshot of certificate data field." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/root-certificate.png"::: + :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/public-certificate-data.png" alt-text="Screenshot of certificate data field." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/public-certificate-data.png"::: + 1. Additional routes aren't necessary for this exercise. For more information about the custom routing feature, see [Advertise custom routes](vpn-gateway-p2s-advertise-custom-routes.md). 1. Select **Save** at the top of the page to save all of the configuration settings. 
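The "copy the text as one continuous line" requirement in the root-certificate steps above can also be handled with a short script instead of a text editor; a minimal Python sketch (the function name and the placeholder certificate data are illustrative, not from the article):

```python
def cert_to_one_line(pem_text: str) -> str:
    """Strip the BEGIN/END CERTIFICATE markers and join the Base-64 body
    into one continuous line, ready to paste into the portal's
    'Public certificate data' field."""
    lines = (line.strip() for line in pem_text.splitlines())
    body = [line for line in lines if line and "CERTIFICATE" not in line]
    return "".join(body)

# Placeholder data in PEM layout (not a real certificate).
pem = """-----BEGIN CERTIFICATE-----
MIIC5zCCAc+gAwIBAgIQ
ExampleBase64DataOnly
-----END CERTIFICATE-----"""
print(cert_to_one_line(pem))  # MIIC5zCCAc+gAwIBAgIQExampleBase64DataOnly
```

Reading the exported `.cer` file into `pem_text` and pasting the result sidesteps the hidden carriage returns and line feeds the article warns about.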
- :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/save-configuration.png" alt-text="Screenshot of P2S configuration with Save selected." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/save-configuration.png" ::: - ## <a name="profile-files"></a>Generate VPN client profile configuration files All the necessary configuration settings for the VPN clients are contained in a VPN client profile configuration zip file. VPN client profile configuration files are specific to the P2S VPN gateway configuration for the VNet. If there are any changes to the P2S VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client profile configuration files and apply the new configuration to all of the VPN clients that you want to connect. For more information about P2S connections, see [About point-to-site VPN](point-to-site-about.md). |
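The VPN client profile configuration zip file described above can be inspected before you distribute it to clients; a minimal Python sketch (the package file name is a hypothetical example, and the actual members depend on the gateway's tunnel and authentication settings):

```python
import zipfile

def list_profile_contents(package_path: str) -> list[str]:
    """Return the sorted member names inside a VPN client profile package,
    which is a standard zip archive containing per-platform folders."""
    with zipfile.ZipFile(package_path) as zf:
        return sorted(zf.namelist())

# Hypothetical download name; the real file comes from the gateway's
# "Download VPN client" action in the portal.
# list_profile_contents("vpnclientconfiguration.zip")
```

Because the article notes that any change to the P2S configuration invalidates the generated files, a quick listing like this is a convenient way to confirm which client folders a freshly generated package contains.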