Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md | Title: Enable Azure AD B2C custom domains + Title: Enable custom domains in Azure Active Directory B2C -description: Learn how to enable custom domains in your redirect URLs for Azure Active Directory B2C. +description: Learn how to enable custom domains in your redirect URLs for Azure Active Directory B2C, so that your users have a seamless experience. Previously updated : 01/26/2024 Last updated : 02/14/2024 zone_pivot_groups: b2c-policy-type #Customer intent: As a developer, I want to use my own domain name for the sign-in and sign-up experience, so that my users have a seamless experience. -# Enable custom domains for Azure Active Directory B2C +# Enable custom domains in Azure Active Directory B2C [!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] When using custom domains, consider the following: - You can set up multiple custom domains. For the maximum number of supported custom domains, see [Microsoft Entra service limits and restrictions](/entra/identity/users/directory-service-limits-restrictions) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-front-door-classic-limits) for Azure Front Door. - Azure Front Door is a separate Azure service, so extra charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor).-- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com* (unless you're using a custom policy and you [block access](#optional-block-access-to-the-default-domain-name).-- If you have multiple applications, migrate them all to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.+- If you've multiple applications, migrate all of them to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used. +- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com*. You need to block access to the default domain so that attackers can't use it to access your apps or run distributed denial-of-service (DDoS) attacks. [Submit a support ticket](find-help-open-support-ticket.md) to request blocking of access to the default domain. ++> [!WARNING] +> Don't request blocking of the default domain until your custom domain works properly. + ## Prerequisites |
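The redirect URL change described in this entry amounts to swapping the `<tenant-name>.b2clogin.com` host for the custom domain in the B2C authority your app uses. The following is a minimal, hypothetical sketch assuming MSAL.NET, a tenant named `contoso`, a custom domain `login.contoso.com`, and an illustrative user flow `B2C_1_susi`; none of these names come from the commit above.

```csharp
using Microsoft.Identity.Client;

// Authority using the default Azure AD B2C domain.
string defaultAuthority =
    "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_susi";

// Equivalent authority after the custom domain login.contoso.com is enabled.
string customDomainAuthority =
    "https://login.contoso.com/contoso.onmicrosoft.com/B2C_1_susi";

IPublicClientApplication app = PublicClientApplicationBuilder
    .Create("<application-client-id>")          // placeholder app registration ID
    .WithB2CAuthority(customDomainAuthority)    // point MSAL at the custom domain
    .WithRedirectUri("http://localhost")
    .Build();
```

Moving every application to the same authority host keeps the B2C session cookie on one domain, which is why the guidance above recommends migrating all applications together.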
ai-services | Build Enrollment App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/build-enrollment-app.md | -This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and acquire high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identification, attendance tracking, or personalization kiosk, based on their face data. +This guide will show you how to get started with a sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and acquire high-quality face data. An integrated system could use an app like this to provide touchless access control, identification, attendance tracking, or personalization kiosk, based on their face data. -When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera. +When users launch the app, it shows a detailed consent screen. If the user gives consent, the app prompts them for a username and password and then captures a high-quality face image using the device's camera. -The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android and iOS devices; more deployment options are coming in the future. +The sample app is written using JavaScript and the React Native framework. It can be deployed on Android and iOS devices. ## Prerequisites The sample app is written using JavaScript and the React Native framework. It ca * Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**. * You'll need the key and endpoint from the resource you created to connect your application to Face API. -### Important Security Considerations -* For local development and initial limited testing, it is acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely - which likely involves using an intermediate service to validate a user token generated during login. -* Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones. -* As a best practice, consider having separate API keys for development and production. ++> [!IMPORTANT] +> **Security considerations** +> +> * For local development and initial limited testing, it is acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely - which likely involves using an intermediate service to validate a user token generated during login. +> * Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones. +> * As a best practice, consider having separate API keys for development and production. 
## Set up the development environment The sample app is written using JavaScript and the React Native framework. It ca ## Customize the app for your business -Now that you have set up the sample app, you can tailor it to your own needs. +Now that you've set up the sample app, you can tailor it to your own needs. For example, you may want to add situation-specific information on your consent page: For example, you may want to add situation-specific information on your consent * Face size (faces that are distant from the camera) * Face orientation (faces turned or tilted away from camera) * Poor lighting conditions (either low light or backlighting) where the image may be poorly exposed or have too much noise- * Occlusion (partially hidden or obstructed faces) including accessories like hats or thick-rimmed glasses) + * Occlusion (partially hidden or obstructed faces), including accessories like hats or thick-rimmed glasses * Blur (such as by rapid face movement when the photograph was taken). The service provides image quality checks to help you make the choice of whether the image is of sufficient quality based on the above factors to add the customer or attempt face recognition. This app demonstrates how to access frames from the device's camera, detect quality and show user interface messages to the user to help them capture a higher quality image, select the highest-quality frames, and add the detected face into the Face API service. |
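The enrollment sample itself is JavaScript/React Native, but the security guidance in this entry is language-agnostic. Here is a minimal C# sketch of the "environment variables for local development only" pattern; the variable names `FACE_ENDPOINT` and `FACE_APIKEY` are illustrative and not from the article.

```csharp
using System;
using Microsoft.Azure.CognitiveServices.Vision.Face;

// For local development only: read the key and endpoint from environment variables
// instead of hard-coding them or committing them to source control.
string endpoint = Environment.GetEnvironmentVariable("FACE_ENDPOINT")
    ?? throw new InvalidOperationException("Set the FACE_ENDPOINT environment variable.");
string key = Environment.GetEnvironmentVariable("FACE_APIKEY")
    ?? throw new InvalidOperationException("Set the FACE_APIKEY environment variable.");

IFaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials(key))
{
    Endpoint = endpoint
};
```

For pilot and production deployments, swap this for a token exchange against your own backend or a managed secret store, as the considerations above recommend.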
ai-services | Storage Lab Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/storage-lab-tutorial.md | Next, you'll add the code that actually uses the Azure AI Vision service to crea 1. Open the *HomeController.cs* file in the project's **Controllers** folder and add the following `using` statements at the top of the file: ```csharp- using Azure.AI.Vision.Common; + using Azure; using Azure.AI.Vision.ImageAnalysis;+ using System; ``` 1. Then, go to the **Upload** method; this method converts and uploads images to blob storage. Add the following code immediately after the block that begins with `// Generate a thumbnail` (or at the end of your image-blob-creation process). This code takes the blob containing the image (`photo`), and uses Azure AI Vision to generate a description for that image. The Azure AI Vision API also generates a list of keywords that apply to the image. The generated description and keywords are stored in the blob's metadata so that they can be retrieved later on. ```csharp- // Submit the image to the Azure AI Vision API - var serviceOptions = new VisionServiceOptions( - Environment.GetEnvironmentVariable(ConfigurationManager.AppSettings["VisionEndpoint"]), + // create a new ImageAnalysisClient + ImageAnalysisClient client = new ImageAnalysisClient( + new Uri(Environment.GetEnvironmentVariable(ConfigurationManager.AppSettings["VisionEndpoint"])), new AzureKeyCredential(ConfigurationManager.AppSettings["SubscriptionKey"])); - var analysisOptions = new ImageAnalysisOptions() - { - Features = ImageAnalysisFeature.Caption | ImageAnalysisFeature.Tags, - Language = "en", - GenderNeutralCaption = true - }; + VisualFeatures visualFeatures = VisualFeatures.Caption | VisualFeatures.Tags; - using var imageSource = VisionSource.FromUrl( - new Uri(photo.Uri.ToString())); + ImageAnalysisOptions analysisOptions = new ImageAnalysisOptions() + { + GenderNeutralCaption = true, + Language = "en", + }; - using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions); - var result = analyzer.Analyze(); + Uri imageURL = new Uri(photo.Uri.ToString()); + + ImageAnalysisResult result = client.Analyze(imageURL,visualFeatures,analysisOptions); // Record the image description and tags in blob metadata- photo.Metadata.Add("Caption", result.Caption.ContentCaption.Content); + photo.Metadata.Add("Caption", result.Caption.Text); - for (int i = 0; i < result.Tags.ContentTags.Count; i++) + for (int i = 0; i < result.Tags.Values.Count; i++) { string key = String.Format("Tag{0}", i);- photo.Metadata.Add(key, result.Tags.ContentTags[i]); + photo.Metadata.Add(key, result.Tags.Values[i].Name); } await photo.SetMetadataAsync(); In this section, you will add a search box to the home page, enabling users to d } ``` - Observe that the **Index** method now accepts a parameter _id_ that contains the value the user typed into the search box. An empty or missing _id_ parameter indicates that all the photos should be displayed. + Observe that the **Index** method now accepts a parameter `id` that contains the value the user typed into the search box. An empty or missing `id` parameter indicates that all the photos should be displayed. 1. Add the following helper method to the **HomeController** class: |
ai-services | Concept Face Detection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-face-detection.md | Attributes are a set of features that can optionally be detected by the [Face - >[!NOTE] > The availability of each attribute depends on the detection model specified. QualityForRecognition attribute also depends on the recognition model, as it is currently only available when using a combination of detection model detection_01 or detection_03, and recognition model recognition_03 or recognition_04. -## Input data +## Input requirements Use the following tips to make sure that your input images give the most accurate detection results: |
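Because `QualityForRecognition` is only returned for detection_01 or detection_03 combined with recognition_03 or recognition_04, a detect call has to request a compatible model pair explicitly. The following is a rough REST sketch using `HttpClient`; the endpoint, key, and image URL are placeholders rather than values from the article.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

string endpoint = "https://<your-face-resource>.cognitiveservices.azure.com";
string key = "<your-face-key>";

// Request qualityForRecognition with a model pair that supports it.
string requestUri = endpoint + "/face/v1.0/detect" +
    "?detectionModel=detection_03" +
    "&recognitionModel=recognition_04" +
    "&returnFaceAttributes=qualityForRecognition";

using HttpClient http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

using StringContent body = new StringContent(
    "{\"url\":\"<image-url>\"}", Encoding.UTF8, "application/json");
HttpResponseMessage response = await http.PostAsync(requestUri, body);
string json = await response.Content.ReadAsStringAsync();   // one JSON object per detected face
```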
ai-services | Concept Face Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-face-recognition.md | -This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the act of verifying or identifying individuals by their faces. Face recognition is important in implementing the identification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be. +This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the process of verifying or identifying individuals by their faces. Face recognition is important in implementing the identification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be. You can try out the capabilities of face recognition quickly and easily using Vision Studio. > [!div class="nextstepaction"] The recognition operations use mainly the following data structures. These objec See the [Face recognition data structures](./concept-face-recognition-data-structures.md) guide. -## Input data +## Input requirements Use the following tips to ensure that your input images give the most accurate recognition results: |
ai-services | Concept Shelf Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-shelf-analysis.md | Try out the capabilities of Product Recognition quickly and easily in your brows ## Product Recognition features -### Shelf Image Composition +### Shelf image composition The [stitching and rectification APIs](./how-to/shelf-modify-images.md) let you modify images to improve the accuracy of the Product Understanding results. You can use these APIs to: * Stitch together multiple images of a shelf to create a single image. * Rectify an image to remove perspective distortion. -### Shelf Product Recognition (pretrained model) +### Shelf product recognition (pretrained model) The [Product Understanding API](./how-to/shelf-analyze.md) lets you analyze a shelf image using the out-of-box pretrained model. This operation detects products and gaps in the shelf image and returns the bounding box coordinates of each product and gap, along with a confidence score for each. The following JSON response illustrates what the Product Understanding API retur } ``` -### Shelf Product Recognition - Custom (customized model) +### Shelf product recognition (customized model) The Product Understanding API can also be used with a [custom trained model](./how-to/shelf-model-customization.md) to detect your specific products. This operation returns the bounding box coordinates of each product and gap, along with the label of each product. The following JSON response illustrates what the Product Understanding API retur } ``` -### Shelf Planogram Compliance (preview) +### Shelf planogram compliance The [Planogram matching API](./how-to/shelf-planogram.md) lets you compare the results of the Product Understanding API to a planogram document. This operation matches each detected product and gap to its corresponding position in the planogram document. It returns a JSON response that accounts for each position in the planogram docu Get started with Product Recognition by trying out the stitching and rectification APIs. Then do basic analysis with the Product Understanding API. * [Prepare images for Product Recognition](./how-to/shelf-modify-images.md) * [Analyze a shelf image](./how-to/shelf-analyze.md)+* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0b) |
ai-services | Enrollment Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/enrollment-overview.md | -In order to use the Azure AI Face API for face verification or identification, you need to enroll faces into a **LargePersonGroup** or similar data structure. This deep-dive demonstrates best practices for gathering meaningful consent from users and example logic to create high-quality enrollments that will optimize recognition accuracy. +In order to use the Azure AI Face API for face verification or identification, you need to enroll faces into a **LargePersonGroup** or similar [data structure](/azure/ai-services/computer-vision/concept-face-recognition-data-structures). This deep-dive demonstrates best practices for gathering meaningful consent from users and example logic to create high-quality enrollments that will optimize recognition accuracy. ## Meaningful consent Based on Microsoft user research, Microsoft's Responsible AI principles, and [ex This section offers guidance for developing an enrollment application for facial recognition. This guidance has been developed based on Microsoft user research in the context of enrolling individuals in facial recognition for building entry. Therefore, these recommendations might not apply to all facial recognition solutions. Responsible use for Face API depends strongly on the specific context in which it's integrated, so the prioritization and application of these recommendations should be adapted to your scenario. -> [!NOTE] +> [!IMPORTANT] > It is your responsibility to align your enrollment application with applicable legal requirements in your jurisdiction and accurately reflect all of your data collection and processing practices. ## Application development Before you design an enrollment flow, think about how the application you're bui |Category | Recommendations | ||| |Hardware | Consider the camera quality of the enrollment device. |-|Recommended enrollment features | Include a log-on step with multi-factor authentication. </br></br>Link user information like an alias or identification number with their face template ID from the Face API (known as person ID). This mapping is necessary to retrieve and manage a user's enrollment. Note: person ID should be treated as a secret in the application.</br></br>Set up an automated process to delete all enrollment data, including the face templates and enrollment photos of people who are no longer users of facial recognition technology, such as former employees. </br></br>Avoid auto-enrollment, as it does not give the user the awareness, understanding, freedom of choice, or control that is recommended for obtaining consent. </br></br>Ask users for permission to save the images used for enrollment. This is useful when there is a model update since new enrollment photos will be required to re-enroll in the new model about every 10 months. If the original images aren't saved, users will need to go through the enrollment process from the beginning.</br></br>Allow users to opt out of storing photos in the system. To make the choice clearer, you can add a second consent request screen for saving the enrollment photos. </br></br>If photos are saved, create an automated process to re-enroll all users when there is a model update. Users who saved their enrollment photos will not have to enroll themselves again. </br></br>Create an app feature that allows designated administrators to override certain quality filters if a user has trouble enrolling. 
| +|Recommended enrollment features | Include a log-on step with multifactor authentication. </br></br>Link user information like an alias or identification number with their face template ID from the Face API (known as person ID). This mapping is necessary to retrieve and manage a user's enrollment. Note: person ID should be treated as a secret in the application.</br></br>Set up an automated process to delete all enrollment data, including the face templates and enrollment photos of people who are no longer users of facial recognition technology, such as former employees. </br></br>Avoid auto-enrollment, as it does not give the user the awareness, understanding, freedom of choice, or control that is recommended for obtaining consent. </br></br>Ask users for permission to save the images used for enrollment. This is useful when there is a model update since new enrollment photos will be required to re-enroll in the new model about every 10 months. If the original images aren't saved, users will need to go through the enrollment process from the beginning.</br></br>Allow users to opt out of storing photos in the system. To make the choice clearer, you can add a second consent request screen for saving the enrollment photos. </br></br>If photos are saved, create an automated process to re-enroll all users when there is a model update. Users who saved their enrollment photos will not have to enroll themselves again. </br></br>Create an app feature that allows designated administrators to override certain quality filters if a user has trouble enrolling. | |Security | Azure AI services follow [best practices](../cognitive-services-virtual-networks.md?tabs=portal) for encrypting user data at rest and in transit. The following are other practices that can help uphold the security promises you make to users during the enrollment experience. </br></br>Take security measures to ensure that no one has access to the person ID at any point during enrollment. Note: PersonID should be treated as a secret in the enrollment system. </br></br>Use [role-based access control](../../role-based-access-control/overview.md) with Azure AI services. </br></br>Use token-based authentication and/or shared access signatures (SAS) over keys and secrets to access resources like databases. By using request or SAS tokens, you can grant limited access to data without compromising your account keys, and you can specify an expiry time on the token. </br></br>Never store any secrets, keys, or passwords in your app. | |User privacy |Provide a range of enrollment options to address different levels of privacy concerns. Do not mandate that people use their personal devices to enroll into a facial recognition system. </br></br>Allow users to re-enroll, revoke consent, and delete data from the enrollment application at any time and for any reason. | |Accessibility |Follow accessibility standards (for example, [ADA](https://www.ada.gov/regs2010/2010ADAStandards/2010ADAstandards.htm) or [W3C](https://www.w3.org/TR/WCAG21/)) to ensure the application is usable by people with mobility or visual impairments. | |
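One of the recommendations above, an automated process that deletes enrollment data for former users, reduces to a single Face API call once your app has the person ID mapping. A hedged sketch with the .NET Face client library follows; it assumes a configured `IFaceClient`, and `largePersonGroupId` and `personId` stand for whatever identifiers your enrollment system stored.

```csharp
// Remove a departed user's enrolled faces and face templates from the group.
// personId is the secret mapping recorded at enrollment time.
await faceClient.LargePersonGroupPerson.DeleteAsync(largePersonGroupId, personId);

// If the user consented to photo storage, also delete those images from your own store here.
```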
ai-services | Add Faces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/add-faces.md | Title: "Example: Add faces to a PersonGroup - Face" description: This guide demonstrates how to add a large number of persons and faces to a PersonGroup object with the Azure AI Face service. #-+ Previously updated : 04/10/2019- Last updated : 02/14/2024+ ms.devlang: csharp -This guide demonstrates how to add a large number of persons and faces to a PersonGroup object. The same strategy also applies to LargePersonGroup, FaceList, and LargeFaceList objects. This sample is written in C# by using the Azure AI Face .NET client library. +This guide demonstrates how to add a large number of persons and faces to a **PersonGroup** object. The same strategy also applies to **LargePersonGroup**, **FaceList**, and **LargeFaceList** objects. This sample is written in C# and uses the Azure AI Face .NET client library. -## Step 1: Initialization +## Initialization -The following code declares several variables and implements a helper function to schedule the face add requests: +The following code declares several variables and implements a helper function to schedule the **face add** requests: - `PersonCount` is the total number of persons. - `CallLimitPerSecond` is the maximum calls per second according to the subscription tier. static async Task WaitCallLimitPerSecondAsync() } ``` -## Step 2: Authorize the API call +## Authorize the API call When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object. -## Step 3: Create the PersonGroup +## Create the PersonGroup -A PersonGroup named "MyPersonGroup" is created to save the persons. -The request time is enqueued to `_timeStampQueue` to ensure the overall validation. +This code creates a **PersonGroup** named `"MyPersonGroup"` to save the persons. The request time is enqueued to `_timeStampQueue` to ensure the overall validation. ```csharp const string personGroupId = "mypersongroupid"; _timeStampQueue.Enqueue(DateTime.UtcNow); await faceClient.LargePersonGroup.CreateAsync(personGroupId, personGroupName); ``` -## Step 4: Create the persons for the PersonGroup +## Create the persons for the PersonGroup -Persons are created concurrently, and `await WaitCallLimitPerSecondAsync()` is also applied to avoid exceeding the call limit. +This code creates **Persons** concurrently, and uses `await WaitCallLimitPerSecondAsync()` to avoid exceeding the call rate limit. ```csharp Person[] persons = new Person[PersonCount]; Parallel.For(0, PersonCount, async i => }); ``` -## Step 5: Add faces to the persons +## Add faces to the persons -Faces added to different persons are processed concurrently. Faces added for one specific person are processed sequentially. -Again, `await WaitCallLimitPerSecondAsync()` is invoked to ensure that the request frequency is within the scope of limitation. +Faces added to different persons are processed concurrently. Faces added for one specific person are processed sequentially. Again, `await WaitCallLimitPerSecondAsync()` is invoked to ensure that the request frequency is within the scope of limitation. 
```csharp Parallel.For(0, PersonCount, async i => Parallel.For(0, PersonCount, async i => In this guide, you learned the process of creating a PersonGroup with a massive number of persons and faces. Several reminders: -- This strategy also applies to FaceLists and LargePersonGroups.-- Adding or deleting faces to different FaceLists or persons in LargePersonGroups are processed concurrently.-- Adding or deleting faces to one specific FaceList or person in a LargePersonGroup are done sequentially.-- For simplicity, how to handle a potential exception is omitted in this guide. If you want to enhance more robustness, apply the proper retry policy.+- This strategy also applies to **FaceLists** and **LargePersonGroups**. +- Adding or deleting faces to different **FaceLists** or persons in **LargePersonGroups** are processed concurrently. +- Adding or deleting faces to one specific **FaceList** or person in a **LargePersonGroup** is done sequentially. -The following features were explained and demonstrated: --- Create PersonGroups by using the [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) API.-- Create persons by using the [PersonGroup Person - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) API.-- Add faces to persons by using the [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API. ## Next steps -In this guide, you learned how to add face data to a **PersonGroup**. Next, learn how to use the enhanced data structure **PersonDirectory** to do more with your face data. +Next, learn how to use the enhanced data structure **PersonDirectory** to do more with your face data. - [Use the PersonDirectory structure (preview)](use-persondirectory.md) |
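For the "add faces to the persons" step that this guide describes, the per-person loop is sequential while different persons run in parallel. A simplified sketch of the inner loop follows; the `faceImageUrls` collection is hypothetical, and the throttling helper is the `WaitCallLimitPerSecondAsync` function shown earlier in the guide.

```csharp
// Faces for a single person are added one at a time, throttled to the tier's call limit.
foreach (string faceImageUrl in faceImageUrls)
{
    await WaitCallLimitPerSecondAsync();
    await faceClient.LargePersonGroupPerson.AddFaceFromUrlAsync(
        personGroupId, persons[i].PersonId, faceImageUrl);
}
```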
ai-services | Find Similar Faces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/find-similar-faces.md | |
ai-services | Identity Detect Faces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-detect-faces.md | -This guide demonstrates how to use the face detection API to extract attributes like age, emotion, or head pose from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs. +This guide demonstrates how to use the face detection API to extract attributes from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs. The code snippets in this guide are written in C# by using the Azure AI Face client library. The same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236). |
ai-services | Mitigate Latency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/mitigate-latency.md | If the image files you use are large, it affects the response time of the Face s The quality of the input images affects both the accuracy and the latency of the Face service. Images with lower quality may result in erroneous results. Images of higher quality may enable more precise interpretations. However, images of higher quality also increase the network latency due to their larger file sizes. The service requires more time to receive the entire file from the client and to process it, in proportion to the file size. Above a certain level, further quality enhancements won't significantly improve the accuracy. To achieve the optimal balance between accuracy and speed, follow these tips to optimize your input data. -- For face detection and recognition operations, see [input data for face detection](../concept-face-detection.md#input-data) and [input data for face recognition](../concept-face-recognition.md#input-data).+- For face detection and recognition operations, see [input data for face detection](../concept-face-detection.md#input-requirements) and [input data for face recognition](../concept-face-recognition.md#input-requirements). - For liveness detection, see the [tutorial](../Tutorials/liveness.md#select-a-good-reference-image). #### Other file size tips |
ai-services | Shelf Analyze | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-analyze.md | In this guide, you learned how to make a basic analysis call using the pretraine > [Train a custom model for Product Recognition](../how-to/shelf-model-customization.md) * [Image Analysis overview](../overview-image-analysis.md)+* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0b) |
ai-services | Shelf Model Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-model-customization.md | -# Shelf Product Recognition - Custom Model (preview) +# Shelf product recognition - custom model (preview) You can train a custom model to recognize specific retail products for use in a Product Recognition scenario. The out-of-box [Analyze](shelf-analyze.md) operation doesn't differentiate between products, but you can build this capability into your app through custom labeling and training. When you go through the labeling workflow, create labels for each of the product ## Analyze shelves with a custom model -When your custom model is trained and ready (you've completed the steps in the [Model customization guide](./model-customization.md)), you can use it through the Shelf Analyze operation. Set the _PRODUCT_CLASSIFIER_MODEL_ URL parameter to the name of your custom model (the _ModelName_ value you used in the creation step). +When your custom model is trained and ready (you've completed the steps in the [Model customization guide](./model-customization.md)), you can use it through the Shelf Analyze operation. The API call will look like this: ```bash-curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/ms-pretrained-product-detection/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{ +curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/<your_model_name>/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{ 'url':'<your_url_string>' }" ``` curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: app 1. Make the following changes in the command where needed: 1. Replace the `<subscriptionKey>` with your Vision resource key. 1. Replace the `<endpoint>` with your Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.- 2. Replace the `<your_run_name>` with your unique test run name for the task queue. It is an async API task queue name for you to be able retrieve the API response later. For example, `.../runs/test1?api-version...` + 1. Replace `<your_model_name>` with the name of your custom model (the _ModelName_ value you used in the creation step). + 1. Replace the `<your_run_name>` with your unique test run name for the task queue. It is an async API task queue name for you to be able retrieve the API response later. For example, `.../runs/test1?api-version...` 1. Replace the `<your_url_string>` contents with the blob URL of the image 1. Open a command prompt window. 1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command. In this guide, you learned how to use a custom Product Recognition model to bett > [Planogram matching](shelf-planogram.md) * [Image Analysis overview](../overview-image-analysis.md)+* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0b) |
ai-services | Shelf Planogram | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-planogram.md | -# Shelf Planogram Compliance (preview) +# Shelf planogram compliance (preview) A planogram is a diagram that indicates the correct placement of retail products on shelves. The Planogram Compliance API lets you compare analysis results from a photo to the store's planogram input. It returns an account of all the positions in the planogram, and whether a product was found in each position. The X and Y coordinates are relative to a top-left origin, and the width and hei Quantities in the planogram schema are in nonspecific units. They can correspond to inches, centimeters, or any other unit of measurement. The matching algorithm calculates the relationship between the photo analysis units (pixels) and the planogram units. -### Planogram API Model +### Planogram API model Describes the planogram for planogram matching operations. Describes the planogram for planogram matching operations. | `fixtures` | [FixtureApiModel](#fixture-api-model) | List of fixtures in the planogram. | Yes | | `positions` | [PositionApiModel](#position-api-model)| List of positions in the planogram. | Yes | -### Product API Model +### Product API model Describes a product in the planogram. Describes a product in the planogram. | `w` | double | Width of the product. | Yes | | `h` | double | Height of the fixture. | Yes | -### Fixture API Model +### Fixture API model Describes a fixture (shelf or similar hardware) in a planogram. Describes a fixture (shelf or similar hardware) in a planogram. | `x` | double | Left offset from the origin, in units of in inches or centimeters. | Yes | | `y` | double | Top offset from the origin, in units of inches or centimeters. | Yes | -### Position API Model +### Position API model Describes a product's position in a planogram. A successful response is returned in JSON, showing the products (or gaps) detect } ``` -### Planogram Matching Position API Model +### Planogram matching position API model Paired planogram position ID and corresponding detected object from product understanding result. Paired planogram position ID and corresponding detected object from product unde ## Next steps * [Image Analysis overview](../overview-image-analysis.md)+* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0a) |
ai-services | Specify Detection Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-detection-model.md | Title: How to specify a detection model - Face description: This article will show you how to choose which face detection model to use with your Azure AI Face application. #-+ Previously updated : 03/05/2021- Last updated : 02/14/2024+ ms.devlang: csharp You should be familiar with the concept of AI face detection. If you aren't, see The different face detection models are optimized for different tasks. See the following table for an overview of the differences. -|**detection_01** |**detection_02** |**detection_03** -|||| -|Default choice for all face detection operations. | Released in May 2019 and available optionally in all face detection operations. | Released in February 2021 and available optionally in all face detection operations. -|Not optimized for small, side-view, or blurry faces. | Improved accuracy on small, side-view, and blurry faces. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. -|Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Does not return face attributes. | Returns mask and head pose attributes if they're specified in the detect call. -|Returns face landmarks if they're specified in the detect call. | Does not return face landmarks. | Returns face landmarks if they're specified in the detect call. ++| Model | Description | Performance notes | Attributes | Landmarks | +|||-|-|--| +|**detection_01** | Default choice for all face detection operations. | Not optimized for small, side-view, or blurry faces. | Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. | +|**detection_02** | Released in May 2019 and available optionally in all face detection operations. | Improved accuracy on small, side-view, and blurry faces. | Does not return face attributes. | Does not return face landmarks. | +|**detection_03** | Released in February 2021 and available optionally in all face detection operations. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. | Returns mask and head pose attributes if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. | + The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the [Face - Detect] API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns. |
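To compare models on your own images, as the guide suggests, you can run the same photo through two detection models and compare the number of faces returned. The following is a rough sketch using the REST endpoint via `HttpClient`; the endpoint, key, and image URL are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

string endpoint = "https://<your-face-resource>.cognitiveservices.azure.com";
string key = "<your-face-key>";
string imageUrl = "<image-url>";

using HttpClient http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

async Task<int> CountFacesAsync(string detectionModel)
{
    string uri = $"{endpoint}/face/v1.0/detect?detectionModel={detectionModel}&returnFaceId=false";
    using StringContent body = new StringContent(
        $"{{\"url\":\"{imageUrl}\"}}", Encoding.UTF8, "application/json");
    using HttpResponseMessage response = await http.PostAsync(uri, body);
    response.EnsureSuccessStatusCode();
    using JsonDocument doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
    return doc.RootElement.GetArrayLength();   // the response is a JSON array, one entry per detected face
}

Console.WriteLine($"detection_01: {await CountFacesAsync("detection_01")} face(s)");
Console.WriteLine($"detection_03: {await CountFacesAsync("detection_03")} face(s)");
```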
ai-services | Use Headpose | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-headpose.md | From here, you can use the returned **Face** objects in your display. The follow ## Next steps -See the [Azure AI Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app on GitHub for a working example of rotated face rectangles. Or, see the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples) app, which tracks the HeadPose attribute in real time to detect head movements. +* See the [Azure AI Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app on GitHub for a working example of rotated face rectangles. +* Or, see the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples) app, which tracks the HeadPose attribute in real time to detect head movements. |
ai-services | Use Large Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-large-scale.md | -This guide shows you how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively. PersonGroups can hold up to 1000 persons in the free tier and 10,000 in the paid tier, while LargePersonGroups can hold up to one million persons in the paid tier. +This guide shows you how to scale up from existing **PersonGroup** and **FaceList** objects to **LargePersonGroup** and **LargeFaceList** objects, respectively. **PersonGroups** can hold up to 1000 persons in the free tier and 10,000 in the paid tier, while **LargePersonGroups** can hold up to one million persons in the paid tier. > [!IMPORTANT] > The newer data structure **PersonDirectory** is recommended for new development. It can hold up to 75 million identities and does not require manual training. For more information, see the [PersonDirectory guide](./use-persondirectory.md). -This guide demonstrates the migration process. It assumes a basic familiarity with PersonGroup and FaceList objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concept-face-recognition.md) conceptual guide. +This guide demonstrates the migration process. It assumes a basic familiarity with **PersonGroup** and **FaceList** objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concept-face-recognition.md) conceptual guide. -LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new architecture. +**LargePersonGroup** and **LargeFaceList** are collectively referred to as large-scale operations. **LargePersonGroup** can contain up to 1 million persons, each with a maximum of 248 faces. **LargeFaceList** can contain up to 1 million faces. The large-scale operations are similar to the conventional **PersonGroup** and **FaceList** but have some differences because of the new architecture. The samples are written in C# by using the Azure AI Face client library. > [!NOTE]-> To enable Face search performance for Identification and FindSimilar in large scale, introduce a Train operation to preprocess the LargeFaceList and LargePersonGroup. The training time varies from seconds to about half an hour based on the actual capacity. During the training period, it's possible to perform Identification and FindSimilar if a successful training operating was done before. The drawback is that the new added persons and faces don't appear in the result until a new post migration to large-scale training is completed. +> To enable Face search performance for **Identification** and **FindSimilar** in large-scale, introduce a **Train** operation to preprocess the **LargeFaceList** and **LargePersonGroup**. The training time varies from seconds to about half an hour based on the actual capacity. 
During the training period, it's possible to perform **Identification** and **FindSimilar** if a successful training operating was done before. The drawback is that the new added persons and faces don't appear in the result until a new post migration to large-scale training is completed. ## Step 1: Initialize the client object When you use the Face client library, the key and subscription endpoint are pass ## Step 2: Code migration -This section focuses on how to migrate PersonGroup or FaceList implementation to LargePersonGroup or LargeFaceList. Although LargePersonGroup or LargeFaceList differs from PersonGroup or FaceList in design and internal implementation, the API interfaces are similar for backward compatibility. +This section focuses on how to migrate **PersonGroup** or **FaceList** implementation to **LargePersonGroup** or **LargeFaceList**. Although **LargePersonGroup** or **LargeFaceList** differs from **PersonGroup** or **FaceList** in design and internal implementation, the API interfaces are similar for backward compatibility. -Data migration isn't supported. You re-create the LargePersonGroup or LargeFaceList instead. +Data migration isn't supported. You re-create the **LargePersonGroup** or **LargeFaceList** instead. ### Migrate a PersonGroup to a LargePersonGroup -Migration from a PersonGroup to a LargePersonGroup is simple. They share exactly the same group-level operations. +Migration from a **PersonGroup** to a **LargePersonGroup** is simple. They share exactly the same group-level operations. -For PersonGroup- or person-related implementation, it's necessary to change only the API paths or SDK class/module to LargePersonGroup and LargePersonGroup Person. +For **PersonGroup** or person-related implementation, it's necessary to change only the API paths or SDK class/module to **LargePersonGroup** and **LargePersonGroup** **Person**. -Add all of the faces and persons from the PersonGroup to the new LargePersonGroup. For more information, see [Add faces](add-faces.md). +Add all of the faces and persons from the **PersonGroup** to the new **LargePersonGroup**. For more information, see [Add faces](add-faces.md). ### Migrate a FaceList to a LargeFaceList -| FaceList APIs | LargeFaceList APIs | +| **FaceList** APIs | **LargeFaceList** APIs | |::|::| | Create | Create | | Delete | Delete | Add all of the faces and persons from the PersonGroup to the new LargePersonGrou | - | Train | | - | Get Training Status | -The preceding table is a comparison of list-level operations between FaceList and LargeFaceList. As is shown, LargeFaceList comes with new operations, Train and Get Training Status, when compared with FaceList. Training the LargeFaceList is a precondition of the -[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) operation. Training isn't required for FaceList. The following snippet is a helper function to wait for the training of a LargeFaceList: +The preceding table is a comparison of list-level operations between **FaceList** and **LargeFaceList**. As is shown, **LargeFaceList** comes with new operations, **Train** and **Get Training Status**, when compared with **FaceList**. Training the **LargeFaceList** is a precondition of the +[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) operation. Training isn't required for **FaceList**. 
The following snippet is a helper function to wait for the training of a **LargeFaceList**: ```csharp /// <summary> private static async Task TrainLargeFaceList( } ``` -Previously, a typical use of FaceList with added faces and FindSimilar looked like the following: +Previously, a typical use of **FaceList** with added faces and **FindSimilar** looked like the following: ```csharp // Create a FaceList. using (Stream stream = File.OpenRead(QueryImagePath)) } ``` -When migrating it to LargeFaceList, it becomes the following: +When migrating it to **LargeFaceList**, it becomes the following: ```csharp // Create a LargeFaceList. using (Stream stream = File.OpenRead(QueryImagePath)) } ``` -As previously shown, the data management and the FindSimilar part are almost the same. The only exception is that a fresh preprocessing Train operation must complete in the LargeFaceList before FindSimilar works. +As previously shown, the data management and the **FindSimilar** part are almost the same. The only exception is that a fresh preprocessing **Train** operation must complete in the **LargeFaceList** before **FindSimilar** works. ## Step 3: Train suggestions -Although the Train operation speeds up [FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) -and [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), the training time suffers, especially when coming to large scale. The estimated training time in different scales is listed in the following table. +Although the **Train** operation speeds up **[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237)** +and **[Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239)**, the training time suffers, especially when coming to large scale. The estimated training time in different scales is listed in the following table. | Scale for faces or persons | Estimated training time | |::|::| To better utilize the large-scale feature, we recommend the following strategies ### Step 3a: Customize time interval -As is shown in `TrainLargeFaceList()`, there's a time interval in milliseconds to delay the infinite training status checking process. For LargeFaceList with more faces, using a larger interval reduces the call counts and cost. Customize the time interval according to the expected capacity of the LargeFaceList. +As is shown in `TrainLargeFaceList()`, there's a time interval in milliseconds to delay the infinite training status checking process. For **LargeFaceList** with more faces, using a larger interval reduces the call counts and cost. Customize the time interval according to the expected capacity of the **LargeFaceList**. -The same strategy also applies to LargePersonGroup. For example, when you train a LargePersonGroup with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval. +The same strategy also applies to **LargePersonGroup**. For example, when you train a **LargePersonGroup** with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval. ### Step 3b: Small-scale buffer -Persons or faces in a LargePersonGroup or a LargeFaceList are searchable only after being trained. 
In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired. +Persons or faces in a **LargePersonGroup** or a **LargeFaceList** are searchable only after being trained. In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired. -To mitigate this problem, use an extra small-scale LargePersonGroup or LargeFaceList as a buffer only for the newly added entries. This buffer takes a shorter time to train because of the smaller size. The immediate search capability on this temporary buffer should work. Use this buffer in combination with training on the master LargePersonGroup or LargeFaceList by running the master training on a sparser interval. Examples are in the middle of the night and daily. +To mitigate this problem, use an extra small-scale **LargePersonGroup** or **LargeFaceList** as a buffer only for the newly added entries. This buffer takes a shorter time to train because of the smaller size. The immediate search capability on this temporary buffer should work. Use this buffer in combination with training on the master **LargePersonGroup** or **LargeFaceList** by running the master training on a sparser interval. Examples are in the middle of the night and daily. An example workflow: -1. Create a master LargePersonGroup or LargeFaceList, which is the master collection. Create a buffer LargePersonGroup or LargeFaceList, which is the buffer collection. The buffer collection is only for newly added persons or faces. +1. Create a master **LargePersonGroup** or **LargeFaceList**, which is the master collection. Create a buffer **LargePersonGroup** or **LargeFaceList**, which is the buffer collection. The buffer collection is only for newly added persons or faces. 1. Add new persons or faces to both the master collection and the buffer collection. 1. Only train the buffer collection with a short time interval to ensure that the newly added entries take effect.-1. Call Identification or FindSimilar against both the master collection and the buffer collection. Merge the results. -1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the Train operation on the master collection. -1. Delete the old buffer collection after the Train operation finishes on the master collection. +1. Call Identification or **FindSimilar** against both the master collection and the buffer collection. Merge the results. +1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the **Train** operation on the master collection. +1. Delete the old buffer collection after the **Train** operation finishes on the master collection. ### Step 3c: Standalone training -If a relatively long latency is acceptable, it isn't necessary to trigger the Train operation right after you add new data. Instead, the Train operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the Train frequency. +If a relatively long latency is acceptable, it isn't necessary to trigger the **Train** operation right after you add new data. Instead, the **Train** operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. 
It can be applied to static scenarios to further reduce the **Train** frequency. -Suppose there's a `TrainLargePersonGroup` function similar to `TrainLargeFaceList`. A typical implementation of the standalone training on a LargePersonGroup by invoking the [`Timer`](/dotnet/api/system.timers.timer) class in `System.Timers` is: +Suppose there's a `TrainLargePersonGroup` function similar to `TrainLargeFaceList`. A typical implementation of the standalone training on a **LargePersonGroup** by invoking the [`Timer`](/dotnet/api/system.timers.timer) class in `System.Timers` is: ```csharp private static void Main() For more information about data management and identification-related implementa ## Summary -In this guide, you learned how to migrate the existing PersonGroup or FaceList code, not data, to the LargePersonGroup or LargeFaceList: +In this guide, you learned how to migrate the existing **PersonGroup** or **FaceList** code, not data, to the **LargePersonGroup** or **LargeFaceList**: -- LargePersonGroup and LargeFaceList work similar to PersonGroup or FaceList, except that the Train operation is required by LargeFaceList.-- Take the proper Train strategy to dynamic data update for large-scale data sets.+- **LargePersonGroup** and **LargeFaceList** work similar to **PersonGroup** or **FaceList**, except that the **Train** operation is required by **LargeFaceList**. +- Take the proper **Train** strategy to dynamic data update for large-scale data sets. ## Next steps -Follow a how-to guide to learn how to add faces to a PersonGroup or write a script to do the Identify operation on a PersonGroup. +Follow a how-to guide to learn how to add faces to a **PersonGroup** or write a script to do the **Identify** operation on a **PersonGroup**. - [Add faces](add-faces.md) - [Face client library quickstart](../quickstarts-sdk/identity-client-library.md) |
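The guide refers to a `TrainLargePersonGroup` function analogous to `TrainLargeFaceList` but doesn't list it. Here is a minimal sketch of what such a helper could look like, assuming the same Microsoft.Azure.CognitiveServices.Vision.Face client library used elsewhere in the guide.

```csharp
/// <summary>
/// Starts training on a LargePersonGroup and polls until it succeeds or fails.
/// Use a larger timeIntervalInMilliseconds for larger groups (for example, 60000 for ~1 million persons).
/// </summary>
private static async Task TrainLargePersonGroup(
    IFaceClient faceClient, string largePersonGroupId, int timeIntervalInMilliseconds)
{
    await faceClient.LargePersonGroup.TrainAsync(largePersonGroupId);

    while (true)
    {
        TrainingStatus status =
            await faceClient.LargePersonGroup.GetTrainingStatusAsync(largePersonGroupId);

        if (status.Status == TrainingStatusType.Succeeded)
        {
            break;
        }
        if (status.Status == TrainingStatusType.Failed)
        {
            throw new InvalidOperationException("Training of the LargePersonGroup failed.");
        }

        await Task.Delay(timeIntervalInMilliseconds);
    }
}
```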
ai-services | Studio Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/studio-quickstart.md | |
ai-services | Create Account Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/create-account-terraform.md | Title: 'Quickstart: Create an Azure AI services resource using Terraform' description: 'In this article, you create an Azure AI services resource using Terraform' keywords: Azure AI services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence -# Last updated 4/14/2023 |
ai-services | Getting Started Improving Your Classifier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/getting-started-improving-your-classifier.md | Title: Improving your model - Custom Vision Service + Title: Improving your model - Custom Vision service description: In this article you'll learn how the amount, quality and variety of data can improve the quality of your model in the Custom Vision service. #-In this guide, you'll learn how to improve the quality of your Custom Vision Service model. The quality of your [classifier](./getting-started-build-a-classifier.md) or [object detector](./get-started-build-detector.md) depends on the amount, quality, and variety of the labeled data you provide it and how balanced the overall dataset is. A good model has a balanced training dataset that is representative of what will be submitted to it. The process of building such a model is iterative; it's common to take a few rounds of training to reach expected results. +In this guide, you'll learn how to improve the quality of your Custom Vision model. The quality of your [classifier](./getting-started-build-a-classifier.md) or [object detector](./get-started-build-detector.md) depends on the amount, quality, and variety of labeled data you provide and how balanced the overall dataset is. A good model has a balanced training dataset that is representative of what will be submitted to it. The process of building such a model is iterative; it's common to take a few rounds of training to reach expected results. The following is a general pattern to help you train a more accurate model: If you're using an image classifier, you may need to add _negative samples_ to h Object detectors handle negative samples automatically, because any image areas outside of the drawn bounding boxes are considered negative. > [!NOTE]-> The Custom Vision Service supports some automatic negative image handling. For example, if you are building a grape vs. banana classifier and submit an image of a shoe for prediction, the classifier should score that image as close to 0% for both grape and banana. +> The Custom Vision service supports some automatic negative image handling. For example, if you are building a grape vs. banana classifier and submit an image of a shoe for prediction, the classifier should score that image as close to 0% for both grape and banana. > > On the other hand, in cases where the negative images are just a variation of the images used in training, it is likely that the model will classify the negative images as a labeled class due to the great similarities. For example, if you have an orange vs. grapefruit classifier, and you feed in an image of a clementine, it may score the clementine as an orange because many features of the clementine resemble those of oranges. If your negative images are of this nature, we recommend you create one or more additional tags (such as **Other**) and label the negative images with this tag during training to allow the model to better differentiate between these classes. When you use or test the model by submitting images to the prediction endpoint, ![screenshot of the predictions tab, with images in view](./media/getting-started-improving-your-classifier/predictions.png) -2. Hover over an image to see the tags that were predicted by the model. Images are sorted so that the ones that can bring the most improvements to the model are listed the top. 
To use a different sorting method, make a selection in the __Sort__ section. +1. Hover over an image to see the tags that were predicted by the model. Images are sorted so that the ones that can bring the most improvements to the model are listed at the top. To use a different sorting method, make a selection in the __Sort__ section. To add an image to your existing training data, select the image, set the correct tag(s), and select __Save and close__. The image will be removed from __Predictions__ and added to the set of training images. You can view it by selecting the __Training Images__ tab. ![Screenshot of the tagging page.](./media/getting-started-improving-your-classifier/tag.png) -3. Then use the __Train__ button to retrain the model. +1. Then use the __Train__ button to retrain the model. ## Visually inspect predictions |
ai-services | Use Prediction Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/use-prediction-api.md | -After you've trained your model, you can test images programmatically by submitting them to the prediction API endpoint. In this guide, you'll learn how to call the prediction API to score an image. You'll learn the different ways you can configure the behavior of this API to meet your needs. +After you've trained your model, you can test it programmatically by submitting images to the prediction API endpoint. In this guide, you'll learn how to call the prediction API to score an image. You'll learn the different ways you can configure the behavior of this API to meet your needs. > [!NOTE] > This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15). |
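As a rough illustration of that flow, the following C# sketch scores a local image against a published iteration. It assumes the `Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction` client library; the endpoint, key, project ID, image path, and published iteration name are placeholders.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction;

class PredictImage
{
    static async Task Main()
    {
        // Placeholder values: use your own prediction resource endpoint, key, and project details.
        var client = new CustomVisionPredictionClient(new ApiKeyServiceClientCredentials("<prediction-key>"))
        {
            Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
        };
        var projectId = Guid.Parse("<project-id>");

        using var image = File.OpenRead("test-image.jpg");
        var result = await client.ClassifyImageAsync(projectId, "<published-iteration-name>", image);

        // Each prediction carries a tag name and a probability score.
        foreach (var prediction in result.Predictions)
        {
            Console.WriteLine($"{prediction.TagName}: {prediction.Probability:P1}");
        }
    }
}
```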
ai-services | Concept Read | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md | -> For extracting text from external images like labels, street signs, and posters, use the [Azure AI Vision v4.0 preview Read](../../ai-services/Computer-vision/concept-ocr.md) feature optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios. +> For extracting text from external images like labels, street signs, and posters, use the [Azure AI Image Analysis v4.0 Read](../../ai-services/Computer-vision/concept-ocr.md) feature optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios. > Document Intelligence Read Optical Character Recognition (OCR) model runs at a higher resolution than Azure AI Vision Read and extracts print and handwritten text from PDF documents and scanned images. It also includes support for extracting text from Microsoft Word, Excel, PowerPoint, and HTML documents. It detects paragraphs, text lines, words, locations, and languages. The Read model is the underlying OCR engine for other Document Intelligence prebuilt models like Layout, General Document, Invoice, Receipt, Identity (ID) document, Health insurance card, W2 in addition to custom models. |
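For context, a minimal C# sketch of calling the Read (OCR) model with the `Azure.AI.FormRecognizer.DocumentAnalysis` client library might look like the following; the endpoint, key, and document URL are placeholders, and newer Document Intelligence SDK packages may expose a differently named client.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

class ReadOcrSample
{
    static async Task Main()
    {
        // Placeholder endpoint and key for your Document Intelligence resource.
        var client = new DocumentAnalysisClient(
            new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<key>"));

        // "prebuilt-read" is the Read OCR model described above.
        var operation = await client.AnalyzeDocumentFromUriAsync(
            WaitUntil.Completed, "prebuilt-read", new Uri("https://<your-storage>/sample.pdf"));

        AnalyzeResult result = operation.Value;
        foreach (DocumentPage page in result.Pages)
        {
            foreach (DocumentLine line in page.Lines)
            {
                Console.WriteLine(line.Content);
            }
        }
    }
}
```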
ai-services | How To Create Immersive Reader | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-create-immersive-reader.md | Title: "Create an Immersive Reader Resource" + Title: Create an Immersive Reader resource -description: This article shows you how to create a new Immersive Reader resource with a custom subdomain and then configure Microsoft Entra ID in your Azure tenant. +description: Learn how to create a new Immersive Reader resource with a custom subdomain and then configure Microsoft Entra ID in your Azure tenant. # Previously updated : 03/31/2023 Last updated : 02/12/2024 # Create an Immersive Reader resource and configure Microsoft Entra authentication -In this article, we provide a script that creates an Immersive Reader resource and configure Microsoft Entra authentication. Each time an Immersive Reader resource is created, whether with this script or in the portal, it must also be configured with Microsoft Entra permissions. +This article explains how to create an Immersive Reader resource by using the provided script. This script also configures Microsoft Entra authentication. Each time an Immersive Reader resource is created, whether with this script or in the portal, it must be configured with Microsoft Entra permissions. -The script is designed to create and configure all the necessary Immersive Reader and Microsoft Entra resources for you all in one step. However, you can also just configure Microsoft Entra authentication for an existing Immersive Reader resource, if for instance, you happen to have already created one in the Azure portal. +The script creates and configures all the necessary Immersive Reader and Microsoft Entra resources for you. However, you can also configure Microsoft Entra authentication for an existing Immersive Reader resource, if you already created one in the Azure portal. The script first looks for existing Immersive Reader and Microsoft Entra resources in your subscription, and creates them only if they don't already exist. -For some customers, it may be necessary to create multiple Immersive Reader resources, for development vs. production, or perhaps for multiple different regions your service is deployed in. For those cases, you can come back and use the script multiple times to create different Immersive Reader resources and get them configured with the Microsoft Entra permissions. --The script is designed to be flexible. It first looks for existing Immersive Reader and Microsoft Entra resources in your subscription, and creates them only as necessary if they don't already exist. If it's your first time creating an Immersive Reader resource, the script does everything you need. If you want to use it just to configure Microsoft Entra ID for an existing Immersive Reader resource that was created in the portal, it does that too. -It can also be used to create and configure multiple Immersive Reader resources. +For some customers, it might be necessary to create multiple Immersive Reader resources, for development versus production, or perhaps for different regions where your service is deployed. For those cases, you can come back and use the script multiple times to create different Immersive Reader resources and get them configured with Microsoft Entra permissions. ## Permissions If you aren't an owner, the following scope-specific permissions are required: * **Contributor**. 
You need to have at least a Contributor role associated with the Azure subscription: - :::image type="content" source="media/contributor-role.png" alt-text="Screenshot of contributor built-in role description."::: + :::image type="content" source="media/contributor-role.png" alt-text="Screenshot of contributor built-in role description."::: * **Application Developer**. You need to have at least an Application Developer role associated in Microsoft Entra ID: - :::image type="content" source="media/application-developer-role.png" alt-text="{alt-text}"::: + :::image type="content" source="media/application-developer-role.png" alt-text="Screenshot of the developer built-in role description."::: -For more information, _see_ [Microsoft Entra built-in roles](../../active-directory/roles/permissions-reference.md#application-developer) +For more information, see [Microsoft Entra built-in roles](../../active-directory/roles/permissions-reference.md#application-developer). -## Set up PowerShell environment +## Set up PowerShell resources -1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to PowerShell in the upper-left hand dropdown or by typing `pwsh`. +1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to **PowerShell** in the upper-left hand dropdown or by typing `pwsh`. 1. Copy and paste the following code snippet into the shell. For more information, _see_ [Microsoft Entra built-in roles](../../active-direct Write-Host "Immersive Reader resource created successfully" } - # Create an Azure Active Directory app if it doesn't already exist + # Create an Microsoft Entra app if it doesn't already exist $clientId = az ad app show --id $AADAppIdentifierUri --query "appId" -o tsv if (-not $clientId) {- Write-Host "Creating new Azure Active Directory app" + Write-Host "Creating new Microsoft Entra app" $clientId = az ad app create --display-name $AADAppDisplayName --identifier-uris $AADAppIdentifierUri --query "appId" -o tsv if (-not $clientId) {- throw "Error: Failed to create Azure Active Directory application" + throw "Error: Failed to create Microsoft Entra application" }- Write-Host "Azure Active Directory application created successfully." + Write-Host "Microsoft Entra application created successfully." $clientSecret = az ad app credential reset --id $clientId --end-date "$AADAppClientSecretExpiration" --query "password" | % { $_.Trim('"') } if (-not $clientSecret) {- throw "Error: Failed to create Azure Active Directory application client secret" + throw "Error: Failed to create Microsoft Entra application client secret" }- Write-Host "Azure Active Directory application client secret created successfully." + Write-Host "Microsoft Entra application client secret created successfully." 
- Write-Host "NOTE: To manage your Active Directory application client secrets after this Immersive Reader Resource has been created please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section" -ForegroundColor Yellow + Write-Host "NOTE: To manage your Microsoft Entra application client secrets after this Immersive Reader Resource has been created please visit https://portal.azure.com and go to Home -> Microsoft Entra ID -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section" -ForegroundColor Yellow } # Create a service principal if it doesn't already exist For more information, _see_ [Microsoft Entra built-in roles](../../active-direct } Write-Host "Service principal access granted successfully" - # Grab the tenant ID, which is needed when obtaining an Azure AD token + # Grab the tenant ID, which is needed when obtaining a Microsoft Entra token $tenantId = az account show --query "tenantId" -o tsv - # Collect the information needed to obtain an Azure AD token into one object + # Collect the information needed to obtain a Microsoft Entra token into one object $result = @{} $result.TenantId = $tenantId $result.ClientId = $clientId For more information, _see_ [Microsoft Entra built-in roles](../../active-direct Write-Host "*****" if($clientSecret -ne $null) { - Write-Host "This function has created a client secret (password) for you. This secret is used when calling Azure Active Directory to fetch access tokens." - Write-Host "This is the only time you will ever see the client secret for your Azure Active Directory application, so save it now." -ForegroundColor Yellow + Write-Host "This function has created a client secret (password) for you. This secret is used when calling Microsoft Entra to fetch access tokens." + Write-Host "This is the only time you will ever see the client secret for your Microsoft Entra application, so save it now." -ForegroundColor Yellow } else{- Write-Host "You will need to retrieve the ClientSecret from your original run of this function that created it. If you don't have it, you will need to go create a new client secret for your Azure Active Directory application. Please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section." -ForegroundColor Yellow + Write-Host "You will need to retrieve the ClientSecret from your original run of this function that created it. If you don't have it, you will need to go create a new client secret for your Microsoft Entra application. Please visit https://portal.azure.com and go to Home -> Microsoft Entra ID -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section." -ForegroundColor Yellow } Write-Host "*****`n" Write-Output (ConvertTo-Json $result) For more information, _see_ [Microsoft Entra built-in roles](../../active-direct 1. Run the function `Create-ImmersiveReaderResource`, supplying the '<PARAMETER_VALUES>' placeholders with your own values as appropriate. 
```azurepowershell-interactive- Create-ImmersiveReaderResource -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceName '<RESOURCE_NAME>' -ResourceSubdomain '<RESOURCE_SUBDOMAIN>' -ResourceSKU '<RESOURCE_SKU>' -ResourceLocation '<RESOURCE_LOCATION>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceGroupLocation '<RESOURCE_GROUP_LOCATION>' -AADAppDisplayName '<AAD_APP_DISPLAY_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>' -AADAppClientSecretExpiration '<AAD_APP_CLIENT_SECRET_EXPIRATION>' + Create-ImmersiveReaderResource -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceName '<RESOURCE_NAME>' -ResourceSubdomain '<RESOURCE_SUBDOMAIN>' -ResourceSKU '<RESOURCE_SKU>' -ResourceLocation '<RESOURCE_LOCATION>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceGroupLocation '<RESOURCE_GROUP_LOCATION>' -AADAppDisplayName '<MICROSOFT_ENTRA_DISPLAY_NAME>' -AADAppIdentifierUri '<MICROSOFT_ENTRA_IDENTIFIER_URI>' -AADAppClientSecretExpiration '<MICROSOFT_ENTRA_CLIENT_SECRET_EXPIRATION>' ``` - The full command looks something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. __Do not copy or use this command as-is.__ Copy and use the command with your own values. This example has dummy values for the '<PARAMETER_VALUES>'. Yours may be different, as you come up with your own names for these values. + The full command looks something like the following. Here we put each parameter on its own line for clarity, so you can see the whole command. __Do not copy or use this command as-is.__ Copy and use the command with your own values. This example has dummy values for the `<PARAMETER_VALUES>`. Yours might be different, as you come up with your own names for these values. ``` Create-ImmersiveReaderResource For more information, _see_ [Microsoft Entra built-in roles](../../active-direct | Parameter | Comments | | | | | SubscriptionName |Name of the Azure subscription to use for your Immersive Reader resource. You must have a subscription in order to create a resource. |- | ResourceName | Must be alphanumeric, and may contain '-', as long as the '-' isn't the first or last character. Length may not exceed 63 characters.| - | ResourceSubdomain |A custom subdomain is needed for your Immersive Reader resource. The subdomain is used by the SDK when calling the Immersive Reader service to launch the Reader. The subdomain must be globally unique. The subdomain must be alphanumeric, and may contain '-', as long as the '-' isn't the first or last character. Length may not exceed 63 characters. This parameter is optional if the resource already exists. | - | ResourceSKU |Options: `S0` (Standard tier) or `S1` (Education/Nonprofit organizations). Visit our [Azure AI services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/immersive-reader/) to learn more about each available SKU. This parameter is optional if the resource already exists. | + | ResourceName | Must be alphanumeric, and can contain `-`, as long as the `-` isn't the first or last character. Length can't exceed 63 characters.| + | ResourceSubdomain |A custom subdomain is needed for your Immersive Reader resource. The subdomain is used by the SDK when calling the Immersive Reader service to launch the Reader. The subdomain must be globally unique. The subdomain must be alphanumeric, and can contain `-`, as long as the `-` isn't the first or last character. Length can't exceed 63 characters. This parameter is optional if the resource already exists. 
| + | ResourceSKU |Options: `S0` (Standard tier) or `S1` (Education/Nonprofit organizations). To learn more about each available SKU, visit our [Azure AI services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/immersive-reader/). This parameter is optional if the resource already exists. | | ResourceLocation |Options: `australiaeast`, `brazilsouth`, `canadacentral`, `centralindia`, `centralus`, `eastasia`, `eastus`, `eastus2`, `francecentral`, `germanywestcentral`, `japaneast`, `japanwest`, `jioindiawest`, `koreacentral`, `northcentralus`, `northeurope`, `norwayeast`, `southafricanorth`, `southcentralus`, `southeastasia`, `swedencentral`, `switzerlandnorth`, `switzerlandwest`, `uaenorth`, `uksouth`, `westcentralus`, `westeurope`, `westus`, `westus2`, `westus3`. This parameter is optional if the resource already exists. | | ResourceGroupName |Resources are created in resource groups within subscriptions. Supply the name of an existing resource group. If the resource group doesn't already exist, a new one with this name is created. | | ResourceGroupLocation |If your resource group doesn't exist, you need to supply a location in which to create the group. To find a list of locations, run `az account list-locations`. Use the *name* property (without spaces) of the returned result. This parameter is optional if your resource group already exists. | | AADAppDisplayName |The Microsoft Entra application display name. If an existing Microsoft Entra application isn't found, a new one with this name is created. This parameter is optional if the Microsoft Entra application already exists. | | AADAppIdentifierUri |The URI for the Microsoft Entra application. If an existing Microsoft Entra application isn't found, a new one with this URI is created. For example, `api://MyOrganizationImmersiveReaderAADApp`. Here we're using the default Microsoft Entra URI scheme prefix of `api://` for compatibility with the [Microsoft Entra policy of using verified domains](../../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains). |- | AADAppClientSecretExpiration |The date or datetime after which your Microsoft Entra Application Client Secret (password) will expire (for example, '2020-12-31T11:59:59+00:00' or '2020-12-31'). This function creates a client secret for you. To manage Microsoft Entra application client secrets after you've created this resource, visit https://portal.azure.com and go to Home -> Microsoft Entra ID -> App Registrations -> (your app) `[AADAppDisplayName]` -> Certificates and Secrets section -> Client Secrets section (as shown in the "Manage your Microsoft Entra application secrets" screenshot).| + | AADAppClientSecretExpiration |The date or datetime after which your Microsoft Entra Application Client Secret (password) expires (for example, '2020-12-31T11:59:59+00:00' or '2020-12-31'). This function creates a client secret for you. | - Manage your Microsoft Entra application secrets + To manage your Microsoft Entra application client secrets after you create this resource, visit the [Azure portal](https://portal.azure.com) and go to **Home** -> **Microsoft Entra ID** -> **App Registrations** -> (your app) `[AADAppDisplayName]` -> **Certificates and Secrets** section -> **Client Secrets** section. 
- ![Azure portal Certificates and Secrets blade](./media/client-secrets-blade.png) + :::image type="content" source="media/client-secrets-blade.png" alt-text="Screenshot of the Azure portal Certificates and Secrets pane." lightbox="media/client-secrets-blade.png"::: 1. Copy the JSON output into a text file for later use. The output should look like the following. For more information, _see_ [Microsoft Entra built-in roles](../../active-direct } ``` -## Next steps +## Next step -* View the [Node.js quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js -* View the [Android tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android -* View the [iOS tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS -* View the [Python tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python -* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md) +> [!div class="nextstepaction"] +> [How to launch the Immersive Reader](how-to-launch-immersive-reader.md) |
ai-services | Use Native Documents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/native-document-support/use-native-documents.md | -A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for the following capabilities: +A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing before using Azure AI Language resource capabilities. Currently, native document support is available for the following capabilities: * [Personally Identifiable Information (PII)](../personally-identifiable-information/overview.md). The PII detection feature can identify, categorize, and redact sensitive information in unstructured text. The `PiiEntityRecognition` API supports native document processing. A native document refers to the file format used to create the original document ## Supported document formats - Applications use native file formats to create, save, or open native documents. Currently **PII** and **Document summarization** capabilities supports the following native document formats: + Applications use native file formats to create, save, or open native documents. Currently **PII** and **Document summarization** capabilities supports the following native document formats: |File type|File extension|Description| ||--|--| A native document refers to the file format used to create the original document > [!NOTE] > The cURL package is pre-installed on most Windows 10 and Windows 11 and most macOS and Linux distributions. You can check the package version with the following commands:- > Windows: `curl.exe -V`. + > Windows: `curl.exe -V` > macOS `curl -V` > Linux: `curl --version` A native document refers to the file format used to create the original document * [Windows](https://curl.haxx.se/windows/). * [Mac or Linux](https://learn2torials.com/thread/how-to-install-curl-on-mac-or-linux-(ubuntu)-or-windows). -* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). +* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). * An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files: Your Language resource needs granted access to your storage account before it ca * [**Shared access signature (SAS) tokens**](shared-access-signatures.md). User delegation SAS tokens are secured with Microsoft Entra credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account. -* [**Managed identity role-based access control (RBAC)**](managed-identities.md). Managed identities for Azure resources are service principals that create a Microsoft Entra identity and specific permissions for Azure managed resources +* [**Managed identity role-based access control (RBAC)**](managed-identities.md). 
Managed identities for Azure resources are service principals that create a Microsoft Entra identity and specific permissions for Azure managed resources. For this project, we authenticate access to the `source location` and `target location` URLs with Shared Access Signature (SAS) tokens appended as query strings. Each token is assigned to a specific blob (file). For this quickstart, you need a **source document** uploaded to your **source co "language": "en-US", "id": "Output-excel-file", "source": {- "location": "{your-source-container-with-SAS-URL}" + "location": "{your-source-blob-with-SAS-URL}" }, "target": { "location": "{your-target-container-with-SAS-URL}" For this quickstart, you need a **source document** uploaded to your **source co { "kind": "PiiEntityRecognition", "parameters":{- "excludePiiCategoriesredac" : ["PersonType", "Category2", "Category3"], - "redactionPolicy": "UseEntityTypeName" + "excludePiiCategories" : ["PersonType", "Category2", "Category3"], + "redactionPolicy": "UseRedactionCharacterWithRefId" } } ] For this project, you need a **source document** uploaded to your **source conta "documents":[ { "source":{- "location":"{your-source-container-SAS-URL}" + "location":"{your-source-blob-SAS-URL}" }, "targets": { |
ai-services | Assistants Reference Threads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference-threads.md | curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads?api-version=2024 ## Retrieve thread ```http-GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads{thread_id}?api-version=2024-02-15-preview +GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview ``` Retrieves a thread. curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api- | `id` | string | The identifier, which can be referenced in API endpoints.| | `object` | string | The object type, which is always thread. | | `created_at` | integer | The Unix timestamp (in seconds) for when the thread was created. |-| `metadata` | map | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. | +| `metadata` | map | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. | |
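With the corrected path, the retrieve call might look like the following; the resource name, thread ID, and key are placeholders, and `api-key` is the standard Azure OpenAI authentication header.

```bash
curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview" \
  -H "api-key: $AZURE_OPENAI_API_KEY"
```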
ai-services | Advanced Prompt Engineering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/advanced-prompt-engineering.md | Title: Prompt engineering techniques with Azure OpenAI -description: Learn about the options for how to use prompt engineering with GPT-3, GPT-35-Turbo, and GPT-4 models -+description: Learn about the options for how to use prompt engineering with GPT-3, GPT-35-Turbo, and GPT-4 models. + Previously updated : 04/20/2023 Last updated : 02/16/2024 keywords: ChatGPT, GPT-4, prompt engineering, meta prompts, chain of thought zone_pivot_groups: openai-prompt While the principles of prompt engineering can be generalized across many differ Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the GPT-35-Turbo and GPT-4 models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries. -The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules. Technically the GPT-35-Turbo models can be used with either APIs, but we strongly recommend using the Chat Completion API for these models. To learn more, please consult our [in-depth guide on using these APIs](../how-to/chatgpt.md). +The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules. The techniques in this guide will teach you strategies for increasing the accuracy and grounding of responses you generate with a Large Language Model (LLM). It is, however, important to remember that even when using prompt engineering effectively you still need to validate the responses the models generate. Just because a carefully crafted prompt worked well for a particular scenario doesn't necessarily mean it will generalize more broadly to certain use cases. Understanding the [limitations of LLMs](/legal/cognitive-services/openai/transparency-note?context=/azure/ai-services/openai/context/context#limitations), is just as important as understanding how to leverage their strengths. |
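To make the "chat-like transcript stored inside an array of dictionaries" concrete, a Chat Completion request body looks roughly like the following sketch; the resource name, deployment name, key, message content, and api-version are placeholders chosen for illustration.

```bash
curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are a helpful assistant that answers questions about hiking."},
      {"role": "user", "content": "Recommend a trail near Seattle."}
    ],
    "temperature": 0.7
  }'
```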
ai-services | Gpt With Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/gpt-with-vision.md | Enhancements let you incorporate other Azure AI services (such as Azure AI Visio **Optical Character Recognition (OCR)**: Azure AI Vision complements GPT-4 Turbo with Vision by providing high-quality OCR results as supplementary information to the chat model. It allows the model to produce higher quality responses for images with dense text, transformed images, and numbers-heavy financial documents, and increases the variety of languages the model can recognize in text. > [!IMPORTANT]-> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. +> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. :::image type="content" source="../media/concepts/gpt-v/receipts.png" alt-text="Photo of several receipts."::: Enhancements let you incorporate other Azure AI services (such as Azure AI Visio > [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RW1eHRf] > [!NOTE]-> In order to use the video prompt enhancement, you need both an Azure AI Vision resource and an Azure Video Indexer resource, in the paid (S0) tier, in addition to your Azure OpenAI resource. +> In order to use the video prompt enhancement, you need both an Azure AI Vision resource and an Azure Video Indexer resource, in the paid (S1) tier, in addition to your Azure OpenAI resource. ## Special pricing information |
ai-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md | See [model versions](../concepts/model-versions.md) to learn about how Azure Ope **<sup>1</sup>** This model will accept requests > 4,096 tokens. It is not recommended to exceed the 4,096 input token limit as the newer version of the model are capped at 4,096 tokens. If you encounter issues when exceeding 4,096 input tokens with this model this configuration is not officially supported. -#### Azure Government regions --The following GPT-3 models are available with [Azure Government](/azure/azure-government/documentation-government-welcome): --|Model ID | Model Availability | -|--|--| -|`gpt-35-turbo` (1106) |US Gov Virginia<br>US Gov Arizona | - ### Embeddings models These models can only be used with Embedding API requests. |
ai-services | Gpt With Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md | The **Optical character recognition (OCR)** integration allows the model to prod The **object grounding** integration brings a new layer to data analysis and user interaction, as the feature can visually distinguish and highlight important elements in the images it processes. > [!IMPORTANT]-> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. +> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. > [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information). GPT-4 Turbo with Vision provides exclusive access to Azure AI Services tailored Follow these steps to set up a video retrieval system and integrate it with your AI chat model. > [!IMPORTANT]-> To use Vision enhancement, you need an Azure AI Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. +> To use Vision enhancement, you need an Azure AI Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. > [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information). |
ai-services | Switching Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md | |
ai-services | Work With Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/work-with-code.md | Title: 'How to use the Codex models to work with code' -description: Learn how to use the Codex models on Azure OpenAI to handle a variety of coding tasks +description: Learn how to use the Codex models on Azure OpenAI to handle a variety of coding tasks. # Previously updated : 06/24/2022-- Last updated : 02/15/2024++ # Codex models and Azure OpenAI Service +> [!NOTE] +> This article was authored and tested against the [legacy code generation models](/azure/ai-services/openai/concepts/legacy-models). These models use the completions API, and its prompt/completion style of interaction. If you wish to test the techniques described in this article verbatim we recommend using the `gpt-35-turbo-instruct` model which allows access to the completions API. However, for code generation the chat completions API and the latest GPT-4 models will generally yield the best results, but the prompts would need to be converted to the conversational style specific to interacting with those models. + The Codex model series is a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell. You can use Codex for a variety of tasks including: Codex understands dozens of different programming languages. Many share similar ### Prompt Codex with what you want it to do -If you want Codex to create a webpage, placing the first line of code in an HTML document (`<!DOCTYPE html>`) after your comment tells Codex what it should do next. The same method works for creating a function from a comment (following the comment with a new line starting with func or def). +If you want Codex to create a webpage, placing the initial line of code in an HTML document (`<!DOCTYPE html>`) after your comment tells Codex what it should do next. The same method works for creating a function from a comment (following the comment with a new line starting with func or def). ```html <!-- Create a web page with the title 'Kat Katman attorney at paw' --> animals = [ {"name": "Chomper", "species": "Hamster"}, {"name": ### Lower temperatures give more precise results -Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models may give you really random or erratic responses. +Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models might produce random or erratic responses. In cases where you need Codex to provide different potential results, start at zero and then increment upwards by 0.1 until you find suitable variation. |
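As an illustration of the low-temperature guidance, a Completions API request (for example, against a `gpt-35-turbo-instruct` deployment, as the note above suggests) might look like this sketch; the resource name, deployment name, key, and prompt are placeholders.

```bash
curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "prompt": "# Python 3\n# Return a list of the first n Fibonacci numbers\ndef fibonacci(n):",
    "max_tokens": 200,
    "temperature": 0
  }'
```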
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md | |
ai-services | Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md | POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen **Supported versions** - `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)-- `2023-03-15-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)+- `2023-03-15-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json) +- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) +- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json) +- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json) - `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)-- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-12-15-preview/inference.json)+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) **Request body** POST 
https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen **Supported versions** - `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)-- `2023-03-15-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)+- `2023-03-15-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json) +- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) +- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json) +- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json) - `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) - `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen **Supported versions** -- `2023-03-15-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)+- `2023-03-15-preview` (retiring April 2, 2024) 
[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json) +- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) +- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json) +- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json) - `2023-12-01-preview` (required for Vision scenarios) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview) - `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com | ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. 
| **Supported versions**-- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json) +- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) +- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json) +- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json) - `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) - `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) POST https://{your-resource-name}.openai.azure.com/openai/images/generations:sub **Supported versions** -- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json) +- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) +- `2023-08-01-preview` (retiring 
April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json) - `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) - `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) GET https://{your-resource-name}.openai.azure.com/openai/operations/images/{oper **Supported versions** -- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json) +- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) +- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json) - `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) DELETE https://{your-resource-name}.openai.azure.com/openai/operations/images/{o **Supported versions** -- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json) +- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) +- 
`2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json) #### Example request POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen **Supported versions** -- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)+- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json) - `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) - `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen **Supported versions** -- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)+- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json) - `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) - `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) |
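As a point of reference for the image generation routes listed above, the following is a minimal sketch of a request against the deployments-based endpoint, assuming a resource of your own, a DALL-E deployment name, an `AZURE_OPENAI_KEY` environment variable, and the `2024-02-15-preview` API version; the exact request body fields and response shape should be confirmed against the linked Swagger specs.

```python
import os
import requests

# Hypothetical values for illustration; replace with your own resource and deployment names.
resource_name = "my-resource"
deployment = "my-dalle-deployment"
api_version = "2024-02-15-preview"

url = (
    f"https://{resource_name}.openai.azure.com/openai/deployments/"
    f"{deployment}/images/generations?api-version={api_version}"
)
headers = {
    "api-key": os.environ["AZURE_OPENAI_KEY"],  # key is read from the environment, not hard-coded
    "Content-Type": "application/json",
}
body = {"prompt": "A watercolor painting of a lighthouse at dawn", "n": 1, "size": "1024x1024"}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json())  # the generated image references are returned in the response body
```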
ai-services | Captioning Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-concepts.md | |
ai-services | Captioning Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-quickstart.md | -# ms.devlang: cpp, csharp zone_pivot_groups: programming-languages-speech-sdk-cli |
ai-services | Get Speech Recognition Results | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-speech-recognition-results.md | -# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python zone_pivot_groups: programming-languages-speech-sdk-cli keywords: speech to text, speech to text software |
ai-services | Get Started Intent Recognition Clu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-intent-recognition-clu.md | |
ai-services | Get Started Intent Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-intent-recognition.md | -# ms.devlang: cpp, csharp, java, javascript, python zone_pivot_groups: programming-languages-speech-services keywords: intent recognition keywords: intent recognition # Quickstart: Recognize intents with the Speech service and LUIS > [!IMPORTANT]-> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities. +> LUIS will be retired on October 1st 2025. As of April 1st 2023 you can't create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities. > > Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU. |
ai-services | Get Started Speaker Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speaker-recognition.md | -# ms.devlang: cpp, csharp, javascript zone_pivot_groups: programming-languages-speech-services keywords: speaker recognition, voice biometry |
ai-services | Get Started Speech Translation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speech-translation.md | |
ai-services | Intent Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/intent-recognition.md | Both a Speech resource and Language resource are required to use CLU with the Sp For information about how to use conversational language understanding without the Speech SDK and without speech recognition, see the [Language service documentation](../language-service/conversational-language-understanding/overview.md). > [!IMPORTANT]-> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities. +> LUIS will be retired on October 1st 2025. As of April 1st 2023 you can't create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities. > > Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU. |
ai-services | Speech Container Cstt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-cstt.md | The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text`. | Version | Path | |--|--| | Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest` |-| 4.5.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:4.5.0-amd64` | +| 4.6.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:4.6.0-amd64` | All tags, except for `latest`, are in the following format and are case sensitive: |
ai-services | Speech Container Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-overview.md | The following table lists the Speech containers available in the Microsoft Conta | Container | Features | Supported versions and locales | |--|--|--|-| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.5.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| -| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [custom speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.5.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | +| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.6.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).| +| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [custom speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.6.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). | | [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). | | [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 3.0.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). | |
ai-services | Speech Container Stt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-stt.md | The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text`. | Version | Path | |--|--| | Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest`<br/><br/>The `latest` tag pulls the latest image for the `en-US` locale. |-| 4.5.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:4.5.0-amd64-mr-in` | +| 4.6.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:4.6.0-amd64-mr-in` | All tags, except for `latest`, are in the following format and are case sensitive: |
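For readers updating to the 4.6.0 container images listed above, the following is a minimal sketch of using the Speech SDK for Python against a speech-to-text container that is already running locally; the host URL, port, and audio file name are assumptions and should be adjusted to match how the container was started.

```python
import azure.cognitiveservices.speech as speechsdk

# Point the Speech SDK at a locally running speech-to-text container
# instead of the cloud endpoint. Host, port, and file name are placeholders.
speech_config = speechsdk.SpeechConfig(host="ws://localhost:5000")
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
result = recognizer.recognize_once()  # single-utterance recognition against the container

print(result.text)
```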
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/language-support.md | -**Translation - Cloud:** Cloud translation is available in all languages for the Translate operation of Text Translation and for Document Translation. +**Translation - Cloud:** Cloud translation is available in all languages for the `Translate` operation of Text Translation and for Document Translation. **Translation ΓÇô Containers:** Language support for Containers. -**Dictionary:** Use the [Dictionary Lookup](reference/v3-0-dictionary-lookup.md) or [Dictionary Examples](reference/v3-0-dictionary-examples.md) operations from the Text Translation feature to display alternative translations from or to English and examples of words in context. +**Dictionary:** To display alternative translations from or to English and examples of words in context, use the [Dictionary Lookup](reference/v3-0-dictionary-lookup.md) or [Dictionary Examples](reference/v3-0-dictionary-examples.md) operations from the Text Translation feature. ## Translation -|Afrikaans|af|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Albanian|sq|Γ£ö|Γ£ö| |Γ£ö| | -|Amharic|am|Γ£ö|Γ£ö| |Γ£ö| | -|Arabic|ar|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Armenian|hy|Γ£ö|Γ£ö| |Γ£ö| | -|Assamese|as|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Azerbaijani (Latin)|az|Γ£ö|Γ£ö| |Γ£ö| | -|Bangla|bn|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Bashkir|ba|Γ£ö|Γ£ö| |Γ£ö| | -|Basque|eu|Γ£ö|Γ£ö| |Γ£ö| | -|Bhojpuri|bho|Γ£ö|Γ£ö | | | | +|Afrikaans|`af`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Albanian|`sq`|Γ£ö|Γ£ö| |Γ£ö| | +|Amharic|`am`|Γ£ö|Γ£ö| |Γ£ö| | +|Arabic|`ar`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Armenian|`hy`|Γ£ö|Γ£ö| |Γ£ö| | +|Assamese|`as`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Azerbaijani (Latin)|`az`|Γ£ö|Γ£ö| |Γ£ö| | +|Bangla|`bn`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Bashkir|`ba`|Γ£ö|Γ£ö| |Γ£ö| | +|Basque|`eu`|Γ£ö|Γ£ö| |Γ£ö| | +|Bhojpuri|`bho`|Γ£ö|Γ£ö | | | | |Bodo|brx |Γ£ö|Γ£ö | | | |-|Bosnian (Latin)|bs|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Bulgarian|bg|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Cantonese (Traditional)|yue|Γ£ö|Γ£ö| |Γ£ö| | -|Catalan|ca|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Chinese (Literary)|lzh|Γ£ö|Γ£ö| | | | -|Chinese Simplified|zh-Hans|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Chinese Traditional|zh-Hant|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|chiShona|sn|Γ£ö|Γ£ö| | | | -|Croatian|hr|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Czech|cs|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Danish|da|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Dari|prs|Γ£ö|Γ£ö| |Γ£ö| | -|Divehi|dv|Γ£ö|Γ£ö| |Γ£ö| | -|Dogri|doi|Γ£ö| | | | | -|Dutch|nl|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|English|en|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Estonian|et|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Faroese|fo|Γ£ö|Γ£ö| |Γ£ö| | -|Fijian|fj|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Filipino|fil|Γ£ö|Γ£ö|Γ£ö| | | -|Finnish|fi|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|French|fr|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|French (Canada)|fr-ca|Γ£ö|Γ£ö| | | | -|Galician|gl|Γ£ö|Γ£ö| |Γ£ö| | -|Georgian|ka|Γ£ö|Γ£ö| |Γ£ö| | -|German|de|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Greek|el|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Gujarati|gu|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Haitian Creole|ht|Γ£ö|Γ£ö| |Γ£ö|Γ£ö| -|Hausa|ha|Γ£ö|Γ£ö| |Γ£ö| | -|Hebrew|he|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Hindi|hi|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Hmong Daw (Latin)|mww|Γ£ö|Γ£ö| |Γ£ö|Γ£ö| -|Hungarian|hu|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Icelandic|is|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Igbo|ig|Γ£ö|Γ£ö| |Γ£ö| | -|Indonesian|id|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Inuinnaqtun|ikt|Γ£ö|Γ£ö| | | | -|Inuktitut|iu|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Inuktitut (Latin)|iu-Latn|Γ£ö|Γ£ö| |Γ£ö| | -|Irish|ga|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Italian|it|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Japanese|ja|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Kannada|kn|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Kashmiri|ks|Γ£ö|Γ£ö | | | | -|Kazakh|kk|Γ£ö|Γ£ö| |Γ£ö| | -|Khmer|km|Γ£ö|Γ£ö| |Γ£ö| | -|Kinyarwanda|rw|Γ£ö|Γ£ö| |Γ£ö| | -|Klingon|tlh-Latn|Γ£ö| | |Γ£ö|Γ£ö| -|Klingon 
(plqaD)|tlh-Piqd|Γ£ö| | |Γ£ö| | -|Konkani|gom|Γ£ö|Γ£ö| | | | -|Korean|ko|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Kurdish (Central)|ku|Γ£ö|Γ£ö| |Γ£ö| | -|Kurdish (Northern)|kmr|Γ£ö|Γ£ö| | | | -|Kyrgyz (Cyrillic)|ky|Γ£ö|Γ£ö| |Γ£ö| | -|Lao|lo|Γ£ö|Γ£ö| |Γ£ö| | -|Latvian|lv|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Lithuanian|lt|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Lingala|ln|Γ£ö|Γ£ö| | | | -|Lower Sorbian|dsb|Γ£ö| | | | | -|Luganda|lug|Γ£ö|Γ£ö| | | | -|Macedonian|mk|Γ£ö|Γ£ö| |Γ£ö| | -|Maithili|mai|Γ£ö|Γ£ö| | | | -|Malagasy|mg|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Malay (Latin)|ms|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Malayalam|ml|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Maltese|mt|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Maori|mi|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Marathi|mr|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Mongolian (Cyrillic)|mn-Cyrl|Γ£ö|Γ£ö| |Γ£ö| | -|Mongolian (Traditional)|mn-Mong|Γ£ö|Γ£ö| | | | -|Myanmar|my|Γ£ö|Γ£ö| |Γ£ö| | -|Nepali|ne|Γ£ö|Γ£ö| |Γ£ö| | -|Norwegian|nb|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Nyanja|nya|Γ£ö|Γ£ö| | | | -|Odia|or|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Pashto|ps|Γ£ö|Γ£ö| |Γ£ö| | -|Persian|fa|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Polish|pl|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Portuguese (Brazil)|pt|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Portuguese (Portugal)|pt-pt|Γ£ö|Γ£ö| | | | -|Punjabi|pa|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Queretaro Otomi|otq|Γ£ö|Γ£ö| |Γ£ö| | -|Romanian|ro|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Rundi|run|Γ£ö|Γ£ö| | | | -|Russian|ru|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Samoan (Latin)|sm|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Serbian (Cyrillic)|sr-Cyrl|Γ£ö|Γ£ö| |Γ£ö| | -|Serbian (Latin)|sr-Latn|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Sesotho|st|Γ£ö|Γ£ö| | | | -|Sesotho sa Leboa|nso|Γ£ö|Γ£ö| | | | -|Setswana|tn|Γ£ö|Γ£ö| | | | -|Sindhi|sd|Γ£ö|Γ£ö| |Γ£ö| | -|Sinhala|si|Γ£ö|Γ£ö| |Γ£ö| | -|Slovak|sk|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Slovenian|sl|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Somali (Arabic)|so|Γ£ö|Γ£ö| |Γ£ö| | -|Spanish|es|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Swahili (Latin)|sw|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Swedish|sv|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Tahitian|ty|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Tamil|ta|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Tatar (Latin)|tt|Γ£ö|Γ£ö| |Γ£ö| | -|Telugu|te|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Thai|th|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Tibetan|bo|Γ£ö|Γ£ö| |Γ£ö| | -|Tigrinya|ti|Γ£ö|Γ£ö| |Γ£ö| | -|Tongan|to|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | -|Turkish|tr|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Turkmen (Latin)|tk|Γ£ö|Γ£ö| |Γ£ö| | -|Ukrainian|uk|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Upper Sorbian|hsb|Γ£ö|Γ£ö| |Γ£ö| | -|Urdu|ur|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Uyghur (Arabic)|ug|Γ£ö|Γ£ö| |Γ£ö| | -|Uzbek (Latin)|uz|Γ£ö|Γ£ö| |Γ£ö| | -|Vietnamese|vi|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Welsh|cy|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| -|Xhosa|xh|Γ£ö|Γ£ö| |Γ£ö| | -|Yoruba|yo|Γ£ö|Γ£ö| |Γ£ö| | -|Yucatec Maya|yua|Γ£ö|Γ£ö| |Γ£ö| | -|Zulu|zu|Γ£ö|Γ£ö| |Γ£ö| | +|Bosnian (Latin)|`bs`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Bulgarian|`bg`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Cantonese (Traditional)|`yue`|Γ£ö|Γ£ö| |Γ£ö| | +|Catalan|`ca`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Chinese (Literary)|`lzh`|Γ£ö|Γ£ö| | | | +|Chinese Simplified|`zh-Hans`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Chinese Traditional|`zh-Hant`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|chiShona|`sn`|Γ£ö|Γ£ö| | | | +|Croatian|`hr`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Czech|`cs`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Danish|`da`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Dari|`prs`|Γ£ö|Γ£ö| |Γ£ö| | +|Divehi|`dv`|Γ£ö|Γ£ö| |Γ£ö| | +|Dogri|`doi`|Γ£ö| | | | | +|Dutch|`nl`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|English|`en`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Estonian|`et`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Faroese|`fo`|Γ£ö|Γ£ö| |Γ£ö| | +|Fijian|`fj`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Filipino|`fil`|Γ£ö|Γ£ö|Γ£ö| | | +|Finnish|`fi`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|French|`fr`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|French (Canada)|`fr-ca`|Γ£ö|Γ£ö|Γ£ö| | | +|Galician|`gl`|Γ£ö|Γ£ö| |Γ£ö| | +|Georgian|`ka`|Γ£ö|Γ£ö| |Γ£ö| | +|German|`de`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Greek|`el`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Gujarati|`gu`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Haitian Creole|`ht`|Γ£ö|Γ£ö| |Γ£ö|Γ£ö| +|Hausa|`ha`|Γ£ö|Γ£ö| 
|Γ£ö| | +|Hebrew|`he`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Hindi|`hi`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Hmong Daw (Latin)|`mww`|Γ£ö|Γ£ö| |Γ£ö|Γ£ö| +|Hungarian|`hu`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Icelandic|`is`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Igbo|`ig`|Γ£ö|Γ£ö| |Γ£ö| | +|Indonesian|`id`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Inuinnaqtun|`ikt`|Γ£ö|Γ£ö| | | | +|Inuktitut|`iu`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Inuktitut (Latin)|`iu-Latn`|Γ£ö|Γ£ö| |Γ£ö| | +|Irish|`ga`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Italian|`it`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Japanese|`ja`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Kannada|`kn`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Kashmiri|`ks`|Γ£ö|Γ£ö | | | | +|Kazakh|`kk`|Γ£ö|Γ£ö| |Γ£ö| | +|Khmer|`km`|Γ£ö|Γ£ö| |Γ£ö| | +|Kinyarwanda|`rw`|Γ£ö|Γ£ö| |Γ£ö| | +|Klingon|`tlh-Latn`|Γ£ö| | |Γ£ö|Γ£ö| +|Klingon (plqaD)|`tlh-Piqd`|Γ£ö| | |Γ£ö| | +|Konkani|`gom`|Γ£ö|Γ£ö| | | | +|Korean|`ko`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Kurdish (Central)|`ku`|Γ£ö|Γ£ö| |Γ£ö| | +|Kurdish (Northern)|`kmr`|Γ£ö|Γ£ö| | | | +|Kyrgyz (Cyrillic)|`ky`|Γ£ö|Γ£ö| |Γ£ö| | +|`Lao`|`lo`|Γ£ö|Γ£ö| |Γ£ö| | +|Latvian|`lv`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Lithuanian|`lt`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Lingala|`ln`|Γ£ö|Γ£ö| | | | +|Lower Sorbian|`dsb`|Γ£ö| | | | | +|Luganda|`lug`|Γ£ö|Γ£ö| | | | +|Macedonian|`mk`|Γ£ö|Γ£ö| |Γ£ö| | +|Maithili|`mai`|Γ£ö|Γ£ö| | | | +|Malagasy|`mg`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Malay (Latin)|`ms`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Malayalam|`ml`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Maltese|`mt`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Maori|`mi`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Marathi|`mr`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Mongolian (Cyrillic)|`mn-Cyrl`|Γ£ö|Γ£ö| |Γ£ö| | +|Mongolian (Traditional)|`mn-Mong`|Γ£ö|Γ£ö| | | | +|Myanmar|`my`|Γ£ö|Γ£ö| |Γ£ö| | +|Nepali|`ne`|Γ£ö|Γ£ö| |Γ£ö| | +|Norwegian|`nb`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Nyanja|`nya`|Γ£ö|Γ£ö| | | | +|Odia|`or`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Pashto|`ps`|Γ£ö|Γ£ö| |Γ£ö| | +|Persian|`fa`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Polish|`pl`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Portuguese (Brazil)|`pt`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Portuguese (Portugal)|pt-pt|Γ£ö|Γ£ö|Γ£ö| | | +|Punjabi|`pa`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Queretaro Otomi|`otq`|Γ£ö|Γ£ö| |Γ£ö| | +|Romanian|`ro`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Rundi|`run`|Γ£ö|Γ£ö| | | | +|Russian|`ru`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Samoan (Latin)|`sm`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Serbian (Cyrillic)|`sr-Cyrl`|Γ£ö|Γ£ö| |Γ£ö| | +|Serbian (Latin)|`sr-Latn`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Sesotho|`st`|Γ£ö|Γ£ö| | | | +|Sesotho sa Leboa|`nso`|Γ£ö|Γ£ö| | | | +|Setswana|`tn`|Γ£ö|Γ£ö| | | | +|Sindhi|`sd`|Γ£ö|Γ£ö| |Γ£ö| | +|Sinhala|`si`|Γ£ö|Γ£ö| |Γ£ö| | +|Slovak|`sk`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Slovenian|`sl`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Somali (Arabic)|`so`|Γ£ö|Γ£ö| |Γ£ö| | +|Spanish|`es`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Swahili (Latin)|`sw`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Swedish|`sv`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Tahitian|`ty`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Tamil|`ta`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Tatar (Latin)|`tt`|Γ£ö|Γ£ö| |Γ£ö| | +|Telugu|`te`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Thai|`th`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Tibetan|`bo`|Γ£ö|Γ£ö| |Γ£ö| | +|Tigrinya|`ti`|Γ£ö|Γ£ö| |Γ£ö| | +|Tongan|`to`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| | +|Turkish|`tr`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Turkmen (Latin)|`tk`|Γ£ö|Γ£ö| |Γ£ö| | +|Ukrainian|`uk`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Upper Sorbian|`hsb`|Γ£ö|Γ£ö| |Γ£ö| | +|Urdu|`ur`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Uyghur (Arabic)|`ug`|Γ£ö|Γ£ö| |Γ£ö| | +|Uzbek (Latin)|`uz`|Γ£ö|Γ£ö| |Γ£ö| | +|Vietnamese|`vi`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Welsh|`cy`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö| +|Xhosa|`xh`|Γ£ö|Γ£ö| |Γ£ö| | +|Yoruba|`yo`|Γ£ö|Γ£ö| |Γ£ö| | +|Yucatec Maya|`yua`|Γ£ö|Γ£ö| |Γ£ö| | +|Zulu|`zu`|Γ£ö|Γ£ö| |Γ£ö| | ## Document Translation: scanned PDF support -|Amharic|`am`|No|No| +|Amharic|`am`|Yes|No| |Arabic|`ar`|Yes|Yes|-|Armenian|`hy`|No|No| -|Assamese|`as`|No|No| +|Armenian|`hy`|Yes|No| +|Assamese|`as`|Yes|No| |Azerbaijani 
(Latin)|`az`|Yes|Yes|-|Bangla|`bn`|No|No| -|Bashkir|`ba`|No|Yes| +|Bangla|`bn`|Yes|No| +|Bashkir|`ba`|Yes|Yes| |Basque|`eu`|Yes|Yes| |Bosnian (Latin)|`bs`|Yes|Yes| |Bulgarian|`bg`|Yes|Yes|-|Cantonese (Traditional)|`yue`|No|Yes| +|Cantonese (Traditional)|`yue`|Yes|Yes| |Catalan|`ca`|Yes|Yes|-|Chinese (Literary)|`lzh`|No|Yes| +|Chinese (Literary)|`lzh`|Yes|Yes| |Chinese Simplified|`zh-Hans`|Yes|Yes| |Chinese Traditional|`zh-Hant`|Yes|Yes| |Croatian|`hr`|Yes|Yes| |Czech|`cs`|Yes|Yes| |Danish|`da`|Yes|Yes|-|Dari|`prs`|No|No| -|Divehi|`dv`|No|No| +|Dari|`prs`|Yes|No| +|Divehi|`dv`|Yes|No| |Dutch|`nl`|Yes|Yes| |English|`en`|Yes|Yes| |Estonian|`et`|Yes|Yes|-|Georgian|`ka`|No|No| +|Georgian|`ka`|Yes|No| |German|`de`|Yes|Yes|-|Greek|`el`|No|No| -|Gujarati|`gu`|No|No| +|Greek|`el`|Yes|No| +|Gujarati|`gu`|Yes|No| |Haitian Creole|`ht`|Yes|Yes|-|Hebrew|`he`|No|No| +|Hebrew|`he`|Yes|No| |Hindi|`hi`|Yes|Yes| |Hmong Daw (Latin)|`mww`|Yes|Yes| |Hungarian|`hu`|Yes|Yes| |Icelandic|`is`|Yes|Yes| |Indonesian|`id`|Yes|Yes| |Interlingua|`ia`|Yes|Yes|-|Inuinnaqtun|`ikt`|No|Yes| -|Inuktitut|`iu`|No|No| +|Inuinnaqtun|`ikt`|Yes|Yes| +|Inuktitut|`iu`|Yes|No| |Inuktitut (Latin)|`iu-Latn`|Yes|Yes| |Irish|`ga`|Yes|Yes| |Italian|`it`|Yes|Yes| |Japanese|`ja`|Yes|Yes|-|Kannada|`kn`|No|Yes| +|Kannada|`kn`|Yes|Yes| |Kazakh (Cyrillic)|`kk`, `kk-cyrl`|Yes|Yes| |Kazakh (Latin)|`kk-latn`|Yes|Yes|-|Khmer|`km`|No|No| -|Klingon|`tlh-Latn`|No|No| -|Klingon (plqaD)|`tlh-Piqd`|No|No| +|Khmer|`km`|Yes|No| +|Klingon|`tlh-Latn`|Yes|No| +|Klingon (plqaD)|`tlh-Piqd`|Yes|No| |Korean|`ko`|Yes|Yes|-|Kurdish (Arabic) (Central)|`ku-arab`,`ku`|No|No| +|Kurdish (Arabic) (Central)|`ku-arab`,`ku`|Yes|No| |Kurdish (Latin) (Northern)|`ku-latn`, `kmr`|Yes|Yes| |Kyrgyz (Cyrillic)|`ky`|Yes|Yes|-|Lao|`lo`|No|No| -|Latvian|`lv`|No|Yes| +|`Lao`|`lo`|Yes|No| +|Latvian|`lv`|Yes|Yes| |Lithuanian|`lt`|Yes|Yes|-|Macedonian|`mk`|No|Yes| -|Malagasy|`mg`|No|Yes| +|Macedonian|`mk`|Yes|Yes| +|Malagasy|`mg`|Yes|Yes| |Malay (Latin)|`ms`|Yes|Yes|-|Malayalam|`ml`|No|Yes| +|Malayalam|`ml`|Yes|Yes| |Maltese|`mt`|Yes|Yes| |Maori|`mi`|Yes|Yes| |Marathi|`mr`|Yes|Yes| |Mongolian (Cyrillic)|`mn-Cyrl`|Yes|Yes|-|Mongolian (Traditional)|`mn-Mong`|No|No| -|Myanmar (Burmese)|`my`|No|No| +|Mongolian (Traditional)|`mn-Mong`|Yes|No| +|Myanmar (Burmese)|`my`|Yes|No| |Nepali|`ne`|Yes|Yes| |Norwegian|`nb`|Yes|Yes|-|Odia|`or`|No|No| -|Pashto|`ps`|No|No| -|Persian|`fa`|No|No| +|Odia|`or`|Yes|No| +|Pashto|`ps`|Yes|No| +|Persian|`fa`|Yes|No| |Polish|`pl`|Yes|Yes| |Portuguese (Brazil)|`pt`, `pt-br`|Yes|Yes| |Portuguese (Portugal)|`pt-pt`|Yes|Yes|-|Punjabi|`pa`|No|Yes| -|Queretaro Otomi|`otq`|No|Yes| +|Punjabi|`pa`|Yes|Yes| +|Queretaro Otomi|`otq`|Yes|Yes| |Romanian|`ro`|Yes|Yes| |Russian|`ru`|Yes|Yes| |Samoan (Latin)|`sm`|Yes|Yes|-|Serbian (Cyrillic)|`sr-Cyrl`|No|Yes| +|Serbian (Cyrillic)|`sr-Cyrl`|Yes|Yes| |Serbian (Latin)|`sr`, `sr-latn`|Yes|Yes| |Slovak|`sk`|Yes|Yes| |Slovenian|`sl`|Yes|Yes|-|Somali|`so`|No|Yes| +|Somali|`so`|Yes|Yes| |Spanish|`es`|Yes|Yes| |Swahili (Latin)|`sw`|Yes|Yes| |Swedish|`sv`|Yes|Yes|-|Tahitian|`ty`|No|Yes| -|Tamil|`ta`|No|Yes| +|Tahitian|`ty`|Yes|Yes| +|Tamil|`ta`|Yes|Yes| |Tatar (Latin)|`tt`|Yes|Yes|-|Telugu|`te`|No|Yes| -|Thai|`th`|No|No| -|Tibetan|`bo`|No|No| -|Tigrinya|`ti`|No|No| +|Telugu|`te`|Yes|Yes| +|Thai|`th`|Yes|No| +|Tibetan|`bo`|Yes|No| +|Tigrinya|`ti`|Yes|No| |Tongan|`to`|Yes|Yes| |Turkish|`tr`|Yes|Yes| |Turkmen (Latin)|`tk`|Yes|Yes|-|Ukrainian|`uk`|No|Yes| +|Ukrainian|`uk`|Yes|Yes| |Upper Sorbian|`hsb`|Yes|Yes|-|Urdu|`ur`|No|No| -|Uyghur 
(Arabic)|`ug`|No|No| +|Urdu|`ur`|Yes|No| +|Uyghur (Arabic)|`ug`|Yes|No| |Uzbek (Latin)|`uz`|Yes|Yes|-|Vietnamese|`vi`|No|Yes| +|Vietnamese|`vi`|Yes|Yes| |Welsh|`cy`|Yes|Yes| |Yucatec Maya|`yua`|Yes|Yes| |Zulu|`zu`|Yes|Yes| |
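To illustrate the `Translate` operation that the table above applies to, here's a minimal sketch of a Text Translation v3.0 request, assuming a `TRANSLATOR_KEY` environment variable (and, for a regional resource, `TRANSLATOR_REGION`); the target language codes are taken from the table, and everything else is a placeholder.

```python
import os
import uuid
import requests

# Global Translator endpoint; key, region, and target languages are placeholders.
endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "from": "en", "to": ["cy", "mi"]}  # Welsh and Maori, from the table above
headers = {
    "Ocp-Apim-Subscription-Key": os.environ["TRANSLATOR_KEY"],
    "Ocp-Apim-Subscription-Region": os.environ.get("TRANSLATOR_REGION", "westus2"),
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),  # optional trace ID for troubleshooting
}
body = [{"Text": "Hello, world!"}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
response.raise_for_status()
for translation in response.json()[0]["translations"]:
    print(translation["to"], translation["text"])
```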
ai-services | Use Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/use-key-vault.md | -Use this article to learn how to develop Azure AI services applications securely by using [Azure Key Vault](../key-vault/general/overview.md). +Learn how to develop Azure AI services applications securely by using [Azure Key Vault](../key-vault/general/overview.md). -Key Vault reduces the chances that secrets may be accidentally leaked, because you won't store security information in your application. +Key Vault reduces the risk that secrets may be accidentally leaked, because you avoid storing security information in your application. ## Prerequisites |
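As a minimal sketch of the pattern this article describes, the following assumes a Key Vault named `my-keyvault-name` with a secret called `AIServicesKey` that holds the AI services resource key; both names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Vault URL and secret name are placeholders for illustration.
vault_url = "https://my-keyvault-name.vault.azure.net"
credential = DefaultAzureCredential()  # uses your signed-in identity (CLI, managed identity, etc.)

client = SecretClient(vault_url=vault_url, credential=credential)
ai_services_key = client.get_secret("AIServicesKey").value  # secret holds the resource key

# The key is now available to the application without being stored in code or config files.
print(len(ai_services_key))
```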
ai-studio | Rbac Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/rbac-ai-studio.md | In this article, you learn how to manage access (authorization) to an Azure AI h > Applying some roles might limit UI functionality in Azure AI Studio for other users. For example, if a user's role does not have the ability to create a compute instance, the option to create a compute instance will not be available in studio. This behavior is expected, and prevents the user from attempting operations that would return an access denied error. ## Azure AI hub resource vs Azure AI project+ In the Azure AI Studio, there are two levels of access: the Azure AI hub resource and the Azure AI project. The resource is home to the infrastructure (including virtual network setup, customer-managed keys, managed identities, and policies) as well as where you configure your Azure AI services. Azure AI hub resource access can allow you to modify the infrastructure, create new Azure AI hub resources, and create projects. Azure AI projects are a subset of the Azure AI hub resource that act as workspaces that allow you to build and deploy AI systems. Within a project you can develop flows, deploy models, and manage project assets. Project access lets you develop AI end-to-end while taking advantage of the infrastructure setup on the Azure AI hub resource. ++One of the key benefits of the AI hub and AI project relationship is that developers can create their own projects that inherit the AI hub security settings. You might also have developers who are contributors to a project, and can't create new projects. + ## Default roles for the Azure AI hub resource The Azure AI Studio has built-in roles that are available by default. In addition to the Reader, Contributor, and Owner roles, the Azure AI Studio has a new role called Azure AI Developer. This role can be assigned to enable users to create connections, compute, and projects, but not let them create new Azure AI hub resources or change permissions of the existing Azure AI hub resource. Here's a table of the built-in roles and their permissions for the Azure AI hub The key difference between Contributor and Azure AI Developer is the ability to make new Azure AI hub resources. If you don't want users to make new Azure AI hub resources (due to quota, cost, or just managing how many Azure AI hub resources you have), assign the AI Developer role. -Only the Owner and Contributor roles allow you to make an Azure AI hub resource. At this time, custom roles won't grant you permission to make Azure AI hub resources. +Only the Owner and Contributor roles allow you to make an Azure AI hub resource. At this time, custom roles can't grant you permission to make Azure AI hub resources. The full set of permissions for the new "Azure AI Developer" role are as follows: Here's a table of the built-in roles and their permissions for the Azure AI proj | Azure AI Developer | User can perform most actions, including create deployments, but can't assign permissions to project users. | | Reader | Read only access to the Azure AI project. | -When a user gets access to a project, two more roles are automatically assigned to the project user. The first role is Reader on the Azure AI hub resource. The second role is the Inference Deployment Operator role, which allows the user to create deployments on the resource group that the project is in. 
This role is composed of these two permissions: ```"Microsoft.Authorization/*/read"``` and ```"Microsoft.Resources/deployments/*"```. +When a user is granted access to a project (for example, through the AI Studio permission management), two more roles are automatically assigned to the user. The first role is Reader on the Azure AI hub resource. The second role is the Inference Deployment Operator role, which allows the user to create deployments on the resource group that the project is in. This role is composed of these two permissions: ```"Microsoft.Authorization/*/read"``` and ```"Microsoft.Resources/deployments/*"```. In order to complete end-to-end AI development and deployment, users only need these two autoassigned roles and either the Contributor or Azure AI Developer role on a *project*. +The minimum permissions needed to create an AI project resource is a role that has the allowed action of `Microsoft.MachineLearningServices/workspaces/hubs/join` on the AI hub resource. The Azure AI Developer built-in role has this permission. ++## Dependency service RBAC permissions ++The Azure AI hub resource has dependencies on other Azure services. The following table lists the permissions required for these services when you create an Azure AI hub resource. These permissions are needed by the person that creates the AI hub. They aren't needed by the person who creates an AI project from the AI hub. ++| Permission | Purpose | +||-| +| `Microsoft.Storage/storageAccounts/write` | Create a storage account with the specified parameters or update the properties or tags or adds custom domain for the specified storage account. | +| `Microsoft.KeyVault/vaults/write` | Create a new key vault or updates the properties of an existing key vault. Certain properties might require more permissions. | +| `Microsoft.CognitiveServices/accounts/write` | Write API Accounts. | +| `Microsoft.Insights/Components/Write` | Write to an application insights component configuration. | +| `Microsoft.OperationalInsights/workspaces/write` | Create a new workspace or links to an existing workspace by providing the customer ID from the existing workspace. | ++ ## Sample enterprise RBAC setup-Below is an example of how to set up role-based access control for your Azure AI Studio for an enterprise. +The following is an example of how to set up role-based access control for your Azure AI Studio for an enterprise. | Persona | Role | Purpose | | | | | | IT admin | Owner of the Azure AI hub resource | The IT admin can ensure the Azure AI hub resource is set up to their enterprise standards and assign managers the Contributor role on the resource if they want to enable managers to make new Azure AI hub resources or they can assign managers the Azure AI Developer role on the resource to not allow for new Azure AI hub resource creation. |-| Managers | Contributor or Azure AI Developer on the Azure AI hub resource | Managers can create projects for their team and create shared resources (ex: compute and connections) for their group at the Azure AI hub resource level. | -| Managers | Owner of the Azure AI Project | When managers create a project, they become the project owner. This allows them to add their team/developers to the project. Their team/developers can be added as Contributors or Azure AI Developers to allow them to develop in the project. | +| Managers | Contributor or Azure AI Developer on the Azure AI hub resource | Managers can manage the AI hub, audit compute resources, audit connections, and create shared connections. 
| +| Team lead/Lead developer | Azure AI Developer on the Azure AI hub resource | Lead developers can create projects for their team and create shared resources (ex: compute and connections) at the Azure AI hub resource level. After project creation, project owners can invite other members. | | Team members/developers | Contributor or Azure AI Developer on the Azure AI Project | Developers can build and deploy AI models within a project and create assets that enable development such as computes and connections. | ## Access to resources created outside of the Azure AI hub resource |
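For teams that script the role setup described above, here's a minimal sketch of assigning the Azure AI Developer role at project scope with the Azure SDK for Python; the subscription, resource group, project, and principal values are placeholders, and assigning roles from the Azure portal or AI Studio works just as well.

```python
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

# Subscription, resource group, project name, and user object ID are placeholders.
subscription_id = "00000000-0000-0000-0000-000000000000"
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/my-rg"
    "/providers/Microsoft.MachineLearningServices/workspaces/my-ai-project"
)
developer_object_id = "11111111-1111-1111-1111-111111111111"

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Look up the built-in "Azure AI Developer" role definition at the target scope.
role_definition = next(
    client.role_definitions.list(scope, filter="roleName eq 'Azure AI Developer'")
)

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names must be unique GUIDs
    RoleAssignmentCreateParameters(
        role_definition_id=role_definition.id,
        principal_id=developer_object_id,
        principal_type="User",
    ),
)
```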
ai-studio | Deploy Models Llama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md | description: Learn how to deploy Llama 2 family of large language models with Az Previously updated : 12/11/2023 Last updated : 02/09/2024 ++#This functionality is also available in Azure Machine Learning: /azure/machine-learning/how-to-deploy-models-llama.md # How to deploy Llama 2 family of large language models with Azure AI Studio +In this article, you learn about the Llama 2 family of large language models (LLMs). You also learn how to use Azure AI Studio to deploy models from this set either as a service with pay-as you go billing or with hosted infrastructure in real-time endpoints. -The Llama 2 family of large language models (LLMs) is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with Reinforcement Learning from Human Feedback (RLHF), called Llama-2-chat. +The Llama 2 family of LLMs is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with reinforcement learning from human feedback (RLHF), called Llama-2-chat. -Llama 2 can be deployed as a service with pay-as-you-go billing or with hosted infrastructure in real-time endpoints. ## Deploy Llama 2 models with pay-as-you-go Certain models in the model catalog can be deployed as a service with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription. -Llama 2 models deployed as a service with pay-as-you-go are offered by Meta AI through the Azure Marketplace and they might add more terms of use and pricing. +Llama 2 models deployed as a service with pay-as-you-go are offered by Meta AI through Microsoft Azure Marketplace, and they might add more terms of use and pricing. -> [!NOTE] -> Pay-as-you-go offering is only available in projects created in East US 2 and West US 3 regions. +### Azure Marketplace model offerings -### Offerings --The following models are available for Llama 2 when deployed as a service with pay-as-you-go: +The following models are available in Azure Marketplace for Llama 2 when deployed as a service with pay-as-you-go: * Meta Llama-2-7B (preview) * Meta Llama 2 7B-Chat (preview) The following models are available for Llama 2 when deployed as a service with p If you need to deploy a different model, [deploy it to real-time endpoints](#deploy-llama-2-models-to-real-time-endpoints) instead. -### Create a new deployment +### Prerequisites -To create a deployment: +- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin. +- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md). -1. Choose a model you want to deploy from the Azure AI Studio [model catalog](../how-to/model-catalog.md). Alternatively, you can initiate deployment by selecting **+ Create** from the **Deployments** options in the **Build** tab of your project. 
+ > [!IMPORTANT] + > Pay-as-you-go model deployment offering is only available in AI hubs created in **East US 2** and **West US 3** regions. -1. On the detail page, select **Deploy** and then **Pay-as-you-go**. +- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio. +- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions: - :::image type="content" source="../media/deploy-monitor/llama/deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." lightbox="../media/deploy-monitor/llama/deploy-pay-as-you-go.png"::: + - On the Azure subscriptionΓÇöto subscribe the Azure AI project to the Azure Marketplace offering, once for each project, per offering: + - `Microsoft.MarketplaceOrdering/agreements/offers/plans/read` + - `Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action` + - `Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read` + - `Microsoft.Marketplace/offerTypes/publishers/offers/plans/agreements/read` + - `Microsoft.SaaS/register/action` + + - On the resource groupΓÇöto create and use the SaaS resource: + - `Microsoft.SaaS/resources/read` + - `Microsoft.SaaS/resources/write` + + - On the Azure AI projectΓÇöto deploy endpoints (the Azure AI Developer role contains these permissions already): + - `Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*` + - `Microsoft.MachineLearningServices/workspaces/serverlessEndpoints/*` -1. Select the project where you want to create a deployment. + For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md). -1. On the deployment wizard, you see the option to explore the more terms and conditions applied to the selected model and its pricing. Select **Azure Marketplace Terms** to learn about it. - :::image type="content" source="../media/deploy-monitor/llama/deploy-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="../media/deploy-monitor/llama/deploy-marketplace-terms.png"::: +### Create a new deployment -1. If this is the first time you deployed the model in the project, you have to sign up your project for the particular offering from the Azure Marketplace. Each project has its own connection to the marketplace's offering, which, allows you to control and monitor spending per project. Select **Subscribe and Deploy**. +To create a deployment: - > [!NOTE] - > Subscribing a project to a particular offering from the Azure Marketplace requires **Contributor** or **Owner** access at the subscription level where the project is created. +1. Sign in to [Azure AI Studio](https://ai.azure.com). +1. Choose the model you want to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models). -1. Once you sign up the project for the offering, subsequent deployments don't require signing up (neither subscription-level permissions). If this is your case, select **Continue to deploy**. + Alternatively, you can initiate deployment by starting from your project in AI Studio. From the **Build** tab of your project, select the **Deployments** option, then select **+ Create**. 
- :::image type="content" source="../media/deploy-monitor/llama/deploy-pay-as-you-go-project.png" alt-text="A screenshot showing a project that is already subscribed to the offering." lightbox="../media/deploy-monitor/llama/deploy-pay-as-you-go-project.png"::: +1. On the model's **Details** page, select **Deploy** and then **Pay-as-you-go**. -1. Give the deployment a name. Such name is part of the deployment API URL, which requires to be unique on each Azure region. + :::image type="content" source="../media/deploy-monitor/llama/deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." lightbox="../media/deploy-monitor/llama/deploy-pay-as-you-go.png"::: - :::image type="content" source="../media/deploy-monitor/llama/deployment-name.png" alt-text="A screenshot showing how to indicate the name of the deployment you want to create." lightbox="../media/deploy-monitor/llama/deployment-name.png"::: +1. Select the project in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **West US 3** region. +1. On the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model. +1. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering (for example, Llama-2-70b) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. -1. Select **Deploy**. + > [!NOTE] + > Subscribing a project to a particular Azure Marketplace offering (in this case, Llama-2-70b) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites). -1. Once the deployment is ready, you're redirected to the deployments page. + :::image type="content" source="../media/deploy-monitor/llama/deploy-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="../media/deploy-monitor/llama/deploy-marketplace-terms.png"::: -1. Once your deployment is ready, you can select **Open in playground** to start interacting with the model. +1. Once you sign up the project for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent deployments. If this scenario applies to you, select **Continue to deploy**. -1. You can also take note of the **Endpoint** URL and the **Secret** to call the deployment and generate completions. + :::image type="content" source="../media/deploy-monitor/llama/deploy-pay-as-you-go-project.png" alt-text="A screenshot showing a project that is already subscribed to the offering." lightbox="../media/deploy-monitor/llama/deploy-pay-as-you-go-project.png"::: -1. 
Additionally, you can find the deployment details, URL, and access keys navigating to the tab **Build** > **Components** > **Deployments**. +1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region. ++ :::image type="content" source="../media/deploy-monitor/llama/deployment-name.png" alt-text="A screenshot showing how to indicate the name of the deployment you want to create." lightbox="../media/deploy-monitor/llama/deployment-name.png"::: ++1. Select **Deploy**. Wait until the deployment is ready and you're redirected to the Deployments page. +1. Select **Open in playground** to start interacting with the model. +1. You can return to the Deployments page, select the deployment, and note the endpoint's **Target** URL and the Secret **Key**, which you can use to call the deployment and generate completions. +1. You can always find the endpoint's details, URL, and access keys by navigating to the **Build** tab and selecting **Deployments** from the Components section. To learn about billing for Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Llama 2 models deployed as a service](#cost-and-quota-considerations-for-llama-2-models-deployed-as-a-service). ### Consume Llama 2 models as a service -Models deployed as a service can be consumed using either the chat or the completions API, depending on the type of model you have deployed. +Models deployed as a service can be consumed using either the chat or the completions API, depending on the type of model you deployed. 1. On the **Build** page, select **Deployments**. Models deployed as a service can be consumed using either the chat or the comple 1. Select **Open in playground**. -1. Select **View code** and copy the **Endpoint** URL and the **Key** token values. +1. Select **View code** and copy the **Endpoint** URL and the **Key** value. ++1. Make an API request based on the type of model you deployed. ++ - For completions models, such as `Llama-2-7b`, use the [`/v1/completions`](#completions-api) API. + - For chat models, such as `Llama-2-7b-chat`, use the [`/v1/chat/completions`](#chat-api) API. -1. Make an API request depending on the type of model you deployed. For completions models such as `Llama-2-7b` use the [`/v1/completions`](#completions-api) API, for chat models such as `Llama-2-7b-chat` use the [`/v1/chat/completions`](#chat-api) API. See the [reference](#reference-for-llama-2-models-deployed-as-a-service) section for more details with examples. + For more information on using the APIs, see the [reference](#reference-for-llama-2-models-deployed-as-a-service) section. ### Reference for Llama 2 models deployed as a service Payload is a JSON formatted string containing the following parameters: | Key | Type | Default | Description | ||--||-|-| `prompt` | `string` | No default. This must be specified. | The prompt to send to the model. | +| `prompt` | `string` | No default. This value must be specified. | The prompt to send to the model. | | `stream` | `boolean` | `False` | Streaming allows the generated tokens to be sent as data-only server-sent events whenever they become available. | | `max_tokens` | `integer` | `16` | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` can't exceed the model's context length. 
|-| `top_p` | `float` | `1` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering it or `temperature` but not both. | -| `temperature` | `float` | `1` | The sampling temperature to use, between 0 and 2. Higher values mean the model samples more broadly the distribution of tokens. Zero means greedy sampling. It's recommend altering this or `top_p` but not both. | -| `n` | `integer` | `1` | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. | +| `top_p` | `float` | `1` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering `top_p` or `temperature`, but not both. | +| `temperature` | `float` | `1` | The sampling temperature to use, between 0 and 2. Higher values mean the model samples more broadly the distribution of tokens. Zero means greedy sampling. We recommend altering this or `top_p`, but not both. | +| `n` | `integer` | `1` | How many completions to generate for each prompt. <br>Note: Because this parameter generates many completions, it can quickly consume your token quota. | | `stop` | `array` | `null` | String or a list of strings containing the word where the API stops generating further tokens. The returned text won't contain the stop sequence. |-| `best_of` | `integer` | `1` | Generates best_of completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return ΓÇô best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota.| -| `logprobs` | `integer` | `null` | A number indicating to include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 10, the API returns a list of the 10 most likely tokens. the API always returns the logprob of the sampled token, so there might be up to logprobs+1 elements in the response. | +| `best_of` | `integer` | `1` | Generates `best_of` completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to returnΓÇô`best_of` must be greater than `n`. <br>Note: Because this parameter generates many completions, it can quickly consume your token quota.| +| `logprobs` | `integer` | `null` | A number indicating to include the log probabilities on the `logprobs` most likely tokens and the chosen tokens. For example, if `logprobs` is 10, the API returns a list of the 10 most likely tokens. the API always returns the logprob of the sampled token, so there might be up to `logprobs`+1 elements in the response. | | `presence_penalty` | `float` | `null` | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. 
| | `ignore_eos` | `boolean` | `True` | Whether to ignore the EOS token and continue generating tokens after the EOS token is generated. |-| `use_beam_search` | `boolean` | `False` | Whether to use beam search instead of sampling. In such case, `best_of must > 1` and `temperature` must be `0`. | -| `stop_token_ids` | `array` | `null` | List of tokens' ID that stops the generation when they're generated. The returned output contains the stop tokens unless the stop tokens are special tokens. | +| `use_beam_search` | `boolean` | `False` | Whether to use beam search instead of sampling. In such case, `best_of` must be greater than `1` and `temperature` must be `0`. | +| `stop_token_ids` | `array` | `null` | List of IDs for tokens that, when generated, stop further token generation. The returned output contains the stop tokens unless the stop tokens are special tokens. | | `skip_special_tokens` | `boolean` | `null` | Whether to skip special tokens in the output. | #### Example The response payload is a dictionary with the following fields. | `choices` | `array` | The list of completion choices the model generated for the input prompt. | | `created` | `integer` | The Unix timestamp (in seconds) of when the completion was created. | | `model` | `string` | The model_id used for completion. |-| `object` | `string` | The object type, which is always "text_completion". | +| `object` | `string` | The object type, which is always `text_completion`. | | `usage` | `object` | Usage statistics for the completion request. | > [!TIP]-> In the streaming mode, for each chunk of response, `finish_reason` is always `null`, except from the last one which is terminated by a payload `[DONE]`. +> In the streaming mode, for each chunk of response, `finish_reason` is always `null`, except from the last one which is terminated by a payload `[DONE]`. -The `choice` object is a dictionary with the following fields. +The `choices` object is a dictionary with the following fields. | Key | Type | Description | ||--||-| `index` | `integer` | Choice index. When best_of > 1, the index in this array might not be in order and might not be 0 to n-1. | +| `index` | `integer` | Choice index. When `best_of` > 1, the index in this array might not be in order and might not be 0 to n-1. | | `text` | `string` | Completion result. |-| `finish_reason` | `string` | The reason the model stopped generating tokens: `stop`, model hit a natural stop point, or a provided stop sequence; `length`, if max number of tokens have been reached; `content_filter`, When RAI moderates and CMP forces moderation; `content_filter_error`, an error during moderation and wasn't able to make decision on the response; `null`, API response still in progress or incomplete. | +| `finish_reason` | `string` | The reason the model stopped generating tokens: <br>- `stop`: model hit a natural stop point, or a provided stop sequence. <br>- `length`: if max number of tokens have been reached. <br>- `content_filter`: When RAI moderates and CMP forces moderation. <br>- `content_filter_error`: an error during moderation and wasn't able to make decision on the response. <br>- `null`: API response still in progress or incomplete. | | `logprobs` | `object` | The log probabilities of the generated tokens in the output text. | - The `usage` object is a dictionary with the following fields. | Key | Type | Value | The `usage` object is a dictionary with the following fields. | `prompt_tokens` | `integer` | Number of tokens in the prompt. 
| | `completion_tokens` | `integer` | Number of tokens generated in the completion. | | `total_tokens` | `integer` | Total tokens. |- + The `logprobs` object is a dictionary with the following fields: | Key | Type | Value | ||-|-| | `text_offsets` | `array` of `integers` | The position or index of each token in the completion output. |-| `token_logprobs` | `array` of `float` | Selected logprobs from dictionary in top_logprobs array | -| `tokens` | `array` of `string` | Selected tokens | +| `token_logprobs` | `array` of `float` | Selected `logprobs` from dictionary in `top_logprobs` array. | +| `tokens` | `array` of `string` | Selected tokens. | | `top_logprobs` | `array` of `dictionary` | Array of dictionary. In each dictionary, the key is the token and the value is the prob. | #### Example Payload is a JSON formatted string containing the following parameters: | Key | Type | Default | Description | |--|--|--|--|-| `messages` | `string` | No default. This must be specified. | The message or history of messages to prompt the model with. | +| `messages` | `string` | No default. This value must be specified. | The message or history of messages to use to prompt the model. | | `stream` | `boolean` | `False` | Streaming allows the generated tokens to be sent as data-only server-sent events whenever they become available. | | `max_tokens` | `integer` | `16` | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` can't exceed the model's context length. |-| `top_p` | `float` | `1` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering it or `temperature` but not both. | -| `temperature` | `float` | `1` | The sampling temperature to use, between 0 and 2. Higher values mean the model samples more broadly the distribution of tokens. Zero means greedy sampling. It's recommend altering this or `top_p` but not both. | -| `n` | `integer` | `1` | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. | +| `top_p` | `float` | `1` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering `top_p` or `temperature`, but not both. | +| `temperature` | `float` | `1` | The sampling temperature to use, between 0 and 2. Higher values mean the model samples more broadly the distribution of tokens. Zero means greedy sampling. We recommend altering this or `top_p`, but not both. | +| `n` | `integer` | `1` | How many completions to generate for each prompt. <br>Note: Because this parameter generates many completions, it can quickly consume your token quota. | | `stop` | `array` | `null` | String or a list of strings containing the word where the API stops generating further tokens. The returned text won't contain the stop sequence. |-| `best_of` | `integer` | `1` | Generates best_of completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return ΓÇô best_of must be greater than n. 
Note: Because this parameter generates many completions, it can quickly consume your token quota.| -| `logprobs` | `integer` | `null` | A number indicating to include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 10, the API returns a list of the 10 most likely tokens. the API will always return the logprob of the sampled token, so there might be up to logprobs+1 elements in the response. | +| `best_of` | `integer` | `1` | Generates `best_of` completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return; `best_of` must be greater than `n`. <br>Note: Because this parameter generates many completions, it can quickly consume your token quota.| +| `logprobs` | `integer` | `null` | A number indicating to include the log probabilities on the `logprobs` most likely tokens and the chosen tokens. For example, if `logprobs` is 10, the API returns a list of the 10 most likely tokens. The API will always return the logprob of the sampled token, so there might be up to `logprobs`+1 elements in the response. | | `presence_penalty` | `float` | `null` | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | | `ignore_eos` | `boolean` | `True` | Whether to ignore the EOS token and continue generating tokens after the EOS token is generated. |-| `use_beam_search` | `boolean` | `False` | Whether to use beam search instead of sampling. In such case, `best_of must > 1` and `temperature` must be `0`. | -| `stop_token_ids` | `array` | `null` | List of token IDs that stop the generation when they are generated. The returned output contains the stop tokens unless the stop tokens are special tokens. | +| `use_beam_search` | `boolean` | `False` | Whether to use beam search instead of sampling. In such case, `best_of` must be greater than `1` and `temperature` must be `0`. | +| `stop_token_ids` | `array` | `null` | List of IDs for tokens that, when generated, stop further token generation. The returned output contains the stop tokens unless the stop tokens are special tokens.| | `skip_special_tokens` | `boolean` | `null` | Whether to skip special tokens in the output. | -The `message` object has the following fields: +The `messages` object has the following fields: | Key | Type | Value | |--|--|| The response payload is a dictionary with the following fields. | `choices` | `array` | The list of completion choices the model generated for the input messages. | | `created` | `integer` | The Unix timestamp (in seconds) of when the completion was created. | | `model` | `string` | The model_id used for completion. |-| `object` | `string` | The object type, which is always "chat.completion". | +| `object` | `string` | The object type, which is always `chat.completion`. | | `usage` | `object` | Usage statistics for the completion request. | > [!TIP]-> In the streaming mode, for each chunk of response, `finish_reason` is always `null`, except from the last one which is terminated by a payload `[DONE]`. 
In each `choices` object, the key for `messages` is changed by `delta`. -The `choice` object is a dictionary with the following fields. +The `choices` object is a dictionary with the following fields. | Key | Type | Description | ||--|--|-| `index` | `integer` | Choice index. When best_of > 1, the index in this array might not be in order and might not be 0 to n-1. | -| `message` or `delta` | `string` | Chat completion result in `message` object. When streaming mode is used, `delta` key is used. | -| `finish_reason` | `string` | The reason the model stopped generating tokens: `stop`, model hit a natural stop point, or a provided stop sequence; `length`, if max number of tokens have been reached; `content_filter`, When RAI moderates and CMP forces moderation; `content_filter_error`, an error during moderation and wasn't able to make decision on the response; `null`, API response still in progress or incomplete. | +| `index` | `integer` | Choice index. When `best_of` > 1, the index in this array might not be in order and might not be `0` to `n-1`. | +| `messages` or `delta` | `string` | Chat completion result in `messages` object. When streaming mode is used, `delta` key is used. | +| `finish_reason` | `string` | The reason the model stopped generating tokens: <br>- `stop`: model hit a natural stop point or a provided stop sequence. <br>- `length`: if max number of tokens have been reached. <br>- `content_filter`: When RAI moderates and CMP forces moderation <br>- `content_filter_error`: an error during moderation and wasn't able to make decision on the response <br>- `null`: API response still in progress or incomplete.| | `logprobs` | `object` | The log probabilities of the generated tokens in the output text. | The `usage` object is a dictionary with the following fields. | `prompt_tokens` | `integer` | Number of tokens in the prompt. | | `completion_tokens` | `integer` | Number of tokens generated in the completion. | | `total_tokens` | `integer` | Total tokens. |- + The `logprobs` object is a dictionary with the following fields: | Key | Type | Value | ||-|| | `text_offsets` | `array` of `integers` | The position or index of each token in the completion output. |-| `token_logprobs` | `array` of `float` | Selected logprobs from dictionary in top_logprobs array | -| `tokens` | `array` of `string` | Selected tokens | +| `token_logprobs` | `array` of `float` | Selected `logprobs` from dictionary in `top_logprobs` array. | +| `tokens` | `array` of `string` | Selected tokens. | | `top_logprobs` | `array` of `dictionary` | Array of dictionary. In each dictionary, the key is the token and the value is the prob. | #### Example The following is an example response: ## Deploy Llama 2 models to real-time endpoints -Llama 2 models can be deployed to real-time endpoints in AI Studio. When deployed to real-time endpoints, you can select all the details about on the infrastructure running the model including the virtual machines used to run it and the number of instances to handle the load you're expecting. Models deployed in this modality consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints. +Apart from deploying with the pay-as-you-go managed service, you can also deploy Llama 2 models to real-time endpoints in AI Studio. When deployed to real-time endpoints, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. 
Models deployed to real-time endpoints consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints. ### Create a new deployment # [Studio](#tab/azure-studio) -Follow the steps below to deploy a model such as `Llama-2-7b-chat` to a real-time endpoint in [Azure AI Studio](https://ai.azure.com). +Follow these steps to deploy a model such as `Llama-2-7b-chat` to a real-time endpoint in [Azure AI Studio](https://ai.azure.com). -1. Choose a model you want to deploy from AI Studio [model catalog](./model-catalog.md). Alternatively, you can initiate deployment by selecting **Create** from `your project`>`deployments` +1. Choose the model you want to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models). -1. On the detail page, select **Deploy** and then **Real-time endpoint**. + Alternatively, you can initiate deployment by starting from your project in AI Studio. From the **Build** tab of your project, select the **Deployments** option, then select **+ Create**. -1. Select if you want to enable **Azure AI Content Safety (preview)**. +1. On the model's **Details** page, select **Deploy** and then **Real-time endpoint**. - > [!TIP] - > Deploying Llama 2 models with Azure AI Content Safety (preview) is currently only supported using the Python SDK. + :::image type="content" source="../media/deploy-monitor/llama/deploy-real-time-endpoint.png" alt-text="A screenshot showing how to deploy a model with the real-time endpoint option." lightbox="../media/deploy-monitor/llama/deploy-real-time-endpoint.png"::: -1. Select **Proceed**. - -1. Select the project where you want to create a deployment. +1. On the **Deploy with Azure AI Content Safety (preview)** page, select **Skip Azure AI Content Safety** so that you can continue to deploy the model using the UI. > [!TIP]- > If you don't have enough quota available in the selected project, you can use the option **I want to use shared quota and I acknowledge that this endpoint will be deleted in 168 hours**. + > In general, we recommend that you select **Enable Azure AI Content Safety (Recommended)** for deployment of the Llama model. This deployment option is currently only supported using the Python SDK and it happens in a notebook. -1. Select the **Virtual machine** and the instance count you want to assign to the deployment. +1. Select **Proceed**. +1. Select the project where you want to create a deployment. -1. Select if you want to create this deployment as part of a new endpoint or an existing one. Endpoints can host multiple deployments while keeping resources configuration exclusive for each of them. Deployments under the same endpoint share the endpoint URI and its access keys. + > [!TIP] + > If you don't have enough quota available in the selected project, you can use the option **I want to use shared quota and I acknowledge that this endpoint will be deleted in 168 hours**. + +1. Select the **Virtual machine** and the **Instance count** that you want to assign to the deployment. +1. Select if you want to create this deployment as part of a new endpoint or an existing one. Endpoints can host multiple deployments while keeping resource configuration exclusive for each of them. Deployments under the same endpoint share the endpoint URI and its access keys. + 1. Indicate if you want to enable **Inferencing data collection (preview)**. -1. Select **Deploy**. +1. Select **Deploy**. After a few moments, the endpoint's **Details** page opens up. -1. 
You land on the deployment details page. Select **Consume** to obtain code samples that can be used to consume the deployed model in your application. +1. Wait for the endpoint creation and deployment to finish. This step can take a few minutes. +1. Select the **Consume** tab of the deployment to obtain code samples that can be used to consume the deployed model in your application. # [Python SDK](#tab/python) -You can use the Azure AI Generative SDK to deploy an open model. In this example, you deploy a `Llama-2-7b-chat` model. +Follow these steps to deploy an open model such as `Llama-2-7b-chat` to a real-time endpoint, using the Azure AI Generative SDK. -```python -# Import the libraries -from azure.ai.resources.client import AIClient -from azure.ai.resources.entities.deployment import Deployment -from azure.ai.resources.entities.models import PromptflowModel -from azure.identity import DefaultAzureCredential -``` +1. Import required libraries -Credential info can be found under your project settings on Azure AI Studio. You can go to Settings by selecting the gear icon on the bottom of the left navigation UI. + ```python + # Import the libraries + from azure.ai.resources.client import AIClient + from azure.ai.resources.entities.deployment import Deployment + from azure.ai.resources.entities.models import PromptflowModel + from azure.identity import DefaultAzureCredential + ``` -```python -credential = DefaultAzureCredential() -client = AIClient( - credential=credential, - subscription_id="<xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx>", - resource_group_name="<YOUR_RESOURCE_GROUP_NAME>", - project_name="<YOUR_PROJECT_NAME>", -) -``` +1. Provide your credentials. Credentials can be found under your project settings in Azure AI Studio. You can go to Settings by selecting the gear icon on the bottom of the left navigation UI. -Define the model and the deployment. `The model_id` can be found on the model card in the Azure AI Studio [model catalog](../how-to/model-catalog.md). + ```python + credential = DefaultAzureCredential() + client = AIClient( + credential=credential, + subscription_id="<xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx>", + resource_group_name="<YOUR_RESOURCE_GROUP_NAME>", + project_name="<YOUR_PROJECT_NAME>", + ) + ``` -```python -model_id = "azureml://registries/azureml/models/Llama-2-7b-chat/versions/12" -deployment_name = "my-llam27bchat-deployment" +1. Define the model and the deployment. `The model_id` can be found on the model card in the Azure AI Studio [model catalog](../how-to/model-catalog.md). -deployment = Deployment( - name=deployment_name, - model=model_id, -) -``` + ```python + model_id = "azureml://registries/azureml/models/Llama-2-7b-chat/versions/12" + deployment_name = "my-llam27bchat-deployment" + + deployment = Deployment( + name=deployment_name, + model=model_id, + ) + ``` -Deploy the model. +1. Deploy the model. ++ ```python + client.deployments.create_or_update(deployment) + ``` -```python -client.deployments.create_or_update(deployment) -``` -### Consuming Llama 2 models deployed to real-time endpoints +### Consume Llama 2 models deployed to real-time endpoints -For reference about how to invoke Llama 2 models deployed to real-time endpoints, see the model card in the Azure AI Studio [model catalog](../how-to/model-catalog.md). +For reference about how to invoke Llama 2 models deployed to real-time endpoints, see the model's card in the Azure AI Studio [model catalog](../how-to/model-catalog.md). 
Each model's card has an overview page that includes a description of the model, samples for code-based inferencing, fine-tuning, and model evaluation. ## Cost and quotas For reference about how to invoke Llama 2 models deployed to real-time endpoints Llama models deployed as a service are offered by Meta through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying or [fine-tuning the models](./fine-tune-model-llama.md). -Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine tuning, However, multiple meters are available to track each scenario independently. +Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine-tuning; however, multiple meters are available to track each scenario independently. -See [monitor costs for models offered throughout the Azure Marketplace](./costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace) to learn more about how to track costs. +For more information on how to track costs, see [monitor costs for models offered throughout the Azure Marketplace](./costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace). :::image type="content" source="../media/cost-management/marketplace/costs-model-as-service-cost-details.png" alt-text="A screenshot showing different resources corresponding to different model offers and their associated meters." lightbox="../media/cost-management/marketplace/costs-model-as-service-cost-details.png"::: -Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits don't suffice your scenarios. +Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios. ### Cost and quota considerations for Llama 2 models deployed as real-time endpoints -Deploying Llama models and inferencing with real-time endpoints can be done by consuming Virtual Machine (VM) core quota that is assigned to your subscription a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once that happens, you can request for quota increase. +For deployment and inferencing of Llama models with real-time endpoints, you consume virtual machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once you reach this limit, you can request a quota increase. ## Content filtering -Models deployed as a service with pay-as-you-go are protected by Azure AI Content Safety. 
When deployed to real-time endpoints, you can opt out for this capability. Both the prompt and completion are run through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](../concepts/content-filtering.md). +Models deployed as a service with pay-as-you-go are protected by Azure AI Content Safety. When deployed to real-time endpoints, you can opt out of this capability. With Azure AI content safety enabled, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](../concepts/content-filtering.md). ## Next steps -- Learn more about what you can do in [Azure AI Studio](../what-is-ai-studio.md)-- Get answers to frequently asked questions in the [Azure AI FAQ article](../faq.yml)+- [What is Azure AI Studio?](../what-is-ai-studio.md) +- [Fine-tune a Llama 2 model in Azure AI Studio](fine-tune-model-llama.md) +- [Azure AI FAQ article](../faq.yml) |
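The request and response tables above map directly onto a plain HTTP call. The following is a minimal sketch of invoking the chat completions route with `curl`; the host name, the `/v1/chat/completions` path, and the key shown here are placeholders inferred from the payload description, and you should take the real endpoint URL and key from your deployment's **Consume** tab.

```bash
# Minimal sketch of calling the chat completions API described above.
# ENDPOINT and API_KEY are placeholders; copy the real values from the
# deployment's Consume tab. The /v1/chat/completions path is an assumption.
ENDPOINT="https://<your-deployment>.<region>.inference.ai.azure.com"
API_KEY="<your-api-key>"

curl --request POST "$ENDPOINT/v1/chat/completions" \
  --header "Authorization: Bearer $API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "max_tokens": 128,
    "temperature": 0.7
  }'
```

The JSON that comes back should match the response payload described earlier: inspect `choices` for the generated message and `usage` for the token counts.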
aks | App Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md | Title: Azure Kubernetes Service (AKS) managed nginx Ingress with the application routing add-on + Title: Azure Kubernetes Service (AKS) managed NGINX ingress with the application routing add-on description: Use the application routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS). Last updated 11/21/2023 -# Managed nginx Ingress with the application routing add-on +# Managed NGINX ingress with the application routing add-on -One way to route Hypertext Transfer Protocol (HTTP) and secure (HTTPS) traffic to applications running on an Azure Kubernetes Service (AKS) cluster is to use the [Kubernetes Ingress object][kubernetes-ingress-object-overview]. When you create an Ingress object that uses the application routing add-on nginx Ingress classes, the add-on creates, configures, and manages one or more Ingress controllers in your AKS cluster. +One way to route Hypertext Transfer Protocol (HTTP) and secure (HTTPS) traffic to applications running on an Azure Kubernetes Service (AKS) cluster is to use the [Kubernetes Ingress object][kubernetes-ingress-object-overview]. When you create an Ingress object that uses the application routing add-on NGINX Ingress classes, the add-on creates, configures, and manages one or more Ingress controllers in your AKS cluster. This article shows you how to deploy and configure a basic Ingress controller in your AKS cluster. -## Application routing add-on with nginx features +## Application routing add-on with NGINX features -The application routing add-on with nginx delivers the following: +The application routing add-on with NGINX delivers the following: -* Easy configuration of managed nginx Ingress controllers based on [Kubernetes nginx Ingress controller][kubernetes-nginx-ingress]. +* Easy configuration of managed NGINX Ingress controllers based on [Kubernetes NGINX Ingress controller][kubernetes-nginx-ingress]. * Integration with [Azure DNS][azure-dns-overview] for public and private zone management * SSL termination with certificates stored in Azure Key Vault. With the retirement of [Open Service Mesh][open-service-mesh-docs] (OSM) by the - The application routing add-on supports up to five Azure DNS zones. - All global Azure DNS zones integrated with the add-on have to be in the same resource group. - All private Azure DNS zones integrated with the add-on have to be in the same resource group.-- Editing any resources in the `app-routing-system` namespace, including the Ingress-nginx ConfigMap isn't supported.+- Editing any resources in the `app-routing-system` namespace, including the Ingress-nginx ConfigMap, isn't supported. ## Enable application routing using Azure CLI az aks approuting enable -g <ResourceGroupName> -n <ClusterName> The following add-ons are required to support this configuration: -* **open-service-mesh**: If you require encrypted intra cluster traffic (recommended) between the nginx Ingress and your services, the Open Service Mesh add-on is required which provides mutual TLS (mTLS). +* **open-service-mesh**: If you require encrypted intra cluster traffic (recommended) between the NGINX Ingress and your services, the Open Service Mesh add-on is required which provides mutual TLS (mTLS). 
### Enable on a new cluster The application routing add-on uses annotations on Kubernetes Ingress objects to app: aks-helloworld ``` -### Create the Ingress +### Create the Ingress object The application routing add-on creates an Ingress class on the cluster named *webapprouting.kubernetes.azure.com*. When you create an Ingress object with this class, it activates the add-on. The application routing add-on creates an Ingress class on the cluster named *we app: aks-helloworld ``` -### Create the Ingress +### Create the Ingress object The application routing add-on creates an Ingress class on the cluster called *webapprouting.kubernetes.azure.com*. When you create an Ingress object with this class, it activates the add-on. The `kubernetes.azure.com/use-osm-mtls: "true"` annotation on the Ingress object creates an Open Service Mesh (OSM) [IngressBackend][ingress-backend] to configure a backend service to accept Ingress traffic from trusted sources. |
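To make the Ingress class behavior above concrete, here's a minimal sketch of an Ingress object that uses the add-on's `webapprouting.kubernetes.azure.com` class. The backend service name (`aks-helloworld`), port, and path are illustrative values borrowed from the article's sample app; point them at a service that actually exists in your cluster.

```bash
# Minimal sketch: create an Ingress that uses the application routing add-on's
# ingress class. The backend service name and port are illustrative values.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aks-helloworld
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: aks-helloworld
                port:
                  number: 80
EOF
```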
aks | Azure Hpc Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-hpc-cache.md | Last updated 06/22/2023 ## Before you begin -* This article assumes you have an existing AKS cluster. If you need an AKS cluster, you can create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [Azure portal][aks-quickstart-portal]. - > [!IMPORTANT] - > Your AKS cluster must be [in a region that supports Azure HPC Cache][hpc-cache-regions]. +* AKS cluster must be in a region that [supports Azure HPC Cache][hpc-cache-regions]. +* You need Azure CLI version 2.7 or later. Run┬á`az --version` to find the version. If you need to install or upgrade, see┬á[Install Azure CLI][install-azure-cli]. +* Register the `hpc-cache` extension in your Azure subscription. For more information on using HPC Cache with Azure CLI, see the [HPC Cache CLI prerequisites][hpc-cache-cli-prerequisites]. +* Review the [HPC Cache prerequisites][hpc-cache-prereqs]. You need to satisfy the following before you can run an HPC Cache: -* You need Azure CLI version 2.7 or later. Run┬á`az --version` to find the version. If you need to install or upgrade, see┬á[Install Azure CLI][install-azure-cli]. For more information on using HPC Cache with Azure CLI, see the [HPC Cache CLI prerequisites][hpc-cache-cli-prerequisites]. -* Install the `hpc-cache` Azure CLI extension using the [`az extension add --upgrade -n hpc-cache][az-extension-add]` command. -* Review the [HPC Cache prerequisites][hpc-cache-prereqs]. You need to satisfy these prerequisites before you can run an HPC Cache. Important prerequisites include the following: * The cache requires a *dedicated* subnet with at least 64 IP addresses available. * The subnet must not host other VMs or containers. * The subnet must be accessible from the AKS nodes. +* If you need to run your application as a user without root access, you may need to disable root squashing by using the change owner (chown) command to change directory ownership to another user. The user without root access needs to own a directory to access the file system. For the user to own a directory, the root user must chown a directory to that user, but if the HPC Cache is squashing root, this operation is denied because the root user (UID 0) is being mapped to the anonymous user. For more information about root squashing and client access policies, see [HPC Cache access policies][hpc-cache-access-policies]. ++### Install the `hpc-cache` Azure CLI extension +++To install the hpc-cache extension, run the following command: ++```azurecli-interactive +az extension add --name hpc-cache +``` ++Run the following command to update to the latest version of the extension released: ++```azurecli-interactive +az extension update --name hpc-cache +``` ++### Register the StorageCache feature flag ++Register the *Microsoft.StorageCache* resource provider using the [`az provider register`][az-provider-register] command. ++```azurecli +az provider register --namespace Microsoft.StorageCache --wait +``` ++It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command: ++```azurecli-interactive +az feature show --namespace "Microsoft.StorageCache" +``` + ## Create the Azure HPC Cache 1. Get the node resource group using the [`az aks show`][az-aks-show] command with the `--query nodeResourceGroup` query parameter. Last updated 06/22/2023 MC_myResourceGroup_myAKSCluster_eastus ``` -2. 
Create the dedicated HPC Cache subnet using the [`az network vnet subnet create`][az-network-vnet-subnet-create] command. +1. Create a dedicated HPC Cache subnet using the [`az network vnet subnet create`][az-network-vnet-subnet-create] command. First define the environment variables for `RESOURCE_GROUP`, `VNET_NAME`, `VNET_ID`, and `SUBNET_NAME`. Copy the output from the previous step for `RESOURCE_GROUP`, and specify a value for `SUBNET_NAME`. ```azurecli RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv) VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv) SUBNET_NAME=MyHpcCacheSubnet+ ``` + ```azurecli-interactive az network vnet subnet create \ --resource-group $RESOURCE_GROUP \ --vnet-name $VNET_NAME \ Last updated 06/22/2023 --address-prefixes 10.0.0.0/26 ``` -3. Register the *Microsoft.StorageCache* resource provider using the [`az provider register`][az-provider-register] command. +1. Create an HPC Cache in the same node resource group and region. First define the environment variable `SUBNET_ID`. - ```azurecli - az provider register --namespace Microsoft.StorageCache --wait + ```azurecli-interactive + SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name $SUBNET_NAME --query "id" -o tsv) ``` - > [!NOTE] - > The resource provider registration can take some time to complete. --4. Create an HPC Cache in the same node resource group and region using the [`az hpc-cache create`][az-hpc-cache-create]. -- > [!NOTE] - > The HPC Cache takes approximately 20 minutes to be created. -- ```azurecli - RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus - VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv) - VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv) - SUBNET_NAME=MyHpcCacheSubnet - SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name $SUBNET_NAME --query "id" -o tsv) + Create the HPC Cache using the [`az hpc-cache create`][az-hpc-cache-create] command. The following example creates the HPC Cache in the East US region with a Standard 2G cache type named *MyHpcCache*. Specify a value for **--location**, **--sku-name**, and **--name**. + ```azurecli-interactive az hpc-cache create \ --resource-group $RESOURCE_GROUP \ --cache-size-gb "3072" \ Last updated 06/22/2023 --name MyHpcCache ``` + > [!NOTE] + > Creation of the HPC Cache can take up to 20 minutes. + ## Create and configure Azure storage -> [!IMPORTANT] -> You need to select a unique storage account name. Replace `uniquestorageaccount` with something unique for you. Storage account names must be *between 3 and 24 characters in length* and *can contain only numbers and lowercase letters*. +1. Create a storage account using the [`az storage account create`][az-storage-account-create] command. First define the environment variable `STORAGE_ACCOUNT_NAME`. -1. Create a storage account using the [`az storage account create`][az-storage-account-create] command. + > [!IMPORTANT] + > You need to select a unique storage account name. Replace `uniquestorageaccount` with your specified name. Storage account names must be *between 3 and 24 characters in length* and *can contain only numbers and lowercase letters*. 
- ```azurecli - RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus - STORAGE_ACCOUNT_NAME=uniquestorageaccount + ```azurecli + STORAGE_ACCOUNT_NAME=uniquestorageaccount + ``` + + The following example creates a storage account in the East US region with the Standard_LRS SKU. Specify a value for **--location** and **--sku**. + ```azurecli-interactive az storage account create \- -n $STORAGE_ACCOUNT_NAME \ - -g $RESOURCE_GROUP \ - -l eastus \ + --name $STORAGE_ACCOUNT_NAME \ + --resource-group $RESOURCE_GROUP \ + --location eastus \ --sku Standard_LRS ``` -2. Assign the "Storage Blob Data Contributor Role" on your subscription using the [`az role assignment create`][az-role-assignment-create] command. +1. Assign the **Storage Blob Data Contributor Role** on your subscription using the [`az role assignment create`][az-role-assignment-create] command. First, define the environment variables `STORAGE_ACCOUNT_ID` and `AD_USER`. - ```azurecli - STORAGE_ACCOUNT_NAME=uniquestorageaccount + ```azurecli-interactive STORAGE_ACCOUNT_ID=$(az storage account show --name $STORAGE_ACCOUNT_NAME --query "id" -o tsv) AD_USER=$(az ad signed-in-user show --query objectId -o tsv)- CONTAINER_NAME=mystoragecontainer + ``` + ```azurecli-interactive az role assignment create --role "Storage Blob Data Contributor" --assignee $AD_USER --scope $STORAGE_ACCOUNT_ID ``` -3. Create the Blob container within the storage account using the [`az storage container create`][az-storage-container-create] command. +1. Create the Blob container within the storage account using the [`az storage container create`][az-storage-container-create] command. First, define the environment variable `CONTAINER_NAME` and replace the name for the Blob container. ```azurecli+ CONTAINER_NAME=mystoragecontainer + ``` ++ ```azurecli-interactive az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --auth-mode login ``` -4. Provide permissions to the Azure HPC Cache service account to access your storage account and Blob container using the following [`az role assignment`][az-role-assignment-create] commands. +1. Provide permissions to the Azure HPC Cache service account to access your storage account and Blob container using the [`az role assignment`][az-role-assignment-create] commands. First, define the environment variables `HPC_CACHE_USER` and `HPC_CACHE_ID`. ```azurecli HPC_CACHE_USER="StorageCache Resource Provider" HPC_CACHE_ID=$(az ad sp list --display-name "${HPC_CACHE_USER}" --query "[].objectId" -o tsv)+ ``` + ```azurecli-interactive az role assignment create --role "Storage Account Contributor" --assignee $HPC_CACHE_ID --scope $STORAGE_ACCOUNT_ID- az role assignment create --role "Storage Blob Data Contributor" --assignee $HPC_CACHE_ID --scope $STORAGE_ACCOUNT_ID ``` -5. Add the blob container to your HPC Cache as a storage target using the [`az hpc-cache blob-storage-target add`][az-hpc-cache-blob-storage-target-add] command. -- ```azurecli - CONTAINER_NAME=mystoragecontainer +1. Add the blob container to your HPC Cache as a storage target using the [`az hpc-cache blob-storage-target add`][az-hpc-cache-blob-storage-target-add] command. The following example creates a blob container named *MyStorageTarget* to the HPC Cache *MyHpcCache*. Specify a value for **--name**, **--cache-name**, and **--virtual-namespace-path**. + ```azurecli-interactive az hpc-cache blob-storage-target add \ --resource-group $RESOURCE_GROUP \ --cache-name MyHpcCache \ Last updated 06/22/2023 ## Set up client load balancing -1. 
Create an Azure Private DNS Zone for the client-facing IP addresses using the [`az network private-dns zone create`][az-network-private-dns-zone-create] command. +1. Create an Azure Private DNS zone for the client-facing IP addresses using the [`az network private-dns zone create`][az-network-private-dns-zone-create] command. First define the environment variable `PRIVATE_DNS_ZONE` and specify a name for the zone. ```azurecli PRIVATE_DNS_ZONE="myhpccache.local"+ ``` + ```azurecli-interactive az network private-dns zone create \- -g $RESOURCE_GROUP \ - -n $PRIVATE_DNS_ZONE + --resource-group $RESOURCE_GROUP \ + --name $PRIVATE_DNS_ZONE ``` -2. Create a DNS link between the Azure Private DNS Zone and the VNet using the [`az network private-dns link vnet create`][az-network-private-dns-link-vnet-create] command. +2. Create a DNS link between the Azure Private DNS Zone and the VNet using the [`az network private-dns link vnet create`][az-network-private-dns-link-vnet-create] command. Replace the value for **--name**. - ```azurecli + ```azurecli-interactive az network private-dns link vnet create \- -g $RESOURCE_GROUP \ - -n MyDNSLink \ - -z $PRIVATE_DNS_ZONE \ - -v $VNET_NAME \ - -e true + --resource-group $RESOURCE_GROUP \ + --name MyDNSLink \ + --zone-name $PRIVATE_DNS_ZONE \ + --virtual-network $VNET_NAME \ + --registration-enabled true ``` -3. Create the round-robin DNS name for the client-facing IP addresses using the [`az network private-dns record-set a create`][az-network-private-dns-record-set-a-create] command. +3. Create the round-robin DNS name for the client-facing IP addresses using the [`az network private-dns record-set a create`][az-network-private-dns-record-set-a-create] command. First, define the environment variables `DNS_NAME`, `HPC_MOUNTS0`, `HPC_MOUNTS1`, and `HPC_MOUNTS2`. Replace the value for the property `DNS_NAME`. ```azurecli DNS_NAME="server" HPC_MOUNTS0=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[0]" -o tsv | tr --delete '\r') HPC_MOUNTS1=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[1]" -o tsv | tr --delete '\r') HPC_MOUNTS2=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[2]" -o tsv | tr --delete '\r')+ ``` + ```azurecli-interactive az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS0-+ az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS1 az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS2 Last updated 06/22/2023 ## Create a persistent volume -1. Create a `pv-nfs.yaml` file to define a [persistent volume][persistent-volume]. +1. Create a file named `pv-nfs.yaml` to define a [persistent volume][persistent-volume] and then paste in the following manifest. Replace the values for the property `server` and `path`. ```yaml Last updated 06/22/2023 path: / ``` -2. Get the credentials for your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. +1. Get the credentials for your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. ```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ``` -3. Update the *server* and *path* to the values of your NFS (Network File System) volume you created in the previous step. -4. 
Create the persistent volume using the [`kubectl apply`][kubectl-apply] command. +1. Create the persistent volume using the [`kubectl apply`][kubectl-apply] command. - ```console + ```bash kubectl apply -f pv-nfs.yaml ``` -5. Verify the status of the persistent volume is **Available** using the [`kubectl describe`][kubectl-describe] command. +1. Verify the status of the persistent volume is **Available** using the [`kubectl describe`][kubectl-describe] command. - ```console + ```bash kubectl describe pv pv-nfs ``` ## Create the persistent volume claim -1. Create a `pvc-nfs.yaml` to define a [persistent volume claim][persistent-volume-claim]. +1. Create a file named `pvc-nfs.yaml`to define a [persistent volume claim][persistent-volume-claim], and then paste the following manifest. ```yaml apiVersion: v1 Last updated 06/22/2023 2. Create the persistent volume claim using the [`kubectl apply`][kubectl-apply] command. - ```console + ```bash kubectl apply -f pvc-nfs.yaml ``` 3. Verify the status of the persistent volume claim is **Bound** using the [`kubectl describe`][kubectl-describe] command. - ```console + ```bash kubectl describe pvc pvc-nfs ``` ## Mount the HPC Cache with a pod -1. Create a `nginx-nfs.yaml` file to define a pod that uses the persistent volume claim. +1. Create a file named `nginx-nfs.yaml` to define a pod that uses the persistent volume claim, and then paste the following manifest. ```yaml kind: Pod Last updated 06/22/2023 2. Create the pod using the [`kubectl apply`][kubectl-apply] command. - ```console + ```bash kubectl apply -f nginx-nfs.yaml ``` 3. Verify the pod is running using the [`kubectl describe`][kubectl-describe] command. - ```console + ```bash kubectl describe pod nginx-nfs ``` -4. Verify your volume is mounted in the pod using the [`kubectl exec`][kubectl-exec] command to connect to the pod, then `df -h` to check if the volume is mounted. +4. Verify your volume is mounted in the pod using the [`kubectl exec`][kubectl-exec] command to connect to the pod. - ```console + ```bash kubectl exec -it nginx-nfs -- sh ``` + To check if the volume is mounted, run `df` in its human-readable format using the `--human-readable` (`-h` for short) option. ++ ```bash + df -h + ``` ++ The following example resembles output returned from the command: + ```output- / # df -h Filesystem Size Used Avail Use% Mounted on ... server.myhpccache.local:/myfilepath 8.0E 0 8.0E 0% /mnt/azure/myfilepath ... ``` -## Frequently asked questions (FAQ) --### Running applications as non-root --If you need to run an application as a non-root user, you may need to disable root squashing to chown a directory to another user. The non-root user needs to own a directory to access the file system. For the user to own a directory, the root user must chown a directory to that user, but if the HPC Cache is squashing root, this operation is denied because the root user (UID 0) is being mapped to the anonymous user. For more information about root squashing and client access policies, see [HPC Cache access policies][hpc-cache-access-policies]. - ## Next steps * For more information on Azure HPC Cache, see [HPC Cache overview][hpc-cache]. * For more information on using NFS with AKS, see [Manually create and use a Network File System (NFS) Linux Server volume with AKS][aks-nfs]. 
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md --[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md --[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md +<!-- EXTERNAL LINKS --> +[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply +[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe +[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec +[hpc-cache-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache®ions=all +<!-- INTERNAL LINKS --> [aks-nfs]: azure-nfs-volume.md- [hpc-cache]: ../hpc-cache/hpc-cache-overview.md- [hpc-cache-access-policies]: ../hpc-cache/access-policies.md--[hpc-cache-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache®ions=all - [hpc-cache-cli-prerequisites]: ../hpc-cache/az-cli-prerequisites.md- [hpc-cache-prereqs]: ../hpc-cache/hpc-cache-prerequisites.md- [az-hpc-cache-create]: /cli/azure/hpc-cache#az_hpc_cache_create- [az-aks-show]: /cli/azure/aks#az_aks_show-+[az-feature-show]: /cli/azure/feature#az-feature-show [install-azure-cli]: /cli/azure/install-azure-cli--[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply --[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe --[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec - [persistent-volume]: concepts-storage.md#persistent-volumes- [persistent-volume-claim]: concepts-storage.md#persistent-volume-claims- [az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_create- [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials- [az-provider-register]: /cli/azure/provider#az_provider_register- [az-storage-account-create]: /cli/azure/storage/account#az_storage_account_create- [az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create- [az-storage-container-create]: /cli/azure/storage/container#az_storage_container_create- [az-hpc-cache-blob-storage-target-add]: /cli/azure/hpc-cache/blob-storage-target#az_hpc_cache_blob_storage_target_add- [az-network-private-dns-zone-create]: /cli/azure/network/private-dns/zone#az_network_private_dns_zone_create- [az-network-private-dns-link-vnet-create]: /cli/azure/network/private-dns/link/vnet#az_network_private_dns_link_vnet_create--[az-network-private-dns-record-set-a-create]: /cli/azure/network/private-dns/record-set/a#az_network_private_dns_record_set_a_create --+[az-network-private-dns-record-set-a-create]: /cli/azure/network/private-dns/record-set/a#az_network_private_dns_record_set_a_create |
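To illustrate the root squash note in the prerequisites above, the following is a minimal sketch of handing a directory on the mounted cache to a non-root application user. It assumes the `nginx-nfs` pod from this article runs as root, that the cache access policy doesn't squash root, and that `1000:1000` is a hypothetical UID:GID for the user your application runs as.

```bash
# Minimal sketch: chown a directory on the HPC Cache mount so a non-root user
# can own it. Assumes the nginx-nfs pod runs as root and root squash is
# disabled in the cache access policy; 1000:1000 is a placeholder UID:GID.
kubectl exec -it nginx-nfs -- mkdir -p /mnt/azure/myfilepath/appdata
kubectl exec -it nginx-nfs -- chown 1000:1000 /mnt/azure/myfilepath/appdata
kubectl exec -it nginx-nfs -- ls -ln /mnt/azure/myfilepath
```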
aks | Azure Linux Aks Partner Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-linux-aks-partner-solutions.md | + + Title: Azure Linux AKS Container Host partner solutions ++description: Discover partner-tested solutions that enable you to build, test, deploy, manage, and monitor your AKS environment using Azure Linux Container Host. +++ Last updated : 02/16/2024+++# Azure Linux AKS Container Host partner solutions ++Microsoft collaborates with partners to ensure your build, test, deployment, configuration, and monitoring of your applications perform optimally with Azure Linux Container Host on AKS. ++Our third party partners featured in this article have introduction guides to help you start using their solutions with your applications running on Azure Linux Container Host on AKS. ++| Solutions | Partners | +|--|| +| DevOps | [Advantech](#advantech) <br> [Hashicorp](#hashicorp) <br> [Akuity](#akuity) <br> [Kong](#kong) | +| Networking | [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Tetrate](#tetrate) | +| Observability | [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Dynatrace](#dynatrace) | +| Security | [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Kong](#kong) <br> [Tetrate](#tetrate) | +| Storage | [Veeam](#veeam) | +| Config Management | [Corent](#corent) | +| Migration | [Catalogic](#catalogic) | ++## DevOps ++DevOps streamlines the delivery process, improves collaboration across teams, and enhances software quality, ensuring swift, reliable, and continuous deployment of your applications. ++### Advantech +++| Solution | Categories | +|-|| +| iFactoryEHS | DevOps | ++The right EHS management system can strengthen organizations behind the scenes and enable them to continuously put their best foot forward. iFactoryEHS solution is designed to help manufacturers manage employee health, improve safety, and analyze environmental footprints while ensuring operational continuity. ++For more information, see [Advantech & iFactoryEHS](https://page.advantech.com/en/global/solutions/ifactory/ifactory_ehs). ++### Hashicorp +++| Solution | Categories | +|-|| +| Terraform | DevOps | ++At HashiCorp, we believe infrastructure enables innovation, and we're helping organizations to operate that infrastructure in the cloud. ++<details> <summary> See more </summary><br> ++Our suite of multicloud infrastructure automation products, built on projects with source code freely available at their core, underpin the most important applications for the largest enterprises in the world. As part of the once-in-a-generation shift to the cloud, organizations of all sizes, from well-known brands to ambitious start-ups, rely on our solutions to provision, secure, connect, and run their business-critical applications so they can deliver essential services, communications tools, and entertainment platforms worldwide. ++</details> ++For more information, see [Hashicorp solutions](https://hashicorp.com/) and [Hasicorp on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/hashicorp-4665790.terraform-azure-saas?tab=overview). ++### Akuity +++| Solution | Categories | +|-|| +| Akuity Platform | DevOps | ++The Akuity Platform is a managed solution for Argo CD from the creators of Argo open source project. ++<details> <summary> See more </summary><br> ++Argo Project is a suite of open source tools for deploying and running applications and workloads on Kubernetes. 
It extends the Kubernetes APIs and unlocks new and powerful capabilities in application deployment, container orchestration, event automation, progressive delivery, and more. ++Akuity is rooted in Argo, extending its capabilities and using the same familiar user interface. The platform solves real-life DevOps use cases using battle-tested patterns packaged into a product with the best possible developer experience. ++</details> ++For more information, see [Akuity Solutions](https://akuity.io/). ++### Kong +++| Solution | Categories | +|-|| +| Kong Connect | DevOps <br> Security | ++Kong Konnect is the unified cloud-native API lifecycle platform to optimize any environment. It reduces operational complexity, promotes federated governance, and provides robust security by seamlessly managing Kong Gateway, Kong Ingress Controller and Kong Mesh with a single management console, delivering API configuration, portal, service catalog, and analytics capabilities. ++<details> <summary> See more </summary><br> ++A unified Konnect control plane empowers businesses to: ++* Define a collection of API Data Plane Nodes that share the same configuration. +* Provide a single control plane to catalog, connect to, and monitor the status of all control planes and instances and manage group configuration. +* Browse APIs, reference documentation, test endpoints, and create applications using specific APIs through a customizable and unified API portal for developers. +* Create a single source of truth by cataloging all services with the Service Hub. +* Access key statistics, monitor vital signs, and spot patterns in real time to see how your APIs and gateways are performing. +* Deliver a fully Kubernetes-centric operational lifecycle model through the integration of DevOps-ready config-driven API management layer and KIC's unrivaled runtime performance. ++Kong's extensive ecosystem of community and enterprise plugins delivers critical functionality, including authentication, authorization, rate limiting, request enforcement, and caching, without increasing the API platform's footprint. ++</details> ++For more information, see [Kong Solutions](https://konghq.com/) and [Kong on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/konginc1581527938760.kong-enterprise?tab=Overview). ++## Networking ++Ensure efficient traffic management, enhanced security, and optimal network performance. ++### Buoyant +++| Solution | Categories | +|-|| +| Managed Linkerd with Buoyant Cloud | Networking <br> Security <br> Observability | ++Managed Linkerd with Buoyant Cloud automatically keeps your Linkerd control plane and data plane up to date with the latest versions, and handles installs, trust anchor rotation, and more. ++For more information, see [Buoyant Solutions](https://buoyant.io/cloud) and [Buoyant on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/buoyantinc1658330371653.buoyant?tab=Overview). ++### Isovalent +++| Solution | Categories | +|-|| +| Isovalent Enterprise for Cilium | Networking <br> Security <br> Observability | ++Isovalent Enterprise for Cilium provides advanced network policy capabilities, including DNS-aware policy, L7 policy, and deny policy, enabling fine-grained control over network traffic for micro-segmentation and improved security. 
++<details> <summary> See more </summary><br> ++Isovalent also provides multi-cluster connectivity via Cluster Mesh, seamless networking and security across multiple clouds, including public cloud providers like AWS, Azure, and Google Cloud Platform, as well as on-premises environments. With free service-to-service communication and advanced load balancing, Isovalent makes it easy to deploy and manage complex microservices architectures. ++The Hubble flow observability + User Interface feature provides real-time network traffic flow and policy visualization, as well as a powerful User Interface for easy troubleshooting and network management. Tetragon provides advanced security capabilities such as protocol enforcement, IP and port allowlists, and automatic application-aware policy generation to protect against the most sophisticated threats. Tetragon is built on eBPF, enabling scaling to meet the needs of the most demanding cloud-native environments with ease. ++Isovalent provides enterprise-grade support from their experienced team of experts, ensuring that any issues are resolved in a timely and efficient manner. Additionally, professional services help organizations deploy and manage Cilium in production environments. ++</details> ++For more information, see [Isovalent Solutions](https://isovalent.com/blog/post/isovalent-azure-linux/) and [Isovalent on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/isovalentinc1662143158090.isovalent-cilium-enterprise?tab=overview). ++## Observability ++Observability provides deep insights into your systems, enabling rapid issue detection and resolution to enhance your application's reliability and performance. ++### Dynatrace +++| Solution | Categories | +|-|| +| Dynatrace Azure Monitoring | Observability | ++Fully automated, AI-assisted observability across Azure environments. Dynatrace is a single source of truth for your cloud platforms, allowing you to monitor the health of your entire Azure infrastructure. ++For more information, see [Dynatrace Solutions](https://www.dynatrace.com/technologies/azure-monitoring/) and [Dynatrace on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview). ++## Security ++Ensure the integrity and confidentiality of applications and foster trust and compliance across your infrastructure. ++### Tetrate +++| Solution | Categories | +|-|| +| Tetrate Istio Distro (TID) | Security <br> Networking | ++Tetrate Istio Distro (TID) is a simple, safe enterprise-grade Istio distro, providing the easiest way of installing, operating, and upgrading. ++<details> <summary> See more </summary><br> ++TID enforces fetching certified versions of Istio and enables only compatible versions of Istio installation. It includes a FIPS-compliant flavor, delivers platform-based Istio configuration validations by integrating validation libraries from multiple sources, uses various cloud provider certificate management systems to create Istio CA certs that are used for signing service mesh managed workloads, and provides multiple additional integration points with cloud providers. ++</details> ++For more information, see [Tetrate Solutions](https://istio.tetratelabs.io/download/) and [Tetrate on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tetrate1598353087553.tetrateistio?tab=Overview). 
++## Storage ++Storage enables standardized and seamless storage interactions, ensuring high application performance and data consistency. ++### Veeam +++| Solution | Categories | +|-|| +| Kasten K10 by Veeam | Storage | ++Kasten K10 by Veeam is the #1 Kubernetes data management product, providing an easy-to-use, scalable, and secure system for backup and restore, mobility, and DR. ++For more information, see [Veeam Solutions](https://www.kasten.io/partner-microsoft) and [Veeam on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.kasten_k10_by_veeam_byol?tab=overview). ++## Config management ++Automate and standardize the system settings across your environments to enhance efficiency, reduce errors, and ensuring system stability and compliance. ++### Corent +++| Solution | Categories | +|-|| +| Corent MaaS | Config Management | ++Corent MaaS provides scanning to identify workloads that can be containerized, and automatically containerizes on AKS. ++For more information, see [Corent Solutions](https://www.corenttech.com/SurPaaS_MaaS_Product.html) and [Corent on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/corent-technology-pvt.surpaas_maas?tab=Overview). ++## Migration ++Migrate workloads to Azure Linux Container Host on AKS with confidence. ++### Catalogic +++| Solution | Categories | +|-|| +| CloudCasa | Migration | ++CloudCasa is a Kubernetes backup, recovery, and migration solution that is fully compatible with AKS, as well as all other major Kubernetes distributions and managed services. ++<details> <summary> See more </summary><br> ++Install the CloudCasa agent and let it do all the hard work of protecting and recovering your cluster resources and persistent data from human error, security breaches, and service failures, including providing the business continuity and compliance that your business requires. ++From a single dashboard, CloudCasa makes cross-cluster, cross-tenant, cross-region, and cross-cloud recoveries easy. Recovery and migration from backups includes recovering an entire cluster along with your vNETs, add-ons, load balancers and more. During recovery, users can migrate to Azure Linux, and migrate storage resources from Azure Disk to Azure Container Storage. ++CloudCasa can also centrally manage Azure Backup or Velero backup installations across multiple clusters and cloud providers, with migration of resources to different environments. ++</details> ++For more information, see [Catalogic Solutions](https://cloudcasa.io/) and [Catalogic on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/catalogicsoftware1625626770507.cloudcasa-aks-app). ++## Next steps ++[Learn more about Azure Linux Container Host on AKS](../azure-linux/intro-azure-linux.md). |
aks | Create Node Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md | The following limitations apply when you create AKS clusters that support multip * The AKS cluster must use the Standard SKU load balancer to use multiple node pools. This feature isn't supported with Basic SKU load balancers. * The AKS cluster must use Virtual Machine Scale Sets for the nodes. * The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter.- * For Linux node pools, the length must be between 1-11 characters. + * For Linux node pools, the length must be between 1-12 characters. * For Windows node pools, the length must be between 1-6 characters. * All node pools must reside in the same virtual network. * When you create multiple node pools at cluster creation time, the Kubernetes versions for the node pools must match the version set for the control plane. |
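As a quick check against the naming limits listed above, here's a minimal sketch of adding a Linux node pool whose name (`userpool1`: lowercase alphanumeric, starts with a letter, nine characters) fits within the 12-character limit. The resource group, cluster name, and node count are placeholder values.

```bash
# Minimal sketch: add a Linux node pool with a name that satisfies the limits
# above. Resource group, cluster name, and node count are placeholders.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool1 \
  --node-count 3
```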
aks | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md | The AKS Linux Extension is an Azure VM extension that installs and configures mo - [Node-exporter](https://github.com/prometheus/node_exporter): Collects hardware telemetry from the virtual machine and makes it available using a metrics endpoint. Then, a monitoring tool, such as Prometheus, is able to scrape these metrics. - [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the cluster's API server using Events and NodeConditions.-- [ig](https://inspektor-gadget.io/docs/latest/ig/): An eBPF-powered open-source framework for debugging and observing Linux and Kubernetes systems. It provides a set of tools (or gadgets) designed to gather relevant information, allowing users to identify the cause of performance issues, crashes, or other anomalies. Notably, its independence from Kubernetes enables users to employ it also for debugging control plane issues.+- [ig](https://go.microsoft.com/fwlink/p/?linkid=2260320): An eBPF-powered open-source framework for debugging and observing Linux and Kubernetes systems. It provides a set of tools (or gadgets) designed to gather relevant information, allowing users to identify the cause of performance issues, crashes, or other anomalies. Notably, its independence from Kubernetes enables users to employ it also for debugging control plane issues. These tools help provide observability around many node health related problems, such as: |
aks | Manage Azure Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-azure-rbac.md | az role assignment create --role "Azure Kubernetes Service RBAC Admin" --assigne > az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee <AAD-ENTITY-ID> --scope $AKS_ID/namespaces/<namespace-name> > ``` +> [!NOTE] +> In Azure portal, after creating role assignments scoped to a desired namespace, you won't be able to see "role assignments" for namespace [at a scope][list-role-assignments-at-a-scope-at-portal]. You can find it by using the [`az role assignment list`][az-role-assignment-list] command, or [list role assignments for a user or group][list-role-assignments-for-a-user-or-group-at-portal], which you assigned the role to. +> +> ```azurecli-interactive +> az role assignment list --scope $AKS_ID/namespaces/<namespace-name> +> ``` + ## Create custom roles definitions The following example custom role definition allows a user to only read deployments and nothing else. For the full list of possible actions, see [Microsoft.ContainerService operations](../role-based-access-control/resource-provider-operations.md#microsoftcontainerservice). To learn more about AKS authentication, authorization, Kubernetes RBAC, and Azur <!-- LINKS - Internal --> [aks-support-policies]: support-policies.md [aks-faq]: faq.md-[az-extension-add]: /cli/azure/extension#az_extension_add -[az-extension-update]: /cli/azure/extension#az_extension_update -[az-feature-list]: /cli/azure/feature#az_feature_list -[az-feature-register]: /cli/azure/feature#az_feature_register -[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli -[az-aks-create]: /cli/azure/aks#az_aks_create -[az-aks-show]: /cli/azure/aks#az_aks_show -[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create -[az-provider-register]: /cli/azure/provider#az_provider_register -[az-group-create]: /cli/azure/group#az_group_create -[az-aks-update]: /cli/azure/aks#az_aks_update +[az-extension-add]: /cli/azure/extension#az-extension-add +[az-extension-update]: /cli/azure/extension#az-extension-update +[az-feature-list]: /cli/azure/feature#az-feature-list +[az-feature-register]: /cli/azure/feature#az-feature-register +[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli +[az-aks-create]: /cli/azure/aks#az-aks-create +[az-aks-show]: /cli/azure/aks#az-aks-show +[list-role-assignments-at-a-scope-at-portal]: ../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-at-a-scope +[list-role-assignments-for-a-user-or-group-at-portal]: ../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group +[az-role-assignment-create]: /cli/azure/role/assignment#az-role-assignment-create +[az-role-assignment-list]: /cli/azure/role/assignment#az-role-assignment-list +[az-provider-register]: /cli/azure/provider#az-provider-register +[az-group-create]: /cli/azure/group#az-group-create +[az-aks-update]: /cli/azure/aks#az-aks-update [managed-aad]: ./managed-azure-ad.md [install-azure-cli]: /cli/azure/install-azure-cli-[az-role-definition-create]: /cli/azure/role/definition#az_role_definition_create -[az-aks-get-credentials]: /cli/azure/aks#az_aks_get-credentials +[az-role-definition-create]: /cli/azure/role/definition#az-role-definition-create +[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials [kubernetes-rbac]: /azure/aks/concepts-identity#azure-rbac-for-kubernetes-authorization |
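The excerpt above references an example custom role definition (read-only access to deployments) without showing it. As a sketch only, assuming the definition has been saved to a hypothetical `deploy-view.json` file, it could be registered and assigned with the Azure CLI:

```azurecli
# Hypothetical file and role name; see the linked article for the actual definition content.
az role definition create --role-definition @deploy-view.json

# Assign the custom role at the cluster scope (<custom-role-name> and <AAD-ENTITY-ID> are placeholders).
az role assignment create --role "<custom-role-name>" --assignee <AAD-ENTITY-ID> --scope $AKS_ID
```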
aks | Operator Best Practices Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-identity.md | Title: Best practices for managing identity + Title: Best practices for managing authentication and authorization description: Learn the cluster operator best practices for how to manage authentication and authorization for clusters in Azure Kubernetes Service (AKS) Previously updated : 04/14/2023 Last updated : 02/16/2024 # Best practices for authentication and authorization in Azure Kubernetes Service (AKS) For more information about cluster operations in AKS, see the following best pra <!-- INTERNAL LINKS --> [aks-concepts-identity]: concepts-identity.md [azure-ad-integration]: managed-azure-ad.md-[aks-aad]: azure-ad-integration-cli.md -[managed-identities]: ../active-directory/managed-identities-azure-resources/overview.md +[aks-aad]: enable-authentication-microsoft-entra-id.md [aks-best-practices-scheduler]: operator-best-practices-scheduler.md [aks-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md [aks-best-practices-cluster-isolation]: operator-best-practices-cluster-isolation.md |
aks | Supported Kubernetes Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md | For the past release history, see [Kubernetes history](https://github.com/kubern | K8s version | Upstream release | AKS preview | AKS GA | End of life | Platform support | |--|-|--||-|--|-| 1.24 | Apr 2022 | May 2022 | Jul 2022 | Jul 2023 | Until 1.28 GA | | 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Jan 14, 2024 | Until 1.29 GA | | 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA | | 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2025 | Until 1.31 GA | Note the following important changes before you upgrade to any of the available |Kubernetes Version | AKS Managed Addons | AKS Components | OS components | Breaking Changes | Notes |--||-||-|| | 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2-| 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|No breaking changes |None -| 1.27 | Azure policy 1.1.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards. 
-| 1.28 | Azure policy 1.2.1<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.2<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.5<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|No breaking changes|None -+| 1.26 | Azure policy 1.3.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.1<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0<br>azurefile-csi-driver 1.26.10<br>| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|azurefile-csi-driver 1.26.10 |None +| 1.27 | Azure policy 1.3.0<br>azuredisk-csi driver v1.28.5<br>azurefile-csi driver v1.28.7<br>blob-csi v1.22.4<br>csi-attacher v4.3.0<br>csi-resizer v1.8.0<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2<br>Metrics-Server 0.6.3<br>Keda 2.11.2<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>azurefile-csi-driver 1.28.7<br>KMS 0.5.0<br>CSI Secret store driver 1.3.4-1<br>|Cilium 1.13.10-1<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Keda 2.11.2<br>Cilium 1.13.10-1<br>azurefile-csi-driver 1.28.7<br>azuredisk-csi driver v1.28.5<br>blob-csi v1.22.4<br>csi-attacher v4.3.0<br>csi-resizer v1.8.0<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2|Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards. 
+| 1.28 | Azure policy 1.3.0<br>azurefile-csi-driver 1.29.2<br>csi-node-driver-registrar v2.9.0<br>csi-livenessprobe 2.11.0<br>azuredisk-csi-linux v1.29.2<br>azuredisk-csi-windows v1.29.2<br>csi-provisioner v3.6.2<br>csi-attacher v4.5.0<br>csi-resizer v1.9.3<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>CSI Secret store driver 1.3.4-1<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.10-1<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br>Tigera-Operator 1.28.13| OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|azurefile-csi-driver 1.29.2<br>csi-resizer v1.9.3<br>csi-attacher v4.4.2<br>csi-provisioner v4.4.2<br>blob-csi v1.23.2<br>azurefile-csi driver v1.29.2<br>azuredisk-csi driver v1.29.2<br>csi-livenessprobe v2.11.0<br>csi-node-driver-registrar v2.9.0|None +| 1.29 | Azure policy 1.3.0<br>csi-provisioner v4.0.0<br>csi-attacher v4.5.0<br>csi-snapshotter v6.3.3<br>snapshot-controller v6.3.3<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1<br>CSI Secret store driver 1.3.4-1<br>azurefile-csi-driver 1.29.3<br>|Cilium 1.13.5<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br>Tigera-Operator 1.30.7<br>| OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Tigera-Operator 1.30.7<br>csi-provisioner v4.0.0<br>csi-attacher v4.5.0<br>csi-snapshotter v6.3.3<br>snapshot-controller v6.3.3 |None ## Alias minor version > [!NOTE] |
api-management | Api Management Howto Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md | In the Developer, Basic, Standard, and Premium tiers of API Management, the publ * The service subscription is disabled or warned (for example, for nonpayment) and then reinstated. [Learn more about subscription states](/azure/cost-management-billing/manage/subscription-states) * (Developer and Premium tiers) Azure Virtual Network is added to or removed from the service. * (Developer and Premium tiers) API Management service is switched between external and internal VNet deployment mode.-* (Developer and Premium tiers) API Management service is moved to a different subnet. +* (Developer and Premium tiers) API Management service is moved to a different subnet, or [migrated](migrate-stv1-to-stv2.md) from the `stv1` to the `stv2` compute platform. * (Premium tier) [Availability zones](../reliability/migrate-api-mgt.md) are enabled, added, or removed. * (Premium tier) In [multi-regional deployments](api-management-howto-deploy-multi-region.md), the regional IP address changes if a region is vacated and then reinstated. |
api-management | Quickstart Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-arm-template.md | description: Use this quickstart to create an Azure API Management instance in t -tags: azure-resource-manager |
api-management | Quota By Key Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-by-key-policy.md | To understand the difference between rate limits and quotas, [see Rate limits an | calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A | | counter-key | The key to use for the `quota policy`. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed. | Yes | N/A | | increment-condition | The Boolean expression specifying if the request should be counted towards the quota (`true`). Policy expressions are allowed. | No | N/A |-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to `first-period-start`. When `renewal-period` is set to `0`, the period is set to infinite. Policy expressions aren't allowed. | Yes | N/A | +| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to `first-period-start`. Minimum period: 300 seconds. When `renewal-period` is set to 0, the period is set to infinite. Policy expressions aren't allowed. | Yes | N/A | | first-period-start | The starting date and time for quota renewal periods, in the following format: `yyyy-MM-ddTHH:mm:ssZ` as specified by the ISO 8601 standard. Policy expressions aren't allowed. | No | `0001-01-01T00:00:00Z` | For more information and examples of this policy, see [Advanced request throttli * [API Management access restriction policies](api-management-access-restriction-policies.md) |
api-management | Set Backend Service Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-policy.md | -Use the `set-backend-service` policy to redirect an incoming request to a different backend than the one specified in the API settings for that operation. This policy changes the backend service base URL of the incoming request to the one specified in the policy. +Use the `set-backend-service` policy to redirect an incoming request to a different backend than the one specified in the API settings for that operation. This policy changes the backend service base URL of the incoming request to a URL or [backend](backends.md) specified in the policy. > [!NOTE] > Backend entities can be managed via [Azure portal](how-to-configure-service-fabric-backend.md), management [API](/rest/api/apimanagement), and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement). Use the `set-backend-service` policy to redirect an incoming request to a differ | Attribute | Description | Required | Default | | -- | | -- | - | |base-url|New backend service base URL. Policy expressions are allowed.|One of `base-url` or `backend-id` must be present.|N/A|-|backend-id|Identifier (name) of the backend to route primary or secondary replica of a partition. Policy expressions are allowed. |One of `base-url` or `backend-id` must be present.|N/A| +|backend-id|Identifier (name) of the [backend](backends.md) to route primary or secondary replica of a partition. Policy expressions are allowed. |One of `base-url` or `backend-id` must be present.|N/A| |sf-resolve-condition|Only applicable when the backend is a Service Fabric service. Condition identifying if the call to Service Fabric backend has to be repeated with new resolution. Policy expressions are allowed.|No|N/A| |sf-service-instance-name|Only applicable when the backend is a Service Fabric service. Allows changing service instances at runtime. Policy expressions are allowed. |No|N/A| |sf-partition-key|Only applicable when the backend is a Service Fabric service. Specifies the partition key of a Service Fabric service. Policy expressions are allowed. |No|N/A| |
app-service | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md | Title: App Service Environment networking description: App Service Environment networking details Previously updated : 10/02/2023 Last updated : 01/31/2024 If you use a smaller subnet, be aware of the following limitations: - For any App Service plan OS/SKU combination used in your App Service Environment like I1v2 Windows, one standby instance is created for every 20 active instances. The standby instances also require IP addresses. - When scaling App Service plans in the App Service Environment up/down, the amount of IP addresses used by the App Service plan is temporarily doubled while the scale operation completes. The new instances need to be fully operational before the existing instances are deprovisioned. - Platform upgrades need free IP addresses to ensure upgrades can happen without interruptions to outbound traffic.-- After scale up, down, or in operations complete, there might be a short period of time before IP addresses are released. In rare cases, this can be up to 12 hours.+- After scale up, down, or in operations complete, there might be a short period of time before IP addresses are released. In rare cases, this operation can be up to 12 hours. - If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the App Service Environment. Another possibility is that you can experience increased latency during intensive traffic load, if Microsoft isn't able to scale the supporting infrastructure. >[!NOTE] You can find details in the **IP Addresses** portion of the portal, as shown in As you scale your App Service plans in your App Service Environment, you use more addresses out of your subnet. The number of addresses you use varies, based on the number of App Service plan instances you have, and how much traffic there is. Apps in the App Service Environment don't have dedicated addresses in the subnet. The specific addresses used by an app in the subnet will change over time. +### Bring your own inbound address ++You can bring your own inbound address to your App Service Environment. If you create an App Service Environment with an internal VIP, you can specify a static IP address in the subnet. If you create an App Service Environment with an external VIP, you can use your own Azure Public IP address by specifying the resource ID of the Public IP address. The following are limitations for bringing your own inbound address: ++- For App Service Environment with external VIP, the Azure Public IP address resource must be in the same subscription as the App Service Environment. +- The inbound address can't be changed after the App Service Environment is created. + ## Ports and network restrictions For your app to receive traffic, ensure that inbound network security group (NSG) rules allow the App Service Environment subnet to receive traffic from the required ports. In addition to any ports, you'd like to receive traffic on, you should ensure that Azure Load Balancer is able to connect to the subnet on port 80. This port is used for health checks of the internal virtual machine. You can still control port 80 traffic from the virtual network to your subnet. For more information about Private Endpoint and Web App, see [Azure Web App Priv ## DNS -The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. 
The examples use the domain suffix `appserviceenvironment.net` from Azure Public Cloud. If you're using other clouds like Azure Government, you need to use their respective domain suffix. Note that for App Service Environment domains, the site name will be truncated at 40 characters because of DNS limits. If you have a slot, the slot name will be truncated at 19 characters. +The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. The examples use the domain suffix `appserviceenvironment.net` from Azure Public Cloud. If you're using other clouds like Azure Government, you need to use their respective domain suffix. For App Service Environment domains, the site name is truncated at 40 characters because of DNS limits. If you have a slot, the slot name is truncated at 19 characters. ### DNS configuration to your App Service Environment In addition to setting up DNS, you also need to enable it in the [App Service En ### DNS configuration from your App Service Environment -The apps in your App Service Environment uses the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there's no response from the primary DNS server. +The apps in your App Service Environment use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there's no response from the primary DNS server. ## More resources |
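As a sketch of the per-app DNS override described above (the resource group, app name, and DNS server addresses are hypothetical), the `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER` settings can be applied with the Azure CLI:

```azurecli
# Hypothetical resource group, app name, and DNS server addresses.
az webapp config appsettings set \
    --resource-group myResourceGroup \
    --name my-ase-app \
    --settings WEBSITE_DNS_SERVER=10.0.0.10 WEBSITE_DNS_ALT_SERVER=10.0.0.11
```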
application-gateway | Create Vmss Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-cli.md | Title: Azure CLI Script Sample - Manage web traffic | Microsoft Docs description: Azure CLI Script Sample - Manage web traffic with an application gateway and a virtual machine scale set. -tags: azure-resource-manager vm-windows |
application-gateway | Create Vmss Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-powershell.md | Title: Azure PowerShell Script Sample - Manage web traffic | Microsoft Docs description: Azure PowerShell Script Sample - Manage web traffic with an application gateway and a virtual machine scale set. -tags: azure-resource-manager vm-windows |
application-gateway | Create Vmss Waf Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-waf-cli.md | Title: Azure CLI Script Sample - Restrict web traffic | Microsoft Docs description: Azure CLI Script Sample - Create an application gateway with a web application firewall and a virtual machine scale set that uses OWASP rules to restrict traffic. -tags: azure-resource-manager vm-windows |
application-gateway | Create Vmss Waf Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-waf-powershell.md | Title: Azure PowerShell Script Sample - Restrict web traffic | Microsoft Docs description: Azure PowerShell Script Sample - Create an application gateway with a web application firewall and a virtual machine scale set that uses OWASP rules to restrict traffic. -tags: azure-resource-manager vm-windows |
automation | Runtime Environment Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runtime-environment-overview.md | Title: Runtime environment in Azure Automation description: This article provides an overview of the Runtime environment in Azure Automation. Previously updated : 01/24/2024 Last updated : 02/16/2024 You can't edit these Runtime environments. However, any changes that are made in - Existing runbooks that are automatically moved from the old experience to the Runtime environment experience can execute as both cloud and hybrid jobs. - When the runbook is [updated](manage-runtime-environment.md) and linked to a different Runtime environment, it can be executed as a cloud job only. - PowerShell Workflow, Graphical PowerShell, and Graphical PowerShell Workflow runbooks only work with System-generated PowerShell-5.1 Runtime environment.+- Runbooks created in the Runtime environment experience with Runtime version PowerShell 7.2 show as PowerShell 5.1 runbooks in the old experience. - RBAC permissions cannot be assigned to a Runtime environment. - Runtime environments can't be configured through the Azure Automation extension for Visual Studio Code. - Deleted Runtime environments cannot be recovered. |
azure-app-configuration | Howto Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md | App Configuration offers the option to bulk [import](./howto-import-export-data. If your application is deployed in multiple regions, we recommend that you [enable geo-replication](./howto-geo-replication.md) of your App Configuration store. You can let your application primarily connect to the replica matching the region where instances of your application are deployed and allow them to fail over to replicas in other regions. This setup minimizes the latency between your application and App Configuration, spreads the load as each replica has separate throttling quotas, and enhances your application's resiliency against transient and regional outages. See [Resiliency and Disaster Recovery](./concept-disaster-recovery.md) for more information. +## Building applications with high resiliency ++Applications often rely on configuration to start, making Azure App Configuration's high availability critical. For improved resiliency, applications should leverage App Configuration's reliability features and consider taking the following measures based on your specific requirements. ++* **Provision in regions with Azure availability zone support.** Availability zones allow applications to be resilient to data center outages. App Configuration offers zone redundancy for all customers without any extra charges. Creating your App Configuration store in regions with support for availability zones is recommended. You can find [a list of regions](./faq.yml#how-does-app-configuration-ensure-high-data-availability) where App Configuration has enabled availability zone support. +* **[Enable geo-replication](./howto-geo-replication.md) and allow your application to failover among replicas.** This setup gives you a model for scalability and enhanced resiliency against transient failures and regional outages. See [Resiliency and Disaster Recovery](./concept-disaster-recovery.md) for more information. +* **Deploy configuration with [safe deployment practices](/azure/well-architected/operational-excellence/safe-deployments).** Incorrect or accidental configuration changes can frequently cause application downtime. You should avoid making configuration changes that impact the production directly from, for example, the Azure portal whenever possible. In safe deployment practices (SDP), you use a progressive exposure deployment model to minimize the potential blast radius of deployment-caused issues. If you adopt SDP, you can build and test a [configuration snapshot](./howto-create-snapshots.md) before deploying it to production. During the deployment, you can update instances of your application to progressively pick up the new snapshot. If issues are detected, you can roll back the change by redeploying the last-known-good (LKG) snapshot. The snapshot is immutable, guaranteeing consistency throughout all deployments. You can utilize snapshots along with dynamic configuration. Use a snapshot for your foundational configuration and dynamic configuration for emergency configuration overrides and feature flags. +* **Include configuration with your application.** If you want to ensure that your application always has access to a copy of the configuration, or if you prefer to avoid a runtime dependency on App Configuration altogether, you can pull the configuration from App Configuration during build or release time and include it with your application. 
To learn more, check out examples of integrating App Configuration with your [CI/CD pipeline](./integrate-ci-cd-pipeline.md) or [Kubernetes deployment](./integrate-kubernetes-deployment-helm.md). +* **Use App Configuration providers.** Applications play a critical part in achieving high resiliency because they can account for issues arising during their runtime, such as networking problems, and respond to failures more quickly. The App Configuration providers offer a range of built-in resiliency features, including automatic replica discovery, replica failover, startup retries with customizable timeouts, configuration caching, and adaptive strategies for reliable configuration refresh. It's highly recommended that you use App Configuration providers to benefit from these features. If that's not an option, you should consider implementing similar features in your custom solution to achieve the highest level of resiliency. + ## Client applications in App Configuration When you use App Configuration in client applications, ensure that you consider two major factors. First, if you're using the connection string in a client application, you risk exposing the access key of your App Configuration store to the public. Second, the typical scale of a client application might cause excessive requests to your App Configuration store, which can result in overage charges or throttling. For more information about throttling, see the [FAQ](./faq.yml#are-there-any-limits-on-the-number-of-requests-made-to-app-configuration). |
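As a sketch of two of the resiliency measures above (store, replica, region, and file names are hypothetical), geo-replication can be enabled and configuration can be exported at build or release time with the Azure CLI:

```azurecli
# Add a replica in a second region for geo-replication (names and region are hypothetical).
az appconfig replica create --store-name myAppConfigStore --name myreplica --location westus

# Export key-values to a JSON file so a copy of the configuration ships with the application.
az appconfig kv export --name myAppConfigStore --destination file --path ./appsettings.json --format json --yes
```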
azure-app-configuration | Use Feature Flags Dotnet Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-dotnet-core.md | In this tutorial, you will learn how to: ## Set up feature management -To access the .NET feature manager, your app must have references to the `Microsoft.FeatureManagement.AspNetCore` NuGet package. +To access the .NET feature manager, your app must have references to the `Microsoft.Azure.AppConfiguration.AspNetCore` and `Microsoft.FeatureManagement.AspNetCore` NuGet packages. The .NET feature manager is configured from the framework's native configuration system. As a result, you can define your application's feature flag settings by using any configuration source that .NET supports, including the local `appsettings.json` file or environment variables. |
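For reference, a minimal sketch of adding the two NuGet packages named above with the .NET CLI:

```bash
# Add the App Configuration and Feature Management packages referenced in the article.
dotnet add package Microsoft.Azure.AppConfiguration.AspNetCore
dotnet add package Microsoft.FeatureManagement.AspNetCore
```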
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md | -## February 12, 2024 +## February 13, 2024 -**Image tag**:`v1.27.0_2023-02-13` +**Image tag**:`v1.27.0_2024-02-13` -For complete release version information, review [Version log](version-log.md#february-12-2024). +For complete release version information, review [Version log](version-log.md#february-13-2024). ## December 12, 2023 |
azure-arc | Version Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md | -## February 12, 2024 +## February 13, 2024 |Component|Value| |--|--|-|Container images tag |`v1.27.0_2023-02-13`| +|Container images tag |`v1.27.0_2024-02-13`| |**CRD names and version:**| | |`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| |`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5| |
azure-arc | Network Requirements Consolidated | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md | Title: Azure Arc network requirements description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols. Previously updated : 01/10/2024 Last updated : 02/15/2024 Connectivity to the Arc Kubernetes-based endpoints is required for all Kubernete - Azure Arc-enabled App services - Azure Arc-enabled Machine Learning - Azure Arc-enabled data services (direct connectivity mode only)+- Azure Arc resource bridge [!INCLUDE [network-requirements](kubernetes/includes/network-requirements.md)] For more information, see [Connectivity modes and requirements](data/connectivit Connectivity to Arc-enabled server endpoints is required for: - SQL Server enabled by Azure Arc-- Azure Arc-enabled VMware vSphere (preview) <sup>*</sup>-- Azure Arc-enabled System Center Virtual Machine Manager (preview) <sup>*</sup>-- Azure Arc-enabled Azure Stack (HCI) (preview) <sup>*</sup>+- Azure Arc-enabled VMware vSphere <sup>*</sup> +- Azure Arc-enabled System Center Virtual Machine Manager <sup>*</sup> +- Azure Arc-enabled Azure Stack (HCI) <sup>*</sup> <sup>*</sup>Only required for guest management enabled. For more information, see [Connected Machine agent network requirements](servers ## Azure Arc resource bridge -This section describes additional networking requirements specific to deploying Azure Arc resource bridge in your enterprise. These requirements also apply to Azure Arc-enabled VMware vSphere (preview) and Azure Arc-enabled System Center Virtual Machine Manager (preview). +This section describes additional networking requirements specific to deploying Azure Arc resource bridge in your enterprise. These requirements also apply to Azure Arc-enabled VMware vSphere and Azure Arc-enabled System Center Virtual Machine Manager. [!INCLUDE [network-requirements](resource-bridge/includes/network-requirements.md)] Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) also requires: | | | | | | | SCVMM management Server | 443 | URL of the SCVMM management server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. | -For more information, see [Overview of Arc-enabled System Center Virtual Machine Manager (preview)](system-center-virtual-machine-manager/overview.md). +For more information, see [Overview of Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md). ## Azure Arc-enabled VMware vSphere |
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md | Title: Azure Arc resource bridge network requirements description: Learn about network requirements for Azure Arc resource bridge including URLs that must be allowlisted. Previously updated : 11/03/2023 Last updated : 02/15/2024 # Azure Arc resource bridge network requirements Arc resource bridge communicates outbound securely to Azure Arc over TCP port 44 [!INCLUDE [network-requirements](includes/network-requirements.md)] -## Additional network requirements +In addition, Arc resource bridge requires connectivity to the Arc-enabled Kubernetes endpoints shown here. -In addition, Arc resource bridge requires connectivity to the [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md?tabs=azure-cloud). > [!NOTE] > The URLs listed here are required for Arc resource bridge only. Other Arc products (such as Arc-enabled VMware vSphere) may have additional required URLs. For details, see [Azure Arc network requirements](../network-requirements-consolidated.md). |
azure-arc | Agent Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md | This page is updated monthly, so revisit it regularly. If you're looking for ite ## Version 1.38 - February 2024 -Download for [Windows](https://download.microsoft.com/download/e/#installing-a-specific-version-of-the-agent) +Download for [Windows](https://download.microsoft.com/download/0/9/8/0981cd23-37aa-4cb3-8965-368586ab9fd8/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent) ++### Known issues ++Windows machines that try to upgrade to version 1.38 via Microsoft Update and encounter an error might fail to roll back to the previously installed version. As a result, the machine will appear "Disconnected" and won't be manageable from Azure. The update has been removed from the Microsoft Update Catalog while Microsoft investigates this behavior. Manual installations of the agent on new and existing machines aren't affected. ++If your machine was affected by this issue, you can repair the agent by downloading and installing the agent again. The agent will automatically discover the existing configuration and restore connectivity with Azure. You don't need to run `azcmagent connect`. ### New features |
azure-cache-for-redis | Create Manage Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/scripts/create-manage-cache.md | Title: Create, query, and delete an Azure Cache for Redis - Azure CLI description: This Azure CLI code sample shows how to create an Azure Cache for Redis instance using the command az redis create. It then gets details of an Azure Cache for Redis instance, including provisioning status, the hostname, ports, and keys for an Azure Cache for Redis instance. Finally, it deletes the cache. -tags: azure-service-management ms.devlang: azurecli |
azure-cache-for-redis | Create Manage Premium Cache Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/scripts/create-manage-premium-cache-cluster.md | Title: Create, query, and delete a Premium Azure Cache for Redis with clustering description: This Azure CLI code sample shows how to create a 6 GB Premium tier Azure Cache for Redis with clustering enabled and two shards. It then gets details of an Azure Cache for Redis instance, including provisioning status, the hostname, ports, and keys for an Azure Cache for Redis instance. Finally, it deletes the cache. -tags: azure-service-management ms.devlang: azurecli |
azure-functions | Configure Networking How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md | Title: How to configure Azure Functions with a virtual network -description: Article that shows you how to perform certain virtual networking tasks for Azure Functions. + Title: How to use a secured storage account with Azure Functions +description: Article that shows you how to use a secured storage account in a virtual network as the default storage account for a function app in Azure Functions. Previously updated : 06/23/2023 Last updated : 01/31/2024 -# How to configure Azure Functions with a virtual network +# How to use a secured storage account with Azure Functions -This article shows you how to perform tasks related to configuring your function app to connect to and run on a virtual network. For an in-depth tutorial on how to secure your storage account, refer to the [Connect to a Virtual Network tutorial](functions-create-vnet.md). To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md). +This article shows you how to connect your function app to a secured storage account. For an in-depth tutorial on how to create your function app with inbound and outbound access restrictions, refer to the [Integrate with a virtual network](functions-create-vnet.md) tutorial. To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md). ## Restrict your storage account to a virtual network -When you create a function app, you either create a new storage account or link to an existing storage account. During function app creation, you can secure a new storage account behind a virtual network and integrate the function app with this network. At this time, you can't secure an existing storage account being used by your function app in the same way. +When you create a function app, you either create a new storage account or link to an existing one. Currently, only [ARM template and Bicep deployments](functions-infrastructure-as-code.md#secured-deployments) support function app creation with an existing secured storage account. > [!NOTE] > Securing your storage account is supported for all tiers in both Dedicated (App Service) and Elastic Premium plans. Consumption plans currently don't support virtual networks. For a list of all restrictions on storage accounts, see [Storage account requirements](storage-considerations.md#storage-account-requirements). -### During function app creation +## Secure storage during function app creation -You can create a new function app along with a new storage account secured behind a virtual network. The following links show you how to create these resources by using either the Azure portal or by using deployment templates: +You can create a function app along with a new storage account secured behind a virtual network that is accessible via private endpoints. The following links show you how to create these resources by using either the Azure portal or by using deployment templates: -# [Azure portal](#tab/portal) +### [Azure portal](#tab/portal) Complete the following tutorial to create a new function app and a secured storage account: [Use private endpoints to integrate Azure Functions with a virtual network](functions-create-vnet.md). 
-# [Deployment templates](#tab/templates) +### [Deployment templates](#tab/templates) Use Bicep files or Azure Resource Manager (ARM) templates to create a secured function app and storage account resources. When you create a secured storage account in an automated deployment, you must also specifically set the `WEBSITE_CONTENTSHARE` setting and create the file share as part of your deployment. For more information, including links to example deployments, see [Secured deployments](functions-infrastructure-as-code.md#secured-deployments). -### Existing function app +## Secure storage for an existing function app -When you have an existing function app, you can't directly secure the storage account currently being used by the app. You must instead swap-out the existing storage account for a new, secured storage account. +When you have an existing function app, you can't directly secure the storage account currently being used by the app. You must instead swap-out the existing storage account for a new, secured storage account. -To secure the storage for an existing function app: +### 1. Enable virtual network integration ++As a prerequisite, you need to enable virtual network integration for your function app. 1. Choose a function app with a storage account that doesn't have service endpoints or private endpoints enabled. 1. [Enable virtual network integration](./functions-networking-options.md#enable-virtual-network-integration) for your function app. -1. Create or configure a second storage account. This is going to be the secured storage account that your function app uses instead. +### 2. Create a secured storage account ++Set up a secured storage account for your function app: ++1. [Create a second storage account](../storage/common/storage-account-create.md). This is going to be the secured storage account that your function app will use instead. You can also use an existing storage account not already being used by Functions. ++1. Copy the connection string for this storage account. You need this string for later. -1. [Create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) in the new storage account. +1. [Create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) in the new storage account. Try to use the same name as the file share in the existing storage account. Otherwise, you'll need to copy the name of the new file share to configure an app setting later. 1. Secure the new storage account in one of the following ways: - * [Create a private endpoint](../storage/common/storage-private-endpoints.md#creating-a-private-endpoint). When using private endpoint connections, the storage account must have private endpoints for the `file` and `blob` subresources. For Durable Functions, you must also make `queue` and `table` subresources accessible through private endpoints. + * [Create a private endpoint](../storage/common/storage-private-endpoints.md#creating-a-private-endpoint). When you set up private endpoint connections, create private endpoints for the `file` and `blob` subresources. For Durable Functions, you must also make `queue` and `table` subresources accessible through private endpoints. If you're using a custom or on-premises DNS server, make sure you [configure your DNS server](../storage/common/storage-private-endpoints.md#dns-changes-for-private-endpoints) to resolve to the new private endpoints. 
++ * [Restrict traffic to specific subnets](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network). Ensure that one of the allowed subnets is the one your function app is network integrated with. Double check that the subnet has a service endpoint to Microsoft.Storage. ++1. Copy the file and blob content from the current storage account used by the function app to the newly secured storage account and file share. [AzCopy](../storage/common/storage-use-azcopy-blobs-copy.md) and [Azure Storage Explorer](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-tips-and-tricks-how-to-move-azure-storage-blobs-between/ba-p/3545304) are common methods. If you use Azure Storage Explorer, you may need to allow your client IP address into your storage account's firewall. ++Now you're ready to configure your function app to communicate with the newly secured storage account. ++### 3. Enable application and configuration routing ++You should now route your function app's traffic to go through the virtual network. ++1. Enable [application routing](../app-service/overview-vnet-integration.md#application-routing) to route your app's traffic into the virtual network. ++ * Navigate to the **Networking** tab of your function app. Under **Outbound traffic configuration**, select the subnet associated with your virtual network integration. ++ * In the new page, check the box for **Outbound internet traffic** under **Application routing**. ++1. Enable [content share routing](../app-service/overview-vnet-integration.md#content-share) to have your function app communicate with your new storage account through its virtual network. - * [Enable a service endpoint from the virtual network](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network). When using service endpoints, enable the subnet dedicated to your function apps for storage accounts on the firewall. + * In the same page, check the box for **Content storage** under **Configuration routing**. -1. Copy the file and blob content from the current storage account used by the function app to the newly secured storage account and file share. +### 4. Update application settings -1. Copy the connection string for this storage account. +Finally, you need to update your application settings to point at the new secure storage account. -1. Update the **Application Settings** under **Configuration** for the function app to the following: +1. Update the **Application Settings** under the **Configuration** tab of your function app to the following: | Setting name | Value | Comment | |-|-|-|- | `AzureWebJobsStorage`| Storage connection string | This is the connection string for a secured storage account. | - | `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | Storage connection string | This is the connection string for a secured storage account. This setting is required for Consumption and Elastic Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. | - | `WEBSITE_CONTENTSHARE` | File share | The name of the file share created in the secured storage account where the project deployment files reside. This setting is required for Consumption and Elastic Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. 
| - | `WEBSITE_CONTENTOVERVNET` | 1 | A value of 1 enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. | + | [`AzureWebJobsStorage`](./functions-app-settings.md#azurewebjobsstorage)<br>[`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](./functions-app-settings.md#website_contentazurefileconnectionstring) | Storage connection string | Both settings contain the connection string for the new secured storage account, which you saved earlier. | + | [`WEBSITE_CONTENTSHARE`](./functions-app-settings.md#website_contentshare) | File share | The name of the file share created in the secured storage account where the project deployment files reside. | 1. Select **Save** to save the application settings. Changing app settings causes the app to restart. |
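The same application settings listed in the table above can also be applied with the Azure CLI. This is a sketch with hypothetical resource names and placeholder values:

```azurecli
# Placeholder values; use the connection string of the secured storage account and the file share you created.
az functionapp config appsettings set \
    --resource-group myResourceGroup \
    --name myFunctionApp \
    --settings AzureWebJobsStorage="<secured-storage-connection-string>" \
               WEBSITE_CONTENTAZUREFILECONNECTIONSTRING="<secured-storage-connection-string>" \
               WEBSITE_CONTENTSHARE="<file-share-name>"
```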
azure-functions | Azfd0011 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0011.md | + + Title: "AZFD0011: The FUNCTIONS_WORKER_RUNTIME setting is required" ++description: "Learn how to troubleshoot the event 'AZFD0011: The FUNCTIONS_WORKER_RUNTIME setting is required' in Azure Functions." + Last updated : 01/24/2024++++# AZFD0011: The FUNCTIONS_WORKER_RUNTIME setting is required ++This event occurs when a function app doesn't have the `FUNCTIONS_WORKER_RUNTIME` application setting, which is required. ++| | Value | +|-|-| +| **Event ID** |AZFD0011| +| **Severity** |Warning| ++## Event description ++The `FUNCTIONS_WORKER_RUNTIME` application setting indicates the language or language stack on which the function app runs, such as `python`. For more information on valid values, see the [`FUNCTIONS_WORKER_RUNTIME`](../../functions-app-settings.md#functions_worker_runtime) reference. ++While not currently required, you should always specify `FUNCTIONS_WORKER_RUNTIME` for your function apps. When you don't have this setting and the Functions host can't determine the correct language or language stack, you might see exceptions or unexpected behaviors. ++Because `FUNCTIONS_WORKER_RUNTIME` is likely to become a required setting, you should explicitly set it in all of your existing function apps and deployment scripts to prevent any downtime in the future. ++## How to resolve the event ++In a production application, add `FUNCTIONS_WORKER_RUNTIME` to the [application settings](../../functions-how-to-use-azure-function-app-settings.md#settings). ++When running locally in Azure Functions Core Tools, also add `FUNCTIONS_WORKER_RUNTIME` to the [local.settings.json file](../../functions-develop-local.md#local-settings-file). ++## When to suppress the event ++This event shouldn't be suppressed. |
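As a sketch of the resolution step described above (resource names are hypothetical, and `python` stands in for whichever worker runtime your app actually uses), the setting can be added with the Azure CLI:

```azurecli
# Set the worker runtime explicitly; replace "python" with your app's language stack.
az functionapp config appsettings set \
    --resource-group myResourceGroup \
    --name myFunctionApp \
    --settings FUNCTIONS_WORKER_RUNTIME=python
```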
azure-functions | Functions Dotnet Dependency Injection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-dependency-injection.md | This example uses the [Microsoft.Extensions.Http](https://www.nuget.org/packages A series of registration steps run before and after the runtime processes the startup class. Therefore, keep in mind the following items: -- *The startup class is meant for only setup and registration.* Avoid using services registered at startup during the startup process. For instance, don't try to log a message in a logger that is being registered during startup. This point of the registration process is too early for your services to be available for use. After the `Configure` method is run, the Functions runtime continues to register additional dependencies, which can affect how your services operate.+- *The startup class is meant for only setup and registration.* Avoid using services registered at startup during the startup process. For instance, don't try to log a message in a logger that is being registered during startup. This point of the registration process is too early for your services to be available for use. After the `Configure` method is run, the Functions runtime continues to register other dependencies, which can affect how your services operate. -- *The dependency injection container only holds explicitly registered types*. The only services available as injectable types are what are setup in the `Configure` method. As a result, Functions-specific types like `BindingContext` and `ExecutionContext` aren't available during setup or as injectable types.+- *The dependency injection container only holds explicitly registered types*. The only services available as injectable types are what are set up in the `Configure` method. As a result, Functions-specific types like `BindingContext` and `ExecutionContext` aren't available during setup or as injectable types. ++- *Configuring ASP.NET authentication isn't supported*. The Functions host configures ASP.NET authentication services to properly expose APIs for core lifecycle operations. Other configurations in a custom `Startup` class can override this configuration, causing unintended consequences. For example, calling `builder.Services.AddAuthentication()` can break authentication between the portal and the host, leading to messages such as [Azure Functions runtime is unreachable](./functions-recover-storage-account.md#aspnet-authentication-overrides). ## Use injected dependencies -Constructor injection is used to make your dependencies available in a function. The use of constructor injection requires that you do not use static classes for injected services or for your function classes. +Constructor injection is used to make your dependencies available in a function. The use of constructor injection requires that you don't use static classes for injected services or for your function classes. The following sample demonstrates how the `IMyService` and `HttpClient` dependencies are injected into an HTTP-triggered function. Application Insights is added by Azure Functions automatically. ### ILogger\<T\> and ILoggerFactory -The host injects `ILogger<T>` and `ILoggerFactory` services into constructors. However, by default these new logging filters are filtered out of the function logs. You need to modify the `host.json` file to opt-in to additional filters and categories. +The host injects `ILogger<T>` and `ILoggerFactory` services into constructors. 
However, by default these new logging filters are filtered out of the function logs. You need to modify the `host.json` file to opt in to extra filters and categories. The following example demonstrates how to add an `ILogger<HttpTrigger>` with logs that are exposed to the host. Overriding services provided by the host is currently not supported. If there a Values defined in [app settings](./functions-how-to-use-azure-function-app-settings.md#settings) are available in an `IConfiguration` instance, which allows you to read app settings values in the startup class. -You can extract values from the `IConfiguration` instance into a custom type. Copying the app settings values to a custom type makes it easy test your services by making these values injectable. Settings read into the configuration instance must be simple key/value pairs. Please note that, the functions running on Elastic Premium SKU has this constraint "App setting names can only contain letters, numbers (0-9), periods ("."), colons (":") and underscores ("_")" +You can extract values from the `IConfiguration` instance into a custom type. Copying the app settings values to a custom type makes it easy to test your services by making these values injectable. Settings read into the configuration instance must be simple key/value pairs. For functions running in an Elastic Premium plan, application setting names can only contain letters, numbers (`0-9`), periods (`.`), colons (`:`) and underscores (`_`). For more information, see [App setting considerations](functions-app-settings.md#app-setting-considerations). Consider the following class that includes a property named consistent with an app setting: public class HttpTrigger } ``` -Refer to [Options pattern in ASP.NET Core](/aspnet/core/fundamentals/configuration/options) for more details regarding working with options. +For more information, see [Options pattern in ASP.NET Core](/aspnet/core/fundamentals/configuration/options). ## Using ASP.NET Core user secrets -When developing locally, ASP.NET Core provides a [Secret Manager tool](/aspnet/core/security/app-secrets#secret-manager) that allows you to store secret information outside the project root. It makes it less likely that secrets are accidentally committed to source control. Azure Functions Core Tools (version 3.0.3233 or later) automatically reads secrets created by the ASP.NET Core Secret Manager. +When you develop your app locally, ASP.NET Core provides a [Secret Manager tool](/aspnet/core/security/app-secrets#secret-manager) that allows you to store secret information outside the project root. It makes it less likely that secrets are accidentally committed to source control. Azure Functions Core Tools (version 3.0.3233 or later) automatically reads secrets created by the ASP.NET Core Secret Manager. To configure a .NET Azure Functions project to use user secrets, run the following command in the project root. To access user secrets values in your function app code, use `IConfiguration` or ## Customizing configuration sources -To specify additional configuration sources, override the `ConfigureAppConfiguration` method in your function app's `StartUp` class. +To specify other configuration sources, override the `ConfigureAppConfiguration` method in your function app's `StartUp` class. -The following sample adds configuration values from a base and an optional environment-specific app settings files. +The following sample adds configuration values from both base and optional environment-specific app settings files. 
```csharp using System.IO; Add configuration providers to the `ConfigurationBuilder` property of `IFunction A `FunctionsHostBuilderContext` is obtained from `IFunctionsConfigurationBuilder.GetContext()`. Use this context to retrieve the current environment name and resolve the location of configuration files in your function app folder. -By default, configuration files such as *appsettings.json* are not automatically copied to the function app's output folder. Update your *.csproj* file to match the following sample to ensure the files are copied. +By default, configuration files such as `appsettings.json` aren't automatically copied to the function app's output folder. Update your `.csproj` file to match the following sample to ensure the files are copied. ```xml <None Update="appsettings.json"> |
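As a rough sketch of the registration flow covered in the dependency injection article above, an in-process startup class that binds app settings to a custom type and registers `HttpClient` could look like this; the `MyOptions` type and the `MySettings` configuration section are illustrative assumptions, not names from the article:

```csharp
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    // Hypothetical options type; its properties map to app settings such as "MySettings:Greeting".
    public class MyOptions
    {
        public string Greeting { get; set; }
    }

    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Bind a configuration section to the custom type so it can be injected as IOptions<MyOptions>.
            builder.Services.AddOptions<MyOptions>()
                .Configure<IConfiguration>((settings, configuration) =>
                {
                    configuration.GetSection("MySettings").Bind(settings);
                });

            // Register HttpClient through IHttpClientFactory rather than creating instances directly.
            builder.Services.AddHttpClient();
        }
    }
}
```

A non-static function class can then accept `IOptions<MyOptions>`, `HttpClient`, or `ILogger<T>` as constructor parameters, which is the constructor injection pattern the article describes.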
azure-functions | Functions Infrastructure As Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md | Keep the following considerations in mind when working with slot deployments: :::zone pivot="premium-plan,dedicated-plan" ## Secured deployments -You can create your function app in a deployment where one or more of the resources have been secured by integrating with virtual networks. Virtual network integration for your function app is defined by a `Microsoft.Web/sites/networkConfig` resource. This integration depends on both the referenced function app and virtual network resources. You function app might also depend on other private networking resources, such as private endpoints and routes. For more information, see [Azure Functions networking options](functions-networking-options.md). +You can create your function app in a deployment where one or more of the resources have been secured by integrating with virtual networks. Virtual network integration for your function app is defined by a `Microsoft.Web/sites/networkConfig` resource. This integration depends on both the referenced function app and virtual network resources. Your function app might also depend on other private networking resources, such as private endpoints and routes. For more information, see [Azure Functions networking options](functions-networking-options.md). When creating a deployment that uses a secured storage account, you must both explicitly set the `WEBSITE_CONTENTSHARE` setting and create the file share resource named in this setting. Make sure you create a `Microsoft.Storage/storageAccounts/fileServices/shares` resource using the value of `WEBSITE_CONTENTSHARE`, as shown in this example ([ARM template](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-private-endpoints-storage-private-endpoints/azuredeploy.json#L467)|[Bicep file](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-private-endpoints-storage-private-endpoints/main.bicep#L351)). |
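To make the dependency concrete, a hedged ARM template fragment for that share might look like the following; `storageAccountName` and `contentShareName` are assumed template parameters, with `contentShareName` carrying the same value used for the `WEBSITE_CONTENTSHARE` app setting.

```json
{
  "type": "Microsoft.Storage/storageAccounts/fileServices/shares",
  "apiVersion": "2022-09-01",
  "name": "[format('{0}/default/{1}', parameters('storageAccountName'), parameters('contentShareName'))]",
  "dependsOn": [
    "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
  ]
}
```

Deploying the share alongside the site resource avoids the startup failures that occur when the setting points at a share that doesn't exist.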
azure-functions | Functions Recover Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-recover-storage-account.md | For function apps that run on Linux in a container, the `Azure Functions runtime 1. Check for any logged errors that indicate that the container is unable to start successfully. -### Container image unavailable +## Container image unavailable Errors can occur when the container image being referenced is unavailable or fails to start correctly. Check for any logged errors that indicate that the container is unable to start successfully. You need to correct any errors that prevent the container from starting for the function app to run correctly. -When the container image can't be found, you'll see a `manifest unknown` error in the Docker logs. In this case, you can use the Azure CLI commands documented at [How to target Azure Functions runtime versions](set-runtime-version.md?tabs=azurecli#manual-version-updates-on-linux) to change the container image being referenced. If you've deployed a [custom container image](./functions-how-to-custom-container.md), you need to fix the image and redeploy the updated version to the referenced registry. +When the container image can't be found, you see a `manifest unknown` error in the Docker logs. In this case, you can use the Azure CLI commands documented at [How to target Azure Functions runtime versions](set-runtime-version.md?tabs=azurecli#manual-version-updates-on-linux) to change the container image being referenced. If you've deployed a [custom container image](./functions-how-to-custom-container.md), you need to fix the image and redeploy the updated version to the referenced registry. -### App container has conflicting ports +## App container has conflicting ports Your function app might be in an unresponsive state due to conflicting port assignment upon startup. This can happen in the following cases: Starting with version 3.x of the Functions runtime, [host ID collision](storage- ## Read-only app settings -Changing any _read-only_ [App Service application settings](../app-service/reference-app-settings.md#app-environment) can put your function app into an unreachable state. +Changing any _read-only_ [App Service application settings](../app-service/reference-app-settings.md#app-environment) can put your function app into an unreachable state. ++## ASP.NET authentication overrides +_Applies only to C# apps running [in-process with the Functions host](./functions-dotnet-class-library.md)._ ++Configuring ASP.NET authentication in a Functions startup class can override services that are required for the Azure portal to communicate with the host. This includes, but isn't limited to, any calls to `AddAuthentication()`. If the host's authentication services are overridden and the portal can't communicate with the host, it considers the app unreachable. This issue may result in errors such as: `No authentication handler is registered for the scheme 'ArmToken'.`. ## Next steps |
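To make the ASP.NET authentication override described above easier to recognize, here's a minimal sketch (not taken from the article) of the kind of in-process startup code that can trigger it; it assumes the ASP.NET Core authentication packages are referenced by the project.

```csharp
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Avoid this in a Functions startup class: it replaces the authentication
            // configuration the Functions host registers for portal-to-host communication.
            builder.Services.AddAuthentication();
        }
    }
}
```

Removing calls like this from the startup class is the usual way to restore portal connectivity.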
azure-monitor | Agent Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md | +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. This article provides details on installing the Log Analytics agent on Linux computers hosted in other clouds or on-premises. [!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)] |
azure-monitor | Agents Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md | Last updated 7/19/2023 -# Customer intent: As an IT manager, I want to understand the capabilities of Azure Monitor Agent to determine whether I can use the agent to collect the data I need from the operating systems of my virtual machines. +# Customer intent: As an IT manager, I want to understand the capabilities of Azure Monitor Agent to determine whether I can use the agent to collect the data I need from the operating systems of my virtual machines. + # Azure Monitor Agent overview +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. + Azure Monitor Agent (AMA) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor for use by features, insights, and other services, such as [Microsoft Sentinel](../../sentintel/../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). Azure Monitor Agent replaces all of Azure Monitor's legacy monitoring agents. This article provides an overview of Azure Monitor Agent's capabilities and supported use cases. -Here's a short **introduction to Azure Monitor agent video**, which includes a quick demo of how to set up the agent from the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs) +Here's a short **introduction to Azure Monitor agent video**, which includes a quick demo of how to set up the agent from the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs) ## Benefits-Using Azure Monitor agent, you get immediate benefits as shown below: +Using Azure Monitor agent, you get immediate benefits as shown below: :::image type="content" source="media/azure-monitor-agent-overview/azure-monitor-agent-benefits.png" lightbox="media/azure-monitor-agent-overview/azure-monitor-agent-benefits.png" alt-text="Snippet of the Azure Monitor Agent benefits at a glance. This is described in more details below."::: - **Cost savings** by [using data collection rules](data-collection-rule-azure-monitor-agent.md): - Enables targeted and granular data collection for a machine or subset(s) of machines, as compared to the "all or nothing" approach of legacy agents.- - Allows filtering rules and data transformations to reduce the overall data volume being uploaded, thus lowering ingestion and storage costs significantly. + - Allows filtering rules and data transformations to reduce the overall data volume being uploaded, thus lowering ingestion and storage costs significantly. - **Simpler management** including efficient troubleshooting: - Supports data uploads to multiple destinations (multiple Log Analytics workspaces, i.e. *multihoming* on Windows and Linux) including cross-region and cross-tenant data collection (using Azure LightHouse).- - Centralized agent configuration "in the cloud" for enterprise scale throughout the data collection lifecycle, from onboarding to deployment to updates and changes over time. + - Centralized agent configuration "in the cloud" for enterprise scale throughout the data collection lifecycle, from onboarding to deployment to updates and changes over time. - Any change in configuration is rolled out to all agents automatically, without requiring a client side deployment. 
- Greater transparency and control of more capabilities and services, such as Microsoft Sentinel, Defender for Cloud, and VM Insights. - **Security and Performance** Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r | On-premises servers (Azure Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing the [Azure Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent by using Azure extension framework, provided for on-premises by first installing [Azure Arc agent](../../azure-arc/servers/deployment-options.md). | | Windows 10, 11 desktops, workstations | [Client installer](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. | | Windows 10, 11 laptops | [Client installer](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. The installer works on laptops, but the agent *isn't optimized yet* for battery or network consumption. |- + 1. Define a data collection rule and associate the resource to the rule. The table below lists the types of data you can currently collect with the Azure Monitor Agent and where you can send that data. The tables below provide a comparison of Azure Monitor Agent with the legacy the ## Supported operating systems -The following tables list the operating systems that Azure Monitor Agent and the legacy agents support. All operating systems are assumed to be x64. x86 isn't supported for any operating system. +The following tables list the operating systems that Azure Monitor Agent and the legacy agents support. All operating systems are assumed to be x64. x86 isn't supported for any operating system. View [supported operating systems for Azure Arc Connected Machine agent](../../azure-arc/servers/prerequisites.md#supported-operating-systems), which is a prerequisite to run Azure Monitor agent on physical servers and virtual machines hosted outside of Azure (that is, on-premises) or in other clouds. ### Windows -| Operating system | Azure Monitor agent | Log Analytics agent (legacy) | Diagnostics extension | +| Operating system | Azure Monitor agent | Log Analytics agent (legacy) | Diagnostics extension | |:|::|::|::| | Windows Server 2022 | ✓ | ✓ | | | Windows Server 2022 Core | ✓ | | | View [supported operating systems for Azure Arc Connected Machine agent](../../a | Windows 11 Client and Pro | ✓<sup>2</sup>, <sup>3</sup> | | | | Windows 11 Enterprise<br>(including multi-session) | ✓ | | | | Windows 10 1803 (RS4) and higher | ✓<sup>2</sup> | | |-| Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only) | ✓ | ✓ | ✓ | +| Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only) | ✓ | ✓ | ✓ | | Windows 8 Enterprise and Pro<br>(Server scenarios only) | | ✓<sup>1</sup> | | | Windows 7 SP1<br>(Server scenarios only) | | ✓<sup>1</sup> | | | Azure Stack HCI | ✓ | ✓ | | An agent is only required to collect data from the operating system and workload ### How can I be notified when data collection from the Log Analytics agent stops? -Use the steps described in [Create a new log alert](../alerts/alerts-metric.md) to be notified when data collection stops. Use the following settings for the alert rule: - +Use the steps described in [Create a new log search alert](../alerts/alerts-metric.md) to be notified when data collection stops. 
Use the following settings for the alert rule: + - **Define alert condition**: Specify your Log Analytics workspace as the resource target. - **Alert criteria**: - **Signal Name**: *Custom log search*. Use the steps described in [Create a new log alert](../alerts/alerts-metric.md) - **Define alert details**: - **Name**: *Data collection stopped*. - **Severity**: *Warning*.- -Specify an existing or new [action group](../alerts/action-groups.md) so that when the log alert matches criteria, you're notified if you have a heartbeat missing for more than 15 minutes. - ++Specify an existing or new [action group](../alerts/action-groups.md) so that when the log search alert matches criteria, you're notified if you have a heartbeat missing for more than 15 minutes. + ### Will Azure Monitor Agent support data collection for the various Log Analytics solutions and Azure services like Microsoft Defender for Cloud and Microsoft Sentinel? -Review the list of [Azure Monitor Agent extensions currently available in preview](#supported-services-and-features). These extensions are the same solutions and services now available by using the new Azure Monitor Agent instead. +Review the list of [Azure Monitor Agent extensions currently available in preview](#supported-services-and-features). These extensions are the same solutions and services now available by using the new Azure Monitor Agent instead. ++You might see more extensions getting installed for the solution or service to collect extra data or perform transformation or processing as required for the solution or service. Then use Azure Monitor Agent to route the final data to Azure Monitor. -You might see more extensions getting installed for the solution or service to collect extra data or perform transformation or processing as required for the solution or service. Then use Azure Monitor Agent to route the final data to Azure Monitor. - The following diagram explains the new extensibility architecture.- + :::image type="content" source="./media/azure-monitor-agent/extensibility-arch-new.png" lightbox="./media/azure-monitor-agent/extensibility-arch-new.png" alt-text="Diagram that shows extensions architecture."::: ### Is Azure Monitor Agent at parity with the Log Analytics agents? |
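Returning to the "data collection stopped" alert described above, one possible query for the *Custom log search* signal is sketched below; the 15-minute lookback is an assumption you can tune to your environment.

```kusto
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(15m)
```

With the alert logic set to fire when the number of results is greater than 0, the rule fires for any computer that hasn't reported a heartbeat within the window.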
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. + This article describes the version details for the Azure Monitor agent virtual machine extension. This extension deploys the agent on virtual machines, scale sets, and Arc-enabled servers (on premise servers with Azure Arc agent installed). We strongly recommended to always update to the latest version, or opt in to the [Automatic Extension Update](../../virtual-machines/automatic-extension-upgrade.md) feature. We strongly recommended to always update to the latest version, or opt in to the | Mar 2023 | **Windows** <ul><li>Text file collection improvements to handle high rate logging and continuous tailing of longer lines</li><li>VM Insights fixes for collecting metrics from non-English OS</li></ul> | 1.14.0.0 | None | | Feb 2023 | <ul><li>**Linux (hotfix)** Resolved potential data loss due to "Bad file descriptor" errors seen in the mdsd error log with previous version. Upgrade to hotfix version</li><li>**Windows** Reliability improvements in Fluentbit buffering to handle larger text files</li></ul> | 1.13.1 | 1.25.2<sup>Hotfix</sup> | | Jan 2023 | **Linux** <ul><li>RHEL 9 and Amazon Linux 2 support</li><li>Update to OpenSSL 1.1.1s and require TLS 1.2 or higher</li><li>Performance improvements</li><li>Improvements in Garbage Collection for persisted disk cache and handling corrupted cache files better</li><li>**Fixes** <ul><li>Set agent service memory limit for CentOS/RedHat 7 distros. Resolved MemoryMax parsing error</li><li>Fixed modifying rsyslog system-wide log format caused by installer on RedHat/Centos 7.3</li><li>Fixed permissions to config directory</li><li>Installation reliability improvements</li><li>Fixed permissions on default file so rpm verification doesn't fail</li><li>Added traceFlags setting to enable trace logs for agent</li></ul></li></ul> **Windows** <ul><li>Fixed issue related to incorrect *EventLevel* and *Task* values for Log Analytics *Event* table, to match Windows Event Viewer values</li><li>Added missing columns for IIS logs - *TimeGenerated, Time, Date, Computer, SourceSystem, AMA, W3SVC, SiteName*</li><li>Reliability improvements for metrics collection</li><li>Fixed machine restart issues on for Arc-enabled servers related to repeated calls to HIMDS service</li></ul> | 1.12.0 | 1.25.1 |-| Nov-Dec 2022 | <ul><li>Support for air-gapped clouds added for [Windows MSI installer for clients](./azure-monitor-agent-windows-client.md) </li><li>Reliability improvements for using AMA with Custom Metrics destination</li><li>Performance and internal logging improvements</li></ul> | 1.11.0 | None | -| Oct 2022 | **Windows** <ul><li>Increased reliability of data uploads</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1 MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` 
metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lock down write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 | -| Sep 2022 | Reliability improvements | 1.9.0 | None | +| Nov-Dec 2022 | <ul><li>Support for air-gapped clouds added for [Windows MSI installer for clients](./azure-monitor-agent-windows-client.md) </li><li>Reliability improvements for using AMA with Custom Metrics destination</li><li>Performance and internal logging improvements</li></ul> | 1.11.0 | None | +| Oct 2022 | **Windows** <ul><li>Increased reliability of data uploads</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1 MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lock down write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 | +| Sep 2022 | Reliability improvements | 1.9.0 | None | | August 2022 | **Common updates** <ul><li>Improved resiliency: Default lookback (retry) time updated to last three days (72 hours) up from 60 minutes, for agent to collect data post interruption. Look back time is subject to default offline cache size of 10 Gb</li><li>Fixes the preview custom text log feature that was incorrectly removing the *TimeGenerated* field from the raw data of each event. All events are now additionally stamped with agent (local) upload time</li><li>Reliability and supportability improvements</li></ul> **Windows** <ul><li>Fixed datetime format to UTC</li><li>Fix to use default location for firewall log collection, if not provided</li><li>Reliability and supportability improvements</li></ul> **Linux** <ul><li>Support for OpenSuse 15, Debian 11 ARM64</li><li>Support for coexistence of Azure Monitor agent with legacy Azure Diagnostic extension for Linux (LAD)</li><li>Increased max-size of UDP payload for Telegraf output to prevent dimension truncation</li><li>Prevent unconfigured upload to Azure Monitor Metrics destination</li><li>Fix for disk metrics wherein *instance name* dimension will use the disk mount path(s) instead of the device name(s), to provide parity with legacy agent</li><li>Fixed *disk free MB* metric to report megabytes instead of bytes</li></ul> | 1.8.0 | 1.22.2 | | July 2022 | Fix for mismatch event timestamps for Sentinel Windows Event Forwarding | 1.7.0 | None | | June 2022 | Bug fixes with user assigned identity support, and reliability improvements | 1.6.0 | None | |
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | Before you begin migrating from the Log Analytics agent to Azure Monitor Agent, ### Before you begin > [!div class="checklist"]-> - **Check the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for installing Azure Monitor Agent.**<br>To monitor non-Azure and on-premises servers, you must [install the Azure Arc agent](../../azure-arc/servers/agent-overview.md). You won't incur an additional cost for installing the Azure Arc agent and you don't necessarily need to use Azure Arc to manage your non-Azure virtual machines. +> - **Check the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for installing Azure Monitor Agent.**<br>To monitor non-Azure and on-premises servers, you must [install the Azure Arc agent](../../azure-arc/servers/agent-overview.md). The Arc agent makes your on-premises servers visible to Azure as a resource it can target. You won't incur any additional cost for installing the Azure Arc agent. > - **Understand your current needs.**<br>Use the **Workspace overview** tab of the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to see connected agents and discover solutions enabled on your Log Analytics workspaces that use legacy agents, including per-solution migration recommendations. > - **Verify that Azure Monitor Agent can address all of your needs.**<br>Azure Monitor Agent is generally available for data collection and is used for data collection by various Azure Monitor features and other Azure services. For details, see [Supported services and features](#migrate-additional-services-and-features). > - **Consider installing Azure Monitor Agent together with a legacy agent for a transition period.**<br>Run Azure Monitor Agent alongside the legacy Log Analytics agent on the same machine to continue using existing functionality during evaluation or migration. Keep in mind that running two agents on the same machine doubles resource consumption, including but not limited to CPU, memory, storage space, and network bandwidth.<br> |
azure-monitor | Azure Monitor Agent Troubleshoot Linux Vm Rsyslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md | +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. + Overview of Azure Monitor Agent for Linux Syslog collection and supported RFC standards: - Azure Monitor Agent installs an output configuration for the system Syslog daemon during the installation process. The configuration file specifies the way events flow between the Syslog daemon and Azure Monitor Agent. |
azure-monitor | Data Collection Snmp Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-snmp-data.md | +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. Simple Network Management Protocol (SNMP) is a widely-deployed management protocol for monitoring and configuring Linux devices and appliances. |
azure-monitor | Data Collection Syslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md | +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. + Syslog is an event logging protocol that's common to Linux. You can use the Syslog daemon that's built in to Linux devices and appliances to collect local events of the types you specify. Then you can have it send those events to a Log Analytics workspace. Applications send messages that might be stored on the local machine or delivered to a Syslog collector. When the Azure Monitor agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent when Syslog collection is enabled in [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). Azure Monitor Agent then sends the messages to an Azure Monitor or Log Analytics workspace where a corresponding Syslog record is created in a [Syslog table](/azure/azure-monitor/reference/tables/syslog). |
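As a rough illustration of what enabling Syslog collection in a DCR involves, the rule's `dataSources` section typically carries a `syslog` block similar to this sketch (a fragment of the DCR properties; the facilities and levels shown are placeholders):

```json
"dataSources": {
  "syslog": [
    {
      "name": "syslogDataSource",
      "streams": [ "Microsoft-Syslog" ],
      "facilityNames": [ "auth", "cron", "daemon", "syslog" ],
      "logLevels": [ "Warning", "Error", "Critical", "Alert", "Emergency" ]
    }
  ]
}
```

The selected facilities and log levels control which local messages end up in the `Syslog` table.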
azure-monitor | Data Sources Syslog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-syslog.md | +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. + Syslog is an event logging protocol that's common to Linux. Applications send messages that might be stored on the local machine or delivered to a Syslog collector. When the Log Analytics agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent. The agent then sends the messages to Azure Monitor where a corresponding record is created. [!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)] |
azure-monitor | Troubleshooter Ama Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/troubleshooter-ama-linux.md | +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. + The Azure Monitor Agent Troubleshooter (AMA) is designed to help identify issues with the agent and perform general health assessments. It can perform various checks to ensure that the agent is properly installed and connected, and can also gather AMA-related logs from the machine being diagnosed. > [!Note] If directory doesn't exist or the installation is failed, follow [Basic troubles If the directory exists, proceed to [Run the Troubleshooter](#run-the-troubleshooter). ## Run the Troubleshooter-On the machine to be diagnosed, run the Agent Troubleshooter. +On the machine to be diagnosed, run the Agent Troubleshooter. **Log Mode** enables the collection of logs, which can then be compressed into .tgz format for export or review. **Interactive Mode** allows users to actively engage in troubleshooting scenarios and view the output directly within the shell. To start the Agent Troubleshooter in log mode, copy the following command and ru ```Bash cd /var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-{version}/ama_tst/-sudo sh ama_troubleshooter.sh -L +sudo sh ama_troubleshooter.sh -L ``` Enter a path to output logs to. For instance, you might use **/tmp**. To start the Agent Troubleshooter in interactive mode, copy the following comman ```Bash cd /var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-{version}/ama_tst/-sudo sh ama_troubleshooter.sh -A +sudo sh ama_troubleshooter.sh -A ``` It runs a series of scenarios and displays the results. It runs a series of scenarios and displays the results. ### Can I copy the Troubleshooter from a newer agent to an older agent and run it on the older agent to diagnose issues with the older agent? It isn't possible to use the Troubleshooter to diagnose an older version of the agent by copying it. You must have an up-to-date version of the agent for the Troubleshooter to work properly.- + ## Next Steps - [Troubleshooting guidance for the Azure Monitor agent](../agents/azure-monitor-agent-troubleshoot-linux-vm.md) on Linux virtual machines and scale sets - [Syslog troubleshooting guide for Azure Monitor Agent](../agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) for Linux |
azure-monitor | Alerts Common Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md | -The common alert schema standardizes the consumption of Azure Monitor alert notifications. Historically, activity log, metric, and log alerts each had their own email templates and webhook schemas. The common alert schema provides one standardized schema for all alert notifications. +The common alert schema standardizes the consumption of Azure Monitor alert notifications. Historically, activity log, metric, and log search alerts each had their own email templates and webhook schemas. The common alert schema provides one standardized schema for all alert notifications. Using a standardized schema helps minimize the number of integrations, which simplifies the process of managing and maintaining your integrations. The common schema enables a richer alert consumption experience in both the Azure portal and the Azure mobile app. For sample alerts that use the common schema, see [Sample alert payloads](alerts | signalType | Identifies the signal on which the alert rule was defined. Possible values are Metric, Log, or Activity Log. | | monitorCondition | When an alert fires, the alert's monitor condition is set to **Fired**. When the underlying condition that caused the alert to fire clears, the monitor condition is set to **Resolved**. | | monitoringService | The monitoring service or solution that generated the alert. The monitoring service determines which fields are in the alert context. |-| alertTargetIDs | The list of the Azure Resource Manager IDs that are affected targets of an alert. For a log alert defined on a Log Analytics workspace or Application Insights instance, it's the respective workspace or application. | -| configurationItems |The list of affected resources of an alert.<br>In some cases, the configuration items can be different from the alert targets. For example, in metric-for-log or log alerts defined on a Log Analytics workspace, the configuration items are the actual resources sending the data, and not the workspace.<br><ul><li>In the log alerts API (Scheduled Query Rules) v2021-08-01, the `configurationItem` values are taken from explicitly defined dimensions in this priority: `_ResourceId`, `ResourceId`, `Resource`, `Computer`.</li><li>In earlier versions of the log alerts API, the `configurationItem` values are taken implicitly from the results in this priority: `_ResourceId`, `ResourceId`, `Resource`, `Computer`.</li></ul>In ITSM systems, the `configurationItems` field is used to correlate alerts to resources in a configuration management database. | +| alertTargetIDs | The list of the Azure Resource Manager IDs that are affected targets of an alert. For a log search alert defined on a Log Analytics workspace or Application Insights instance, it's the respective workspace or application. | +| configurationItems |The list of affected resources of an alert.<br>In some cases, the configuration items can be different from the alert targets. 
For example, in metric-for-log or log search alerts defined on a Log Analytics workspace, the configuration items are the actual resources sending the data, and not the workspace.<br><ul><li>In the log search alerts API (Scheduled Query Rules) v2021-08-01, the `configurationItem` values are taken from explicitly defined dimensions in this priority: `_ResourceId`, `ResourceId`, `Resource`, `Computer`.</li><li>In earlier versions of the log search alerts API, the `configurationItem` values are taken implicitly from the results in this priority: `_ResourceId`, `ResourceId`, `Resource`, `Computer`.</li></ul>In ITSM systems, the `configurationItems` field is used to correlate alerts to resources in a configuration management database. | | originAlertId | The ID of the alert instance, as generated by the monitoring service generating it. | | firedDateTime | The date and time when the alert instance was fired in Coordinated Universal Time (UTC). | | resolvedDateTime | The date and time when the monitor condition for the alert instance is set to **Resolved** in UTC. Currently only applicable for metric alerts.| For sample alerts that use the common schema, see [Sample alert payloads](alerts } ``` -## Alert context fields for Log alerts +## Alert context fields for log search alerts > [!NOTE]-> When you enable the common schema, the fields in the payload are reset to the common schema fields. Therefore, log alerts have these limitations regarding the common schema: -> - The common schema is not supported for log alerts using webhooks with a custom email subject and/or JSON payload, since the common schema overwrites the custom configurations. -> - Alerts using the common schema have an upper size limit of 256 KB per alert. If the log alerts payload includes search results that cause the alert to exceed the maximum size, the search results aren't embedded in the log alerts payload. You can check if the payload includes the search results with the `IncludedSearchResults` flag. Use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get) if the search results are not included. +> When you enable the common schema, the fields in the payload are reset to the common schema fields. Therefore, log search alerts have these limitations regarding the common schema: +> - The common schema is not supported for log search alerts using webhooks with a custom email subject and/or JSON payload, since the common schema overwrites the custom configurations. +> - Alerts using the common schema have an upper size limit of 256 KB per alert. If the log search alerts payload includes search results that cause the alert to exceed the maximum size, the search results aren't embedded in the log search alerts payload. You can check if the payload includes the search results with the `IncludedSearchResults` flag. Use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get) if the search results are not included. 
|Field |Description | ||| For sample alerts that use the common schema, see [Sample alert payloads](alerts -### Sample log alert when the monitoringService = Log Analytics +### Sample log search alert when the monitoringService = Log Analytics ```json { For sample alerts that use the common schema, see [Sample alert payloads](alerts } } ```-### Sample log alert when the monitoringService = Application Insights +### Sample log search alert when the monitoringService = Application Insights ```json { For sample alerts that use the common schema, see [Sample alert payloads](alerts } } ```-### Sample log alert when the monitoringService = Log Alerts V2 +### Sample log search alert when the monitoringService = Log Alerts V2 > [!NOTE]-> Log alert rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when you use this version. Use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload. +> Log search alert rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log search alerts payload when you use this version. Use [dimensions](./alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload. ```json { For sample alerts that use the common schema, see [Sample alert payloads](alerts ## Alert context fields for activity log alerts See [Azure activity log event schema](../essentials/activity-log-schema.md) for detailed information about the fields in activity log alerts.+ ### Sample activity log alert when the monitoringService = Activity Log - Administrative ```json |
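For orientation, the overall shape of a common-schema notification is sketched below; every value is a placeholder rather than output from a real alert.

```json
{
  "schemaId": "azureMonitorCommonAlertSchema",
  "data": {
    "essentials": {
      "alertId": "/subscriptions/<subscription-id>/providers/Microsoft.AlertsManagement/alerts/<alert-guid>",
      "alertRule": "<alert-rule-name>",
      "severity": "Sev2",
      "signalType": "Log",
      "monitorCondition": "Fired",
      "monitoringService": "Log Alerts V2",
      "alertTargetIDs": [
        "/subscriptions/<subscription-id>/resourcegroups/<resource-group>/providers/microsoft.operationalinsights/workspaces/<workspace>"
      ],
      "configurationItems": [ "<computer-name>" ],
      "firedDateTime": "2024-02-14T00:00:00.0000000Z"
    },
    "alertContext": {
      "condition": {}
    }
  }
}
```

The `essentials` section is common to every monitoring service; the `alertContext` payload varies by service as described in the field tables above.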
azure-monitor | Alerts Create Log Alert Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-log-alert-rule.md | Title: Create Azure Monitor log alert rules -description: This article shows you how to create a new log alert rule. + Title: Create Azure Monitor log search alert rules +description: This article shows you how to create a new log search alert rule. Last updated 11/27/2023 -# Create or edit a log alert rule +# Create or edit a log search alert rule -This article shows you how to create a new log alert rule or edit an existing log alert rule. To learn more about alerts, see the [alerts overview](alerts-overview.md). +This article shows you how to create a new log search alert rule or edit an existing log search alert rule. To learn more about alerts, see the [alerts overview](alerts-overview.md). You create an alert rule by combining the resources to be monitored, the monitoring data from the resource, and the conditions that you want to trigger the alert. You can then define [action groups](./action-groups.md) and [alert processing rules](alerts-action-rules.md) to determine what happens when an alert is triggered. Alerts triggered by these alert rules contain a payload that uses the [common al 1. On the **Logs** pane, write a query that returns the log events for which you want to create an alert. To use one of the predefined alert rule queries, expand the **Schema and filter** pane on the left of the **Logs** pane. Then select the **Queries** tab, and select one of the queries. > [!NOTE]- > Log alert rule queries do not support the 'bag_unpack()', 'pivot()' and 'narrow()' plugins. + > Log search alert rule queries do not support the 'bag_unpack()', 'pivot()' and 'narrow()' plugins. - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log search alert rule."::: 1. (Optional) If you're querying an ADX or ARG cluster, Log Analytics can't automatically identify the column with the event timestamp, so we recommend that you add a time range filter to the query. For example: Alerts triggered by these alert rules contain a payload that uses the [common al | project _ResourceId=tolower(id), tags ``` - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot that shows the Condition tab when creating a new log alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot that shows the Condition tab when creating a new log search alert rule."::: - For sample log alert queries that query ARG or ADX, see [log alert query samples](./alerts-log-alert-query-samples.md) + For sample log search alert queries that query ARG or ADX, see [Log search alert query samples](./alerts-log-alert-query-samples.md) 1. Select **Run** to run the alert. 1. The **Preview** section shows you the query results. When you're finished editing your query, select **Continue Editing Alert**. 1. The **Condition** tab opens populated with your log query. By default, the rule counts the number of results in the last five minutes. If the system detects summarized query results, the rule is automatically updated with that information. 
1. In the **Measurement** section, select values for these fields: - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot that shows the Measurement tab when creating a new log alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot that shows the Measurement tab when creating a new log search alert rule."::: |Field |Description | |||- |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage. | + |Measure|Log search alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage. | |Aggregation type| The calculation performed on multiple records to aggregate them to one numeric value by using the aggregation granularity. Examples are Total, Average, Minimum, or Maximum. | |Aggregation granularity| The interval for aggregating multiple records to one numeric value.| 1. <a name="dimensions"></a>(Optional) In the **Split by dimensions** section, you can use dimensions to help provide context for the triggered alert. - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot that shows the splitting by dimensions section of a new log alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot that shows the splitting by dimensions section of a new log search alert rule."::: Dimensions are columns from your query results that contain additional data. When you use dimensions, the alert rule groups the query results by the dimension values and evaluates the results of each group separately. If the condition is met, the rule fires an alert for that group. The alert payload includes the combination that triggered the alert. Alerts triggered by these alert rules contain a payload that uses the [common al 1. In the **Alert logic** section, select values for these fields: - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot that shows the Alert logic section of a new log alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot that shows the Alert logic section of a new log search alert rule."::: |Field |Description | ||| Alerts triggered by these alert rules contain a payload that uses the [common al > * The query uses the **adx** pattern > * The query calls a function that calls other tables - For sample log alert queries that query ARG or ADX, see [log alert query samples](./alerts-log-alert-query-samples.md) + For sample log search alert queries that query ARG or ADX, see [Log search alert query samples](./alerts-log-alert-query-samples.md) 1. 
(Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. Your application business policy determines this setting. - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot that shows the Advanced options section of a new log alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot that shows the Advanced options section of a new log search alert rule."::: Select values for these fields under **Number of violations to trigger the alert**: Alerts triggered by these alert rules contain a payload that uses the [common al 1. Define the **Alert rule details**. - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log search alert rule."::: 1. Select the **Severity**. 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select the **Region**.- 1. <a name="managed-id"></a>In the **Identity** section, select which identity is used by the log alert rule to send the log query. This identity is used for authentication when the alert rule executes the log query. + 1. <a name="managed-id"></a>In the **Identity** section, select which identity is used by the log search alert rule to send the log query. This identity is used for authentication when the alert rule executes the log query. Keep these things in mind when selecting an identity: - A managed identity is required if you're sending a query to Azure Data Explorer. Alerts triggered by these alert rules contain a payload that uses the [common al - Use a managed identity to help you avoid a case where the rule doesn't work as expected because the user that last edited the rule didn't have permissions for all the resources added to the scope of the rule. The identity associated with the rule must have these roles:- - If the query is accessing a Log Analytics workspace, the identity must be assigned a **Reader role** for all workspaces accessed by the query. If you're creating resource-centric log alerts, the alert rule may access multiple workspaces, and the identity must have a reader role on all of them. + - If the query is accessing a Log Analytics workspace, the identity must be assigned a **Reader role** for all workspaces accessed by the query. If you're creating resource-centric log search alerts, the alert rule may access multiple workspaces, and the identity must have a reader role on all of them. - If you are querying an ADX or ARG cluster, you must add **Reader role** for all data sources accessed by the query. For example, if the query is resource centric, it needs a reader role on those resources. - If the query is [accessing a remote Azure Data Explorer cluster](../logs/azure-monitor-data-explorer-proxy.md), the identity must be assigned: - **Reader role** for all data sources accessed by the query. 
For example, if the query is calling a remote Azure Data Explorer cluster using the adx() function, it needs a reader role on that ADX cluster. Alerts triggered by these alert rules contain a payload that uses the [common al |Field |Description | ||| |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|- |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met for a specific time range. The time range differs based on the frequency of the alert:<br>**1 minute**: The alert condition isn't met for 10 minutes.<br>**5-15 minutes**: The alert condition isn't met for three frequency periods.<br>**15 minutes - 11 hours**: The alert condition isn't met for two frequency periods.<br>**11 to 12 hours**: The alert condition isn't met for one frequency period. <br><br>Note that stateful log alerts have these limitations:<br> - they can trigger up to 300 alerts per evaluation.<br> - you can have a maximum of 5000 alerts with the `fired` alert condition.| + |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met for a specific time range. The time range differs based on the frequency of the alert:<br>**1 minute**: The alert condition isn't met for 10 minutes.<br>**5-15 minutes**: The alert condition isn't met for three frequency periods.<br>**15 minutes - 11 hours**: The alert condition isn't met for two frequency periods.<br>**11 to 12 hours**: The alert condition isn't met for one frequency period. <br><br>Note that stateful log search alerts have these limitations:<br> - they can trigger up to 300 alerts per evaluation.<br> - you can have a maximum of 5000 alerts with the `fired` alert condition.| |Mute actions |Select to set a period of time to wait before alert actions are triggered again. If you select this checkbox, the **Mute actions for** field appears to select the amount of time to wait after an alert is fired before triggering actions again.| |Check workspace linked storage|Select if logs workspace linked storage for alerts is configured. If no linked storage is configured, the rule isn't created.| Alerts triggered by these alert rules contain a payload that uses the [common al ## Next steps-- [Log alert query samples](./alerts-log-alert-query-samples.md) +- [Log search alert query samples](./alerts-log-alert-query-samples.md) - [View and manage your alert instances](alerts-manage-alert-instances.md) |
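As a small illustration of splitting by dimensions, a summarized query such as the following sketch (the table and column names are assumptions about the data you collect) returns one row per computer per time bin, so `Computer` can be selected as a dimension and the rule evaluates each machine separately.

```kusto
Event
| where EventLevelName == "Error"
| summarize ErrorCount = count() by Computer, bin(TimeGenerated, 5m)
```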
azure-monitor | Alerts Create Rule Cli Powershell Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-rule-cli-powershell-arm.md | You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit 1. In the [portal](https://portal.azure.com/), select **Cloud Shell**. At the prompt, use these. - To create a metric alert rule, use the [az monitor metrics alert create](/cli/azure/monitor/metrics/alert) command.- - To create a log alert rule, use the [az monitor scheduled-query create](/cli/azure/monitor/scheduled-query) command. + - To create a log search alert rule, use the [az monitor scheduled-query create](/cli/azure/monitor/scheduled-query) command. - To create an activity log alert rule, use the [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert) command. For example, to create a metric alert rule that monitors if average Percentage CPU on a VM is greater than 90: You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit - To create a metric alert rule using PowerShell, use the [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2) cmdlet. > [!NOTE] > When you create a metric alert on a single resource, the syntax uses the `TargetResourceId`. When you create a metric alert on multiple resources, the syntax contains the `TargetResourceScope`, `TargetResourceType`, and `TargetResourceRegion`.-- To create a log alert rule using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet.+- To create a log search alert rule using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet. - To create an activity log alert rule using PowerShell, use the [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert) cmdlet. ## Create a new alert rule using an ARM template You can use an [Azure Resource Manager template (ARM template)](../../azure-reso > - We recommend that you create the metric alert using the same resource group as your target resource. > - Metric alerts for an Azure Log Analytics workspace resource type (`Microsoft.OperationalInsights/workspaces`) are configured differently than other metric alerts. For more information, see [Resource Template for Metric Alerts for Logs](alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs). > - If you are creating a metric alert for a single resource, the template uses the `ResourceId` of the target resource. If you are creating a metric alert for multiple resources, the template uses the `scope`, `TargetResourceType`, and `TargetResourceRegion` for the target resources.- - For log alerts: `Microsoft.Insights/scheduledQueryRules` + - For log search alerts: `Microsoft.Insights/scheduledQueryRules` - For activity log, service health, and resource health alerts: `microsoft.Insights/activityLogAlerts` 1. Copy one of the templates from these sample ARM templates. 
- For metric alerts: [Resource Manager template samples for metric alert rules](resource-manager-alerts-metric.md)- - For log alerts: [Resource Manager template samples for log alert rules](resource-manager-alerts-log.md) + - For log search alerts: [Resource Manager template samples for log search alert rules](resource-manager-alerts-log.md) - For activity log alerts: [Resource Manager template samples for activity log alert rules](resource-manager-alerts-activity-log.md) - For service health alerts: [Resource Manager template samples for service health alert rules](resource-manager-alerts-service-health.md) - For resource health alerts: [Resource Manager template samples for resource health alert rules](resource-manager-alerts-resource-health.md) |
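As a concrete illustration of the CLI path described above, the following sketch creates the referenced metric alert rule for average Percentage CPU greater than 90 on a virtual machine. The resource names and IDs are placeholders.

```azurecli
# Placeholder names; set --scopes to the resource ID of the VM to monitor.
az monitor metrics alert create \
  --name "HighCpuOnVm" \
  --resource-group MyResourceGroup \
  --scopes "/subscriptions/<subscriptionId>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/MyVm" \
  --condition "avg Percentage CPU > 90" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --severity 3 \
  --description "Average Percentage CPU greater than 90"
```

For a log search alert rule, `az monitor scheduled-query create` takes a Kusto query through `--condition-query` instead of a metric condition.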
azure-monitor | Alerts Log Alert Query Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-alert-query-samples.md | Title: Samples of Azure Monitor log alert rule queries -description: See examples of Azure monitor log alert rule queries. + Title: Samples of Azure Monitor log search alert rule queries +description: See examples of Azure monitor log search alert rule queries. Last updated 01/04/2024 -# Sample log alert queries that include ADX and ARG +# Sample log search alert queries that include ADX and ARG -A log alert rule monitors a resource by using a Log Analytics query to evaluate logs at a set frequency. You can include data from Azure Data Explorer and Azure Resource Graph in your log alert rule queries. +A log search alert rule monitors a resource by using a Log Analytics query to evaluate logs at a set frequency. You can include data from Azure Data Explorer and Azure Resource Graph in your log search alert rule queries. -This article provides examples of log alert rule queries that use Azure Data Explorer and Azure Resource Graph. For more information about creating a log alert rule, see [Create a log alert rule](./alerts-create-log-alert-rule.md). +This article provides examples of log search alert rule queries that use Azure Data Explorer and Azure Resource Graph. For more information about creating a log search alert rule, see [Create a log search alert rule](./alerts-create-log-alert-rule.md). ## Queries that check virtual machine health This query finds virtual machines marked as critical that had a heartbeat more t ``` ## Next steps-- [Learn more about creating a log alert rule](./alerts-create-log-alert-rule.md)-- [Learn how to optimize log alert queries](./alerts-log-query.md)+- [Learn more about creating a log search alert rule](./alerts-create-log-alert-rule.md) +- [Learn how to optimize log search alert queries](./alerts-log-query.md) |
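As a rough sketch of how Azure Resource Graph data can feed a log search alert rule, the following example (hypothetical names, Bash quoting) fires when a virtual machine returned by `arg("")` has reported no heartbeat in the last five minutes. It assumes the `scheduled-query` CLI extension is installed and that the identity evaluating the rule can read the queried resources.

```azurecli
# Hypothetical workspace and rule names. The arg("") pattern pulls Azure Resource Graph
# data into the Log Analytics query; query validation is skipped because the query spans services.
query='arg("").Resources
| where type == "microsoft.compute/virtualmachines"
| project VmResourceId = tolower(id)
| join kind=leftanti (
    Heartbeat
    | where TimeGenerated > ago(5m)
    | project VmResourceId = tolower(_ResourceId)
  ) on VmResourceId'

az monitor scheduled-query create \
  --name "VmsWithoutRecentHeartbeat" \
  --resource-group MyResourceGroup \
  --scopes "/subscriptions/<subscriptionId>/resourceGroups/MyResourceGroup/providers/Microsoft.OperationalInsights/workspaces/MyWorkspace" \
  --condition "count 'SilentVms' > 0" \
  --condition-query SilentVms="$query" \
  --skip-query-validation true \
  --evaluation-frequency 15m \
  --window-size 15m \
  --severity 1
```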
azure-monitor | Alerts Log Api Switch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md | Title: Upgrade legacy rules management to the current Azure Monitor Log Alerts API -description: Learn how to switch to the log alerts management to ScheduledQueryRules API + Title: Upgrade legacy rules management to the current Azure Monitor Scheduled Query Rules API +description: Learn how to switch log search alert management to ScheduledQueryRules API. Last updated 07/09/2023 -# Upgrade to the Log Alerts API from the legacy Log Analytics alerts API +# Upgrade to the Scheduled Query Rules API from the legacy Log Analytics Alert API > [!IMPORTANT]-> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), the Log Analytics alert API will be retired on October 1, 2025. You must transition to using the Scheduled Query Rules API for log alerts by that date. -> Log Analytics workspaces created after June 1, 2019 use the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) to manage alert rules. [Switch to the current API](./alerts-log-api-switch.md) in older workspaces to take advantage of Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits). +> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), the Log Analytics Alert API will be retired on October 1, 2025. You must transition to using the Scheduled Query Rules API for log search alerts by that date. +> Log Analytics workspaces created after June 1, 2019 use the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) to manage log search alert rules. [Switch to the current API](./alerts-log-api-switch.md) in older workspaces to take advantage of Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits). > Once you migrate rules to the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules), you cannot revert back to the older [legacy Log Analytics Alert API](/azure/azure-monitor/alerts/api-alerts). -In the past, users used the [legacy Log Analytics Alert API](/azure/azure-monitor/alerts/api-alerts) to manage log alert rules. Currently workspaces use [ScheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) for new rules. This article describes the benefits and the process of switching legacy log alert rules management from the legacy API to the current API. +In the past, users used the [legacy Log Analytics Alert API](/azure/azure-monitor/alerts/api-alerts) to manage log search alert rules. Currently workspaces use the [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) for new rules. This article describes the benefits and the process of switching legacy log search alert rules management from the legacy API to the current API. ## Benefits -- Manage all log rules in one API.+- Manage all log search alert rules in one API. - Single template for creation of alert rules (previously needed three separate templates). - Single API for all Azure resources log alerting.-- Support for stateful (preview) and 1-minute log alerts.+- Support for stateful (preview) and 1-minute log search alerts. 
- [PowerShell cmdlets](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell) and [Azure CLI](/azure/azure-monitor/alerts/alerts-log#manage-log-alerts-using-cli) support for switched rules. - Alignment of severities with all other alert types and newer rules. - Ability to create a [cross workspace log alert](/azure/azure-monitor/logs/cross-workspace-query) that spans several external resources like Log Analytics workspaces or Application Insights resources for switched rules. - Users can specify dimensions to split the alerts for switched rules.-- Log alerts have extended period of up to two days of data (previously limited to one day) for switched rules.+- Log search alerts have an extended period of up to two days of data (previously limited to one day) for switched rules. ## Impact - All switched rules must be created/edited with the current API. See [sample use via Azure Resource Template](/azure/azure-monitor/alerts/alerts-log-create-templates) and [sample use via PowerShell](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell).-- As rules become Azure Resource Manager tracked resources in the current API and must be unique, rules resource ID will change to this structure: `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Display names of the alert rule will remain unchanged. +- As rules become Azure Resource Manager tracked resources in the current API and must be unique, the resource IDs for the rules change to this structure: `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Display names for the alert rules remain unchanged. + ## Process View workspaces to upgrade using this [Azure Resource Graph Explorer query](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/resources%0A%7C%20where%20type%20%3D~%20%22microsoft.insights%2Fscheduledqueryrules%22%0A%7C%20where%20properties.isLegacyLogAnalyticsRule%20%3D%3D%20true%0A%7C%20distinct%20tolower%28properties.scopes%5B0%5D%29). Open the [link](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/resources%0A%7C%20where%20type%20%3D~%20%22microsoft.insights%2Fscheduledqueryrules%22%0A%7C%20where%20properties.isLegacyLogAnalyticsRule%20%3D%3D%20true%0A%7C%20distinct%20tolower%28properties.scopes%5B0%5D%29), select all available subscriptions, and run the query. 
$switchJSON = '{"scheduledQueryRulesEnabled": true}' armclient PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview $switchJSON ``` -You can also use [Azure CLI](/cli/azure/reference-index#az-rest) tool: +You can also use the [Azure CLI](/cli/azure/reference-index#az-rest) tool: ```bash az rest --method put --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview --body "{\"scheduledQueryRulesEnabled\" : true}" You can also use this API call to check the switch status: GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview ``` -You can also use [ARMClient](https://github.com/projectkudu/ARMClient) tool: +You can also use the [ARMClient](https://github.com/projectkudu/ARMClient) tool: ```powershell armclient GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview ``` -You can also use [Azure CLI](/cli/azure/reference-index#az-rest) tool: +You can also use the [Azure CLI](/cli/azure/reference-index#az-rest) tool: ```bash az rest --method get --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview If the Log Analytics workspace wasn't switched, the response is: ## Next steps -- Learn about the [Azure Monitor - Log Alerts](/azure/azure-monitor/alerts/alerts-types).-- Learn how to [manage your log alerts using the API](/azure/azure-monitor/alerts/alerts-log-create-templates).-- Learn how to [manage log alerts using PowerShell](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell).+- Learn about the [Azure Monitor log search alerts](/azure/azure-monitor/alerts/alerts-types). +- Learn how to [manage your log search alerts using the API](/azure/azure-monitor/alerts/alerts-log-create-templates). +- Learn how to [manage your log search alerts using PowerShell](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell). - Learn more about the [Azure Alerts experience](/azure/azure-monitor/alerts/alerts-overview). |
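The Azure Resource Graph query embedded in the portal link above can also be run from a shell to list the workspaces that still contain legacy rules. This sketch assumes the Azure CLI `resource-graph` extension is installed.

```azurecli
# Requires: az extension add --name resource-graph
az graph query -q "resources
| where type =~ 'microsoft.insights/scheduledqueryrules'
| where properties.isLegacyLogAnalyticsRule == true
| distinct tolower(properties.scopes[0])"
```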
azure-monitor | Alerts Log Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-query.md | Title: Optimize log alert queries | Microsoft Docs + Title: Optimize log search alert queries | Microsoft Docs description: This article gives recommendations for writing efficient alert queries. Last updated 5/30/2023 -# Optimize log alert queries +# Optimize log search alert queries -This article describes how to write and convert [Log alerts](alerts-types.md#log-alerts) to achieve optimal performance. Optimized queries reduce latency and load of alerts, which run frequently. +This article describes how to write and convert [log search alerts](alerts-types.md#log-alerts) to achieve optimal performance. Optimized queries reduce latency and load of alerts, which run frequently. ## Start writing an alert log query Using `limit` and `take` in queries can increase latency and load of alerts beca [Log queries in Azure Monitor](../logs/log-query-overview.md) start with either a table, [`search`](/azure/kusto/query/searchoperator), or [`union`](/azure/kusto/query/unionoperator) operator. -Queries for log alert rules should always start with a table to define a clear scope, which improves query performance and the relevance of the results. Queries in alert rules run frequently. Using `search` and `union` can result in excessive overhead that adds latency to the alert because it requires scanning across multiple tables. These operators also reduce the ability of the alerting service to optimize the query. +Queries for log search alert rules should always start with a table to define a clear scope, which improves query performance and the relevance of the results. Queries in alert rules run frequently. Using `search` and `union` can result in excessive overhead that adds latency to the alert because it requires scanning across multiple tables. These operators also reduce the ability of the alerting service to optimize the query. -We don't support creating or modifying log alert rules that use `search` or `union` operators, except for cross-resource queries. +We don't support creating or modifying log search alert rules that use `search` or `union` operators, except for cross-resource queries. For example, the following alerting query is scoped to the _SecurityEvent_ table and searches for a specific event ID. It's the only table that the query must process. SecurityEvent | where EventID == 4624 ``` -Log alert rules using [cross-resource queries](../logs/cross-workspace-query.md) aren't affected by this change because cross-resource queries use a type of `union`, which limits the query scope to specific resources. The following example would be a valid log alert query: +Log search alert rules using [cross-resource queries](../logs/cross-workspace-query.md) aren't affected by this change because cross-resource queries use a type of `union`, which limits the query scope to specific resources. The following example would be a valid log search alert query: ```Kusto union workspace('00000000-0000-0000-0000-000000000003').Perf ``` >[!NOTE]-> [Cross-resource queries](../logs/cross-workspace-query.md) are supported in the new [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). If you still use the [legacy Log Analytics Alert API](./api-alerts.md) for creating log alerts, see [Upgrade legacy rules management to the current Azure Monitor Log Alerts API](./alerts-log-api-switch.md) to learn about switching. 
+> [Cross-resource queries](../logs/cross-workspace-query.md) are supported in the new [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). If you still use the [legacy Log Analytics Alert API](./api-alerts.md) for creating log search alerts, see [Upgrade legacy rules management to the current Azure Monitor Scheduled Query Rules API](./alerts-log-api-switch.md) to learn about switching. ## Examples The following examples include log queries that use `search` and `union`. They p ### Example 1 -You want to create a log alert rule by using the following query that retrieves performance information using `search`: +You want to create a log search alert rule by using the following query that retrieves performance information using `search`: ``` Kusto search * search * ### Example 2 -You want to create a log alert rule by using the following query that retrieves performance information using `search`: +You want to create a log search alert rule by using the following query that retrieves performance information using `search`: ``` Kusto search ObjectName =="Memory" and CounterName=="% Committed Bytes In Use" search ObjectName =="Memory" and CounterName=="% Committed Bytes In Use" ### Example 3 -You want to create a log alert rule by using the following query that uses both `search` and `union` to retrieve performance information: +You want to create a log search alert rule by using the following query that uses both `search` and `union` to retrieve performance information: ``` Kusto search (ObjectName == "Processor" and CounterName == "% Idle Time" and InstanceName == "_Total") search (ObjectName == "Processor" and CounterName == "% Idle Time" and InstanceN ### Example 4 -You want to create a log alert rule by using the following query that joins the results of two `search` queries: +You want to create a log search alert rule by using the following query that joins the results of two `search` queries: ```Kusto search Type == 'SecurityEvent' and EventID == '4625' search Type == 'SecurityEvent' and EventID == '4625' ## Next steps -- Learn about [log alerts](alerts-log.md) in Azure Monitor.+- Learn about [log search alerts](alerts-log.md) in Azure Monitor. - Learn about [log queries](../logs/log-query-overview.md). |
azure-monitor | Alerts Log Webhook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-webhook.md | Title: Sample payloads for Azure Monitor log alerts using webhook actions -description: This article describes how to configure log alert rules with webhook actions and available customizations. + Title: Sample payloads for Azure Monitor log search alerts using webhook actions +description: This article describes how to configure log search alert rules with webhook actions and available customizations. Last updated 11/23/2023 -# Sample payloads for log alerts using webhook actions +# Sample payloads for log search alerts using webhook actions -You can use webhook actions in a log alert rule to invoke a single HTTP POST request. In this article, we describe the properties that are available when you [configure action groups to use webhooks](./action-groups.md). The service that's called must support webhooks and know how to use the payload it receives. +You can use webhook actions in a log search alert rule to invoke a single HTTP POST request. In this article, we describe the properties that are available when you [configure action groups to use webhooks](./action-groups.md). The service that's called must support webhooks and know how to use the payload it receives. We recommend that you use [common alert schema](../alerts/alerts-common-schema.md) for your webhook integrations. The common alert schema provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor. -For log alert rules that have a custom JSON payload defined, enabling the common alert schema reverts the payload schema to the one described in [Common alert schema](../alerts/alerts-common-schema.md#alert-context-fields-for-log-alerts). If you want to have a custom JSON payload defined, the webhook can't use the common alert schema. +For log search alert rules that have a custom JSON payload defined, enabling the common alert schema reverts the payload schema to the one described in [Common alert schema](../alerts/alerts-common-schema.md#alert-context-fields-for-log-search-alerts). If you want to have a custom JSON payload defined, the webhook can't use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert. A bigger alert doesn't include search results. When the search results aren't included, use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results via the Log Analytics API. The sample payloads include examples when the payload is standard and when it's custom. 
-## Log alert for all resources logs (from API version `2021-08-01`) +## Log search alert for all resources logs (from API version `2021-08-01`) -The following sample payload is for a standard webhook when it's used for log alerts based on resources logs: +The following sample payload is for a standard webhook when it's used for log search alerts based on resources logs: ```json { The following sample payload is for a standard webhook when it's used for log al } ``` -## Log alert for Log Analytics (up to API version `2018-04-16`) +## Log search alert for Log Analytics (up to API version `2018-04-16`) + The following sample payload is for a standard webhook action that's used for alerts based on Log Analytics: > [!NOTE] The following sample payload is for a standard webhook action that's used for al } ``` -## Log alert for Application Insights (up to API version `2018-04-16`) -The following sample payload is for a standard webhook when it's used for log alerts based on Application Insights resources: +## Log search alert for Application Insights (up to API version `2018-04-16`) ++The following sample payload is for a standard webhook when it's used for log search alerts based on Application Insights resources: ```json { The following sample payload is for a standard webhook when it's used for log al } ``` -## Log alert with a custom JSON payload (up to API version `2018-04-16`) +## Log search alert with a custom JSON payload (up to API version `2018-04-16`) > [!NOTE] > A custom JSON-based webhook isn't supported from API version `2021-08-01`. The following table lists default webhook action properties and their custom JSO | Parameter | Variable | Description | |: |: |: | | `AlertRuleName` |#alertrulename |Name of the alert rule. |-| `Severity` |#severity |Severity set for the fired log alert. | +| `Severity` |#severity |Severity set for the fired log search alert. | | `AlertThresholdOperator` |#thresholdoperator |Threshold operator for the alert rule. | | `AlertThresholdValue` |#thresholdvalue |Threshold value for the alert rule. | | `LinkToSearchResults` |#linktosearchresults |Link to the Analytics portal that returns the records from the query that created the alert. | The following table lists default webhook action properties and their custom JSO | `ResultCount` |#searchresultcount |Number of records in the search results. | | `Search Interval End time` |#searchintervalendtimeutc |End time for the query in UTC, with the format mm/dd/yyyy HH:mm:ss AM/PM. | | `Search Interval` |#searchinterval |Time window for the alert rule, with the format HH:mm:ss. |-| `Search Interval StartTime` |#searchintervalstarttimeutc |Start time for the query in UTC, with the format mm/dd/yyyy HH:mm:ss AM/PM. +| `Search Interval StartTime` |#searchintervalstarttimeutc |Start time for the query in UTC, with the format mm/dd/yyyy HH:mm:ss AM/PM. | | `SearchQuery` |#searchquery |Log search query used by the alert rule. | | `SearchResults` |"IncludeSearchResults": true|Records returned by the query as a JSON table, limited to the first 1,000 records. "IncludeSearchResults": true is added in a custom JSON webhook definition as a top-level property. | | `Dimensions` |"IncludeDimensions": true|Dimensions value combinations that triggered that alert as a JSON section. "IncludeDimensions": true is added in a custom JSON webhook definition as a top-level property. 
|-| `Alert Type`| #alerttype | The type of log alert rule configured as [Metric measurement or Number of results](./alerts-unified-log.md#measure).| +| `Alert Type`| #alerttype | The type of log search alert rule configured as [Metric measurement or Number of results](./alerts-types.md#log-alerts).| | `WorkspaceID` |#workspaceid |ID of your Log Analytics workspace. | | `Application ID` |#applicationid |ID of your Application Insights app. | | `Subscription ID` |#subscriptionid |ID of your Azure subscription used. | For example, to create a custom payload that includes only the alert name and th } ``` -The following sample payload is for a custom webhook action for any log alert: +The following sample payload is for a custom webhook action for any log search alert: ```json { The following sample payload is for a custom webhook action for any log alert: ``` ## Next steps+ - Learn about [Azure Monitor alerts](./alerts-overview.md). - Create and manage [action groups in Azure](./action-groups.md). - Learn more about [log queries](../logs/log-query-overview.md). |
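Because the guidance above recommends the common alert schema over custom JSON payloads, the webhook receiver in the action group needs to opt in to that schema. The following sketch uses hypothetical names; the trailing `usecommonalertschema` token on the webhook action is what enables the common schema for that receiver.

```azurecli
# Hypothetical action group and endpoint; the last token opts the webhook receiver
# into the common alert schema.
az monitor action-group create \
  --name "OpsWebhookGroup" \
  --short-name "OpsHooks" \
  --resource-group MyResourceGroup \
  --action webhook OpsHook "https://example.com/alert-hook" usecommonalertschema
```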
azure-monitor | Alerts Manage Alert Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md | Manage your alert rules in the Azure portal, or using the CLI or PowerShell. 1. To edit an alert rule, select **Edit**, and then edit any of the fields in the following sections. You can't edit the **Alert Rule Name**, or the **Signal type** of an existing alert rule. - **Scope**. You can edit the scope for all alert rules **other than**:- - Log alert rules + - Log search alert rules - Metric alert rules that monitor a custom metric - Smart detection alert rules- - **Condition**. Learn more about conditions for [metric alert rules](./alerts-create-new-alert-rule.md?tabs=metric#tabpanel_1_metric), [log alert rules](./alerts-create-new-alert-rule.md?tabs=log#tabpanel_1_log), and [activity log alert rules](./alerts-create-new-alert-rule.md?tabs=activity-log#tabpanel_1_activity-log) + - **Condition**. Learn more about conditions for [metric alert rules](./alerts-create-new-alert-rule.md?tabs=metric#tabpanel_1_metric), [log search alert rules](./alerts-create-new-alert-rule.md?tabs=log#tabpanel_1_log), and [activity log alert rules](./alerts-create-new-alert-rule.md?tabs=activity-log#tabpanel_1_activity-log) - **Actions** - **Alert rule details** 1. Select **Save** on the top command bar. > [!NOTE]-> This section describes how to manage alert rules created in the latest UI or using an API version later than `2018-04-16`. See [View and manage log alert rules created in previous versions](alerts-manage-alerts-previous-version.md) for information about how to view and manage log alert rules created in the previous UI. +> This section describes how to manage alert rules created in the latest UI or using an API version later than `2018-04-16`. See [View and manage log search alert rules created in previous versions](alerts-manage-alerts-previous-version.md) for information about how to view and manage log search alert rules created in the previous UI. ## Enable recommended alert rules in the Azure portal Metric alert rules have these dedicated PowerShell cmdlets: - [Update](/rest/api/monitor/metricalerts/update): Update a metric alert rule. - [Delete](/rest/api/monitor/metricalerts/delete): Delete a metric alert rule. -## Manage log alert rules using the CLI +## Manage log search alert rules using the CLI -This section describes how to manage log alerts using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The following examples use [Azure Cloud Shell](../../cloud-shell/overview.md). +This section describes how to manage log search alerts using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The following examples use [Azure Cloud Shell](../../cloud-shell/overview.md). > [!NOTE] > Azure CLI support is only available for the scheduledQueryRules API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described below. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you will need to switch to use CLI. [Learn more about switching](./alerts-log-api-switch.md). - 1. In the [portal](https://portal.azure.com/), select **Cloud Shell**. 1. 
Use these options of the `az monitor scheduled-query alert` CLI command in this table:- |What you want to do|CLI command | ||| This section describes how to manage log alerts using the cross-platform [Azure |Delete a log alert rule|`az monitor scheduled-query delete -g {ResourceGroup} -n {AlertRuleName}`| |Learn more about the command|`az monitor scheduled-query --help`| -### Manage log alert rules using the Azure Resource Manager CLI with [templates](./alerts-log-create-templates.md) +### Manage log search alert rules using the Azure Resource Manager CLI with [templates](./alerts-log-create-templates.md) ```azurecli az login az deployment group create \ A 201 response is returned on successful creation. 200 is returned on successful updates. -## Manage log alert rules with PowerShell +## Manage log search alert rules with PowerShell ++Log search alert rules have this dedicated PowerShell cmdlet: +- [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule): Creates a new log search alert rule or updates an existing log search alert rule. -Log alert rules have this dedicated PowerShell cmdlet: -- [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule): Creates a new log alert rule or updates an existing log alert rule. ## Manage activity log alert rules using PowerShell Activity log alerts have these dedicated PowerShell cmdlets: |
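For existing log search alert rules, a few common management calls with the `scheduled-query` extension look roughly like this; the rule and resource group names are placeholders.

```azurecli
# Hypothetical rule and resource group names.
az monitor scheduled-query list --resource-group MyResourceGroup --output table
az monitor scheduled-query show --resource-group MyResourceGroup --name MyLogSearchAlert
az monitor scheduled-query update --resource-group MyResourceGroup --name MyLogSearchAlert --disabled true
az monitor scheduled-query delete --resource-group MyResourceGroup --name MyLogSearchAlert
```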
azure-monitor | Alerts Manage Alerts Previous Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md | Title: View and manage log alert rules created in previous versions| Microsoft Docs -description: Use the Azure Monitor portal to manage log alert rules created in earlier versions. + Title: View and manage log search alert rules created in previous versions| Microsoft Docs +description: Use the Azure Monitor portal to manage log search alert rules created in earlier versions. Last updated 06/20/2023-This article describes the process of managing alert rules created in the previous UI or by using API version `2018-04-16` or earlier. Alert rules created in the latest UI are viewed and managed in the new UI, as described in [Create, view, and manage log alerts by using Azure Monitor](alerts-log.md). +This article describes the process of managing alert rules created in the previous UI or by using API version `2018-04-16` or earlier. Alert rules created in the latest UI are viewed and managed in the new UI, as described in [Create, view, and manage log search alerts by using Azure Monitor](alerts-log.md). -## Changes to the log alert rule creation experience +## Changes to the log search alert rule creation experience The current alert rule wizard is different from the earlier experience: The current alert rule wizard is different from the earlier experience: 1. Edit the alert rule conditions by using these sections: - **Search query**: In this section, you can modify your query.- - **Alert logic**: Log alerts can be based on two types of [measures](./alerts-unified-log.md#measure): + - **Alert logic**: Log search alerts can be based on two types of [measures](./alerts-types.md#log-alerts): 1. **Number of results**: Count of records returned by the query. 1. **Metric measurement**: **Aggregate value** is calculated by using `summarize` grouped by the expressions chosen and the [bin()](/azure/data-explorer/kusto/query/binfunction) selection. For example: ```Kusto The current alert rule wizard is different from the earlier experience: or SeverityLevel== "err" // SeverityLevel is used in Syslog (Linux) records | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m) ```- For metric measurements alert logic, you can specify how to [split the alerts by dimensions](./alerts-unified-log.md#split-by-alert-dimensions) by using the **Aggregate on** option. The row grouping expression must be unique and sorted. + For metric measurements alert logic, you can specify how to [split the alerts by dimensions](./alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions) by using the **Aggregate on** option. The row grouping expression must be unique and sorted. The [bin()](/azure/data-explorer/kusto/query/binfunction) function can result in uneven time intervals, so the alert service automatically converts the [bin()](/azure/data-explorer/kusto/query/binfunction) function to a [binat()](/azure/data-explorer/kusto/query/binatfunction) function with appropriate time at runtime to ensure results with a fixed point. 
The current alert rule wizard is different from the earlier experience: :::image type="content" source="media/alerts-log/aggregate-on.png" lightbox="media/alerts-log/aggregate-on.png" alt-text="Screenshot that shows Aggregate on."::: - - **Period**: Choose the time range over which to assess the specified condition by using the [Period](./alerts-unified-log.md#query-time-range) option. + - **Period**: Choose the time range over which to assess the specified condition by using the [Period](./alerts-types.md) option. 1. When you're finished editing the conditions, select **Done**.-1. Use the preview data to set the [Operator, Threshold value](./alerts-unified-log.md#threshold-and-operator), and [Frequency](./alerts-unified-log.md#frequency). -1. Set the [number of violations to trigger an alert](./alerts-unified-log.md#number-of-violations-to-trigger-alert) by using **Total** or **Consecutive breaches**. +1. Use the preview data to set the [Operator, Threshold value](./alerts-types.md), and [Frequency](./alerts-types.md). +1. Set the [number of violations to trigger an alert](./alerts-types.md) by using **Total** or **Consecutive breaches**. 1. Select **Done**. 1. You can edit the rule **Description** and **Severity**. These details are used in all alert actions. You can also choose to not activate the alert rule on creation by selecting **Enable rule upon creation**.-1. Use the [Suppress Alerts](./alerts-unified-log.md#state-and-resolving-alerts) option if you want to suppress rule actions for a specified time after an alert is fired. The rule will still run and create alerts, but actions won't be triggered to prevent noise. The **Mute actions** value must be greater than the frequency of the alert to be effective. +1. Use the [Suppress Alerts](./alerts-processing-rules.md) option if you want to suppress rule actions for a specified time after an alert is fired. The rule will still run and create alerts, but actions won't be triggered to prevent noise. The **Mute actions** value must be greater than the frequency of the alert to be effective. <!-- convertborder later --> :::image type="content" source="media/alerts-log/AlertsPreviewSuppress.png" lightbox="media/alerts-log/AlertsPreviewSuppress.png" alt-text="Screenshot that shows the Alert Details pane." border="false"::: 1. To make alerts stateful, select **Automatically resolve alerts (preview)**. 1. Specify if the alert rule should trigger one or more [action groups](./action-groups.md) when the alert condition is met. For limits on the actions that can be performed, see [Azure Monitor service limits](../../azure-monitor/service-limits.md).-1. (Optional) Customize actions in log alert rules: +1. (Optional) Customize actions in log search alert rules: - **Custom email subject**: Overrides the *email subject* of email actions. You can't modify the body of the mail and this field *isn't for email addresses*.- - **Include custom Json payload for webhook**: Overrides the webhook JSON used by action groups, assuming that the action group contains a webhook action. Learn more about [webhook actions for log alerts](./alerts-log-webhook.md). + - **Include custom Json payload for webhook**: Overrides the webhook JSON used by action groups, assuming that the action group contains a webhook action. Learn more about [webhook actions for log search alerts](./alerts-log-webhook.md). 
<!-- convertborder later -->- :::image type="content" source="media/alerts-log/AlertsPreviewOverrideLog.png" lightbox="media/alerts-log/AlertsPreviewOverrideLog.png" alt-text="Screenshot that shows Action overrides for log alerts." border="false"::: + :::image type="content" source="media/alerts-log/AlertsPreviewOverrideLog.png" lightbox="media/alerts-log/AlertsPreviewOverrideLog.png" alt-text="Screenshot that shows Action overrides for log search alerts." border="false"::: 1. After you've finished editing all the alert rule options, select **Save**. -## Manage log alerts using PowerShell +## Manage log search alerts using PowerShell [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)] Use the following PowerShell cmdlets to manage rules with the [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules): -- [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule): PowerShell cmdlet to create a new log alert rule.-- [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule): PowerShell cmdlet to update an existing log alert rule.-- [New-AzScheduledQueryRuleSource](/powershell/module/az.monitor/new-azscheduledqueryrulesource): PowerShell cmdlet to create or update the object that specifies source parameters for a log alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.-- [New-AzScheduledQueryRuleSchedule](/powershell/module/az.monitor/new-azscheduledqueryruleschedule): PowerShell cmdlet to create or update the object that specifies schedule parameters for a log alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.-- [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction): PowerShell cmdlet to create or update the object that specifies action parameters for a log alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.-- [New-AzScheduledQueryRuleAznsActionGroup](/powershell/module/az.monitor/new-azscheduledqueryruleaznsactiongroup): PowerShell cmdlet to create or update the object that specifies action group parameters for a log alert. Used as input by the [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.-- [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition): PowerShell cmdlet to create or update the object that specifies trigger condition parameters for a log alert. Used as input by the [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.-- [New-AzScheduledQueryRuleLogMetricTrigger](/powershell/module/az.monitor/new-azscheduledqueryrulelogmetrictrigger): PowerShell cmdlet to create or update the object that specifies metric trigger condition parameters for a metric measurement log alert. 
Used as input by the [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition) cmdlet.-- [Get-AzScheduledQueryRule](/powershell/module/az.monitor/get-azscheduledqueryrule): PowerShell cmdlet to list existing log alert rules or a specific log alert rule.-- [Update-AzScheduledQueryRule](/powershell/module/az.monitor/update-azscheduledqueryrule): PowerShell cmdlet to enable or disable a log alert rule.-- [Remove-AzScheduledQueryRule](/powershell/module/az.monitor/remove-azscheduledqueryrule): PowerShell cmdlet to delete an existing log alert rule.+- [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule): PowerShell cmdlet to create a new log search alert rule. +- [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule): PowerShell cmdlet to update an existing log search alert rule. +- [New-AzScheduledQueryRuleSource](/powershell/module/az.monitor/new-azscheduledqueryrulesource): PowerShell cmdlet to create or update the object that specifies source parameters for a log search alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets. +- [New-AzScheduledQueryRuleSchedule](/powershell/module/az.monitor/new-azscheduledqueryruleschedule): PowerShell cmdlet to create or update the object that specifies schedule parameters for a log search alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets. +- [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction): PowerShell cmdlet to create or update the object that specifies action parameters for a log search alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets. +- [New-AzScheduledQueryRuleAznsActionGroup](/powershell/module/az.monitor/new-azscheduledqueryruleaznsactiongroup): PowerShell cmdlet to create or update the object that specifies action group parameters for a log search alert. Used as input by the [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet. +- [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition): PowerShell cmdlet to create or update the object that specifies trigger condition parameters for a log search alert. Used as input by the [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet. +- [New-AzScheduledQueryRuleLogMetricTrigger](/powershell/module/az.monitor/new-azscheduledqueryrulelogmetrictrigger): PowerShell cmdlet to create or update the object that specifies metric trigger condition parameters for a metric measurement log search alert. Used as input by the [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition) cmdlet. +- [Get-AzScheduledQueryRule](/powershell/module/az.monitor/get-azscheduledqueryrule): PowerShell cmdlet to list existing log search alert rules or a specific log search alert rule. 
+- [Update-AzScheduledQueryRule](/powershell/module/az.monitor/update-azscheduledqueryrule): PowerShell cmdlet to enable or disable a log search alert rule. +- [Remove-AzScheduledQueryRule](/powershell/module/az.monitor/remove-azscheduledqueryrule): PowerShell cmdlet to delete an existing log search alert rule. > [!NOTE]-> The `ScheduledQueryRules` PowerShell cmdlets can only manage rules created in [this version of the Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). Log alert rules created by using the legacy [Log Analytics Alert API](./api-alerts.md) can only be managed by using PowerShell after you [switch to the Scheduled Query Rules API](./alerts-log-api-switch.md). +> The `ScheduledQueryRules` PowerShell cmdlets can only manage rules created in [this version of the Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). Log search alert rules created by using the legacy [Log Analytics Alert API](./api-alerts.md) can only be managed by using PowerShell after you [switch to the Scheduled Query Rules API](./alerts-log-api-switch.md). -Example steps for creating a log alert rule by using PowerShell: +Example steps for creating a log search alert rule by using PowerShell: ```powershell $source = New-AzScheduledQueryRuleSource -Query 'Heartbeat | summarize AggregatedValue = count() by bin(TimeGenerated, 5m), _ResourceId' -DataSourceId "/subscriptions/a123d7efg-123c-1234-5678-a12bc3defgh4/resourceGroups/contosoRG/providers/microsoft.OperationalInsights/workspaces/servicews" $alertingAction = New-AzScheduledQueryRuleAlertingAction -AznsAction $aznsAction New-AzScheduledQueryRule -ResourceGroupName "contosoRG" -Location "Region Name for your Application Insights App or Log Analytics Workspace" -Action $alertingAction -Enabled $true -Description "Alert description" -Schedule $schedule -Source $source -Name "Alert Name" ``` -Example steps for creating a log alert rule by using PowerShell with cross-resource queries: +Example steps for creating a log search alert rule by using PowerShell with cross-resource queries: ```powershell $authorized = @ ("/subscriptions/a123d7efg-123c-1234-5678-a12bc3defgh4/resourceGroups/contosoRG/providers/microsoft.OperationalInsights/workspaces/servicewsCrossExample", "/subscriptions/a123d7efg-123c-1234-5678-a12bc3defgh4/resourceGroups/contosoRG/providers/microsoft.insights/components/serviceAppInsights") $alertingAction = New-AzScheduledQueryRuleAlertingAction -AznsAction $aznsAction New-AzScheduledQueryRule -ResourceGroupName "contosoRG" -Location "Region Name for your Application Insights App or Log Analytics Workspace" -Action $alertingAction -Enabled $true -Description "Alert description" -Schedule $schedule -Source $source -Name "Alert Name" ``` -You can also create the log alert by using [a template and parameters](./alerts-log-create-templates.md) files using PowerShell: +You can also create the log search alert by using [a template and parameters](./alerts-log-create-templates.md) files using PowerShell: ```powershell Connect-AzAccount New-AzResourceGroupDeployment -Name AlertDeployment -ResourceGroupName ResourceG ## Next steps -* Learn about [log alerts](./alerts-unified-log.md). -* Create log alerts by using [Azure Resource Manager templates](./alerts-log-create-templates.md). -* Understand [webhook actions for log alerts](./alerts-log-webhook.md). +* Learn about [log search alerts](./alerts-types.md#log-alerts). 
+* Create log search alerts by using [Azure Resource Manager templates](./alerts-log-create-templates.md). +* Understand [webhook actions for log search alerts](./alerts-log-webhook.md). * Learn more about [log queries](../logs/log-query-overview.md). |
azure-monitor | Alerts Metric Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-logs.md | The supported Log Analytics logs are the following: - [Update management](../../automation/update-management/overview.md) records - [Event data](./../agents/data-sources-windows-events.md) logs -There are many benefits for using **Metric Alerts for Logs** over query based [Log Alerts](./alerts-log.md) in Azure; some of them are listed below: +There are many benefits for using **Metric Alerts for Logs** over query based [log search alerts](./alerts-log.md) in Azure; some of them are listed below: - Metric Alerts offer near-real time monitoring capability and Metric Alerts for Logs forks data from the log source to ensure the same.-- Metric Alerts are stateful - only notifying once when alert is fired and once when alert is resolved; as opposed to Log alerts, which are stateless and keep firing at every interval if the alert condition is met.+- Metric Alerts are stateful - only notifying once when alert is fired and once when alert is resolved; as opposed to log search alerts, which are stateless and keep firing at every interval if the alert condition is met. - Metric Alerts for Logs provide multiple dimensions, which makes filtering to specific values like Computer or OS type simpler, without the need to define a complex query in Log Analytics. > [!NOTE] az deployment group create --resource-group myRG --template-file metricfromLogsA ## Next steps - Learn more about the [metric alerts](../alerts/alerts-metric.md).-- Learn about [log alerts in Azure](./alerts-unified-log.md).+- Learn about [log search alerts in Azure](./alerts-types.md#log-alerts). - Learn about [alerts in Azure](./alerts-overview.md). |
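The template deployment referenced above is an ordinary resource group deployment. As a sketch with hypothetical file names, where the template defines a `Microsoft.Insights/metricAlerts` resource scoped to the Log Analytics workspace that exposes the log-based metric:

```azurecli
# Hypothetical file names; the template follows the metric alerts for logs
# resource template pattern that the article describes.
az deployment group create \
  --resource-group myRG \
  --template-file metricAlertFromLogs.template.json \
  --parameters @metricAlertFromLogs.parameters.json
```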
azure-monitor | Alerts Non Common Schema Definitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-non-common-schema-definitions.md | -The noncommon alert schema were historically used to customize alert email templates and webhook schemas for metric, log, and activity log alert rules. We recommend using the [common schema](./alerts-common-schema.md) for all alert types and integrations. +The noncommon alert schema were historically used to customize alert email templates and webhook schemas for metric, log search, and activity log alert rules. We recommend using the [common schema](./alerts-common-schema.md) for all alert types and integrations. This article describes the noncommon alert schema definitions for Azure Monitor, including definitions for: - Webhooks See sample values for metric alerts. } ``` -## Log alerts +## Log search alerts -See sample values for log alerts. +See sample values for log search alerts. ### monitoringService = Log Alerts V1 – Metric |
azure-monitor | Alerts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md | This table provides a brief description of each alert type. For more information |Alert type|Description| |:|:| |[Metric alerts](alerts-types.md#metric-alerts)|Metric alerts evaluate resource metrics at regular intervals. Metrics can be platform metrics, custom metrics, logs from Azure Monitor converted to metrics, or Application Insights metrics. Metric alerts can also apply multiple conditions and dynamic thresholds.|-|[Log alerts](alerts-types.md#log-alerts)|Log alerts allow users to use a Log Analytics query to evaluate resource logs at a predefined frequency.| +|[Log search alerts](alerts-types.md#log-alerts)|Log search alerts allow users to use a Log Analytics query to evaluate resource logs at a predefined frequency.| |[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches defined conditions. Resource Health alerts and Service Health alerts are activity log alerts that report on your service and resource health.| |[Smart detection alerts](alerts-types.md#smart-detection-alerts)|Smart detection on an Application Insights resource automatically warns you of potential performance problems and failure anomalies in your web application. You can migrate smart detection on your Application Insights resource to create alert rules for the different smart detection modules.| |[Prometheus alerts](alerts-types.md#prometheus-alerts)|Prometheus alerts are used for alerting on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). The alert rules are based on the PromQL open-source query language.| The alert condition for stateful alerts is `fired`, until it is considered resol For stateful alerts, while the alert itself is deleted after 30 days, the alert condition is stored until the alert is resolved, to prevent firing another alert, and so that notifications can be sent when the alert is resolved. -Stateful log alerts have limitations - details [here](/azure/azure-monitor/service-limits#alerts). +Stateful log search alerts have limitations - details [here](/azure/azure-monitor/service-limits#alerts). This table describes when a stateful alert is considered resolved: |Alert type |The alert is resolved when | ||| |Metric alerts|The alert condition isn't met for three consecutive checks.|-|Log alerts| The alert condition isn't met for a specific time range. The time range differs based on the frequency of the alert:<ul> <li>**1 minute**: The alert condition isn't met for 10 minutes.</li> <li>**5 to 15 minutes**: The alert condition isn't met for three frequency periods.</li> <li>**15 minutes to 11 hours**: The alert condition isn't met for two frequency periods.</li> <li>**11 to 12 hours**: The alert condition isn't met for one frequency period.</li></ul>| +|Log search alerts| The alert condition isn't met for a specific time range. 
The time range differs based on the frequency of the alert:<ul> <li>**1 minute**: The alert condition isn't met for 10 minutes.</li> <li>**5 to 15 minutes**: The alert condition isn't met for three frequency periods.</li> <li>**15 minutes to 11 hours**: The alert condition isn't met for two frequency periods.</li> <li>**11 to 12 hours**: The alert condition isn't met for one frequency period.</li></ul>| ## Recommended alert rules For metric alert rules for Azure services that don't support multiple resources, Each metric alert rule is charged based on the number of time series that are monitored. -### Log alerts +### Log search alerts -Use [log alert rules](alerts-create-log-alert-rule.md) to monitor all resources that send data to the Log Analytics workspace. These resources can be from any subscription or region. Use data collection rules when setting up your Log Analytics workspace to collect the required data for your log alerts rule. +Use [log search alert rules](alerts-create-log-alert-rule.md) to monitor all resources that send data to the Log Analytics workspace. These resources can be from any subscription or region. Use data collection rules when setting up your Log Analytics workspace to collect the required data for your log search alert rule. You can also create resource-centric alerts instead of workspace-centric alerts by using **Split by dimensions**. When you split on the resourceId column, you will get one alert per resource that meets the condition. -Log alert rules that use splitting by dimensions are charged based on the number of time series created by the dimensions resulting from your query. If the data is already collected to a Log Analytics workspace, there is no additional cost. +Log search alert rules that use splitting by dimensions are charged based on the number of time series created by the dimensions resulting from your query. If the data is already collected to a Log Analytics workspace, there is no additional cost. If you use metric data at scale in the Log Analytics workspace, pricing will change based on the data ingestion. |
azure-monitor | Alerts Payload Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-payload-samples.md | -The common alert schema standardizes the consumption experience for alert notifications in Azure. Historically, activity log, metric, and log alerts each had their own email templates and webhook schemas. The common alert schema provides one standardized schema for all alert notifications. +The common alert schema standardizes the consumption experience for alert notifications in Azure. Historically, activity log, metric, and log search alerts each had their own email templates and webhook schemas. The common alert schema provides one standardized schema for all alert notifications. A standardized schema can help you minimize the number of integrations, which simplifies the process of managing and maintaining your integrations. The following are sample metric alert payloads. } ``` -## Sample log alerts +## Sample log search alerts > [!NOTE]-> When you enable the common schema, the fields in the payload are reset to the common schema fields. Therefore, log alerts have these limitations regarding the common schema: -> - The common schema is not supported for log alerts using webhooks with a custom email subject and/or JSON payload, since the common schema overwrites the custom configurations. -> - Alerts using the common schema have an upper size limit of 256 KB per alert. If the log alerts payload includes search results that cause the alert to exceed the maximum size, the search results aren't embedded in the log alerts payload. You can check if the payload includes the search results with the `IncludedSearchResults` flag. Use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get) if the search results are not included. +> When you enable the common schema, the fields in the payload are reset to the common schema fields. Therefore, log search alerts have these limitations regarding the common schema: +> - The common schema is not supported for log search alerts using webhooks with a custom email subject and/or JSON payload, since the common schema overwrites the custom configurations. +> - Alerts using the common schema have an upper size limit of 256 KB per alert. If the log search alerts payload includes search results that cause the alert to exceed the maximum size, the search results aren't embedded in the log search alerts payload. You can check if the payload includes the search results with the `IncludedSearchResults` flag. Use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get) if the search results are not included. -### Log alert with monitoringService = Platform +### Log search alert with monitoringService = Platform ```json { The following are sample metric alert payloads. } } ```-### Log alert with monitoringService = Application Insights +### Log search alert with monitoringService = Application Insights ```json { The following are sample metric alert payloads. } ``` -### Log alert with monitoringService = Log Alerts V2 +### Log search alert with monitoringService = Log Alerts V2 > [!NOTE]-> Log alert rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when you use this version. 
Use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload. +> Log search alert rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log search alerts payload when you use this version. Use [dimensions](./alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload. ```json { The following are sample metric alert payloads. } ``` -### Sample test action log alerts +### Sample test action log search alerts -#### Test action log alert V1 – Metric +#### Test action log search alert V1 – Metric ```json { The following are sample metric alert payloads. } ``` -#### Test action log alert V1 - Numresults +#### Test action log search alert V1 - Numresults ```json { The following are sample metric alert payloads. } ``` -#### Test action log alert V2 +#### Test action log search alert V2 > [!NOTE]-> Log alerts rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when you use this version. Use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts. +> Log search alerts rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log search alerts payload when you use this version. Use [dimensions](./alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload. |
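For orientation, the following is a heavily trimmed, hypothetical sketch of a common-schema notification for a log search alert. All field values are placeholders, and the exact set and casing of `alertContext` fields varies by `monitoringService`, so rely on the full samples in the article rather than this fragment.

```json
{
  "schemaId": "azureMonitorCommonAlertSchema",
  "data": {
    "essentials": {
      "alertId": "/subscriptions/<subscription-id>/providers/Microsoft.AlertsManagement/alerts/<alert-guid>",
      "alertRule": "<rule-name>",
      "severity": "Sev2",
      "signalType": "Log",
      "monitorCondition": "Fired",
      "monitoringService": "Log Alerts V2",
      "firedDateTime": "2024-02-14T10:05:00.000Z"
    },
    "alertContext": {
      "condition": {
        "windowSize": "PT15M",
        "allOf": [
          {
            "searchQuery": "<query text>",
            "operator": "GreaterThan",
            "threshold": "0",
            "linkToSearchResultsAPI": "https://api.loganalytics.io/v1/workspaces/<workspace-id>/query?query=<encoded-query>&timespan=<timespan>"
          }
        ]
      }
    }
  }
}
```

Because search results aren't embedded for rules on API version 2020-05-01, the `linkToSearchResultsAPI`-style links are the supported way to retrieve the underlying rows.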
azure-monitor | Alerts Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-plan.md | Title: 'Plan your Alerts and automated actions' + Title: 'Plan your alerts and automated actions' description: Recommendations for deployment of Azure Monitor alerts and automated actions. Alerts in Azure Monitor are created by alert rules that you must create. For gui Multiple types of alert rules are defined by the type of data they use. Each has different capabilities and a different cost. The basic strategy is to use the alert rule type with the lowest cost that provides the logic you require. - Activity log rules. Creates an alert in response to a new activity log event that matches specified conditions. There's no cost to these alerts so they should be your first choice, although the conditions they can detect are limited. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating an activity log alert.-- Metric alert rules. Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful, which means that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There's a cost to metric alerts, but it's often much less than log alerts. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a metric alert.-- Log alert rules. Creates an alert when the results of a schedule query match specified criteria. They're the most expensive of the alert rules, but they allow the most complex criteria. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a log query alert.+- Metric alert rules. Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful, which means that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There's a cost to metric alerts, but it's often much less than log search alerts. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a metric alert. +- Log search alert rules. Creates an alert when the results of a scheduled query match specified criteria. They're the most expensive of the alert rules, but they allow the most complex criteria. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a log search query alert. - [Application alerts](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). Performs proactive performance and availability testing of your web application. You can perform a ping test at no cost, but there's a cost to more complex testing. See [Monitor the availability of any website](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) for a description of the different tests and information on creating them. ## Alert severity You want to create alerts for any important information in your environment. But - See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) to determine whether a symptom is an appropriate candidate for alerting. 
- Use the **Automatically resolve alerts** option in metric alert rules to resolve alerts when the condition has been corrected.-- Use the **Suppress alerts** option in log query alert rules to avoid creating multiple alerts for the same issue.+- Use the **Suppress alerts** option in log search alert rules to avoid creating multiple alerts for the same issue. - Ensure that you use appropriate severity levels for alert rules so that high-priority issues can be analyzed together. - Limit notifications for alerts with a severity of Warning or less because they don't require immediate attention. Typically, you'll want to alert on issues for all your critical Azure applicatio - Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Supported resources for metric alerts in Azure Monitor](alerts-metric-near-real-time.md). - For metric alert rules for Azure services that don't support multiple resources, use automation tools such as the Azure CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. For samples, see [Resource Manager template samples for metric alert rules in Azure Monitor](resource-manager-alerts-metric.md).-- To return data for multiple resources, write queries in log query alert rules. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource.+- To return data for multiple resources, write queries in log search alert rules. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource. > [!NOTE]-> Resource-centric log query alert rules currently in public preview allow you to use all resources in a subscription or resource group as a target for a log query alert. +> Resource-centric log search alert rules currently in public preview allow you to use all resources in a subscription or resource group as a target for a log search alert. ## Next steps |
azure-monitor | Alerts Resource Move | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-resource-move.md | Navigate to Alerts > Alert processing rules (preview) > filter by the containing ## Next steps -Learn about fixing other problems with [alert notifications](alerts-troubleshoot.md), [metric alerts](alerts-troubleshoot-metric.md), and [log alerts](alerts-troubleshoot-log.md). +Learn about fixing other problems with [alert notifications](alerts-troubleshoot.md), [metric alerts](alerts-troubleshoot-metric.md), and [log search alerts](alerts-troubleshoot-log.md). |
azure-monitor | Alerts Troubleshoot Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md | Title: Troubleshoot log alerts in Azure Monitor | Microsoft Docs -description: Common issues, errors, and resolutions for log alert rules in Azure. + Title: Troubleshoot log search alerts in Azure Monitor | Microsoft Docs +description: Common issues, errors, and resolutions for log search alert rules in Azure. Last updated 06/20/2023 -# Troubleshoot log alerts in Azure Monitor +# Troubleshoot log search alerts in Azure Monitor -This article describes how to resolve common issues with log alerts in Azure Monitor. It also provides solutions to common problems with the functionality and configuration of log alerts. +This article describes how to resolve common issues with log search alerts in Azure Monitor. It also provides solutions to common problems with the functionality and configuration of log search alerts. -You can use log alerts to evaluate resources logs every set frequency by using a [Log Analytics](../logs/log-analytics-tutorial.md) query, and fire an alert that's based on the results. Rules can trigger one or more actions using [Action Groups](./action-groups.md). To learn more about functionality and terminology of log alerts, see [Log alerts in Azure Monitor](alerts-unified-log.md). +You can use log search alerts to evaluate resources logs every set frequency by using a [Log Analytics](../logs/log-analytics-tutorial.md) query, and fire an alert that's based on the results. Rules can trigger one or more actions using [Action Groups](./action-groups.md). To learn more about functionality and terminology of log search alerts, see [Log search alerts in Azure Monitor](alerts-types.md#log-alerts). > [!NOTE] > This article doesn't consider cases where the Azure portal shows that an alert rule was triggered but a notification isn't received. For such cases, see [Action or notification on my alert did not work as expected](./alerts-troubleshoot.md#action-or-notification-on-my-alert-did-not-work-as-expected). -## Log alert didn't fire +## Log search alert didn't fire ### Data ingestion time for logs To mitigate latency, the system retries the alert evaluation multiple times. Aft ### Actions are muted or alert rule is defined to resolve automatically -Log alerts provide an option to mute fired alert actions for a set amount of time using **Mute actions** and to only fire once per condition being met using **Automatically resolve alerts**. +Log search alerts provide an option to mute fired alert actions for a set amount of time using **Mute actions** and to only fire once per condition being met using **Automatically resolve alerts**. A common issue is that you think that the alert didn't fire, but it was actually the rule configuration. A common issue is that you think that the alert didn't fire, but it was actually ### Alert scope resource has been moved, renamed, or deleted -When you author an alert rule, Log Analytics creates a permission snapshot for your user ID. This snapshot is saved in the rule and contains the rule scope resource, Azure Resource Manager ID. If the rule scope resource moves, gets renamed, or is deleted, all log alert rules that refer to that resource will break. To work correctly, alert rules need to be recreated using the new Azure Resource Manager ID. +When you author an alert rule, Log Analytics creates a permission snapshot for your user ID. 
This snapshot is saved in the rule and contains the rule scope resource's Azure Resource Manager ID. If the rule scope resource moves, gets renamed, or is deleted, all log search alert rules that refer to that resource will break. To work correctly, alert rules need to be recreated using the new Azure Resource Manager ID. ### The alert rule uses a system-assigned managed identity -When you create a log alert rule with system-assigned managed identity, the identity is created without any permissions. After you create the rule, you need to assign the appropriate roles to the rule's identity so that it can access the data you want to query. For example, you might need to give it a Reader role for the relevant Log Analytics workspaces, or a Reader role and a Database Viewer role for the relevant ADX cluster. See [managed identities](/azure/azure-monitor/alerts/alerts-create-log-alert-rule#configure-the-alert-rule-details) for more information about using managed identities in log alerts. +When you create a log search alert rule with system-assigned managed identity, the identity is created without any permissions. After you create the rule, you need to assign the appropriate roles to the rule's identity so that it can access the data you want to query. For example, you might need to give it a Reader role for the relevant Log Analytics workspaces, or a Reader role and a Database Viewer role for the relevant ADX cluster. See [managed identities](/azure/azure-monitor/alerts/alerts-create-log-alert-rule#configure-the-alert-rule-details) for more information about using managed identities in log search alerts. ### Metric measurement alert rule with splitting using the legacy Log Analytics API -[Metric measurement](alerts-unified-log.md#calculation-of-a-value) is a type of log alert that's based on summarized time series results. You can use these rules to group by columns to [split alerts](alerts-unified-log.md#split-by-alert-dimensions). If you're using the legacy Log Analytics API, splitting doesn't work as expected because it doesn't support grouping. +[Metric measurement](alerts-types.md#log-alerts) is a type of log search alert that's based on summarized time series results. You can use these rules to group by columns to [split alerts](alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1). If you're using the legacy Log Analytics API, splitting doesn't work as expected because it doesn't support grouping. -You can use the current ScheduledQueryRules API to set **Aggregate On** in [Metric measurement](alerts-unified-log.md#calculation-of-a-value) rules, which work as expected. To learn more about switching to the current ScheduledQueryRules API, see [Upgrade to the current Log Alerts API from legacy Log Analytics Alert API](./alerts-log-api-switch.md). +You can use the current ScheduledQueryRules API to set **Aggregate On** in [Metric measurement](alerts-types.md#log-alerts) rules, which work as expected. To learn more about switching to the current ScheduledQueryRules API, see [Upgrade to the current Log Alerts API from legacy Log Analytics Alert API](./alerts-log-api-switch.md). ### Override query time range+ As part of the alert configuration, in the "Advanced options" section, there is an option to configure the "Override query time range" parameter. If you want the alert evaluation period to be different than the query time range, enter a time range here. The alert time range is limited to a maximum of two days.
Even if the query contains an ago command with a time range of longer than two days, the two-day maximum time range is applied. For example, even if the query text contains ago(7d), the query only scans up to two days of data. If the query requires more data than the alert evaluation, you can change the time range manually. If there's ago command in the query, it will be changed automatically to be 2 days (48 hours). -## Log alert fired unnecessarily +## Log search alert fired unnecessarily -A configured [log alert rule in Azure Monitor](./alerts-log.md) might be triggered unexpectedly. The following sections describe some common reasons. +A configured [log search alert rule in Azure Monitor](./alerts-log.md) might be triggered unexpectedly. The following sections describe some common reasons. ### Alert triggered by partial data Azure Monitor processes terabytes of customers' logs from across the world, whic Logs are semi-structured data and are inherently more latent than metrics. If you're experiencing many misfires in fired alerts, you should consider using [metric alerts](alerts-metric-overview.md). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md). -Log alerts work best when you try to detect data in the logs. It works less well when you try to detect lack of data in the logs, like alerting on virtual machine heartbeat. +Log search alerts work best when you try to detect data in the logs. It works less well when you try to detect lack of data in the logs, like alerting on virtual machine heartbeat. There are built-in capabilities to prevent false alerts, but they can still occur on very latent data (over ~30 minutes) and data with latency spikes. -## Log alert was disabled +## Log search alert was disabled -The following sections list some reasons why Azure Monitor might disable a log alert rule. After those sections, there's an [example of the activity log that is sent when a rule is disabled](#activity-log-example-when-rule-is-disabled). +The following sections list some reasons why Azure Monitor might disable a log search alert rule. After those sections, there's an [example of the activity log that is sent when a rule is disabled](#activity-log-example-when-rule-is-disabled). ### Alert scope no longer exists or was moved When the scope resources of an alert rule are no longer valid, rule execution fails, and billing stops. -If a log alert fails continuously for a week, Azure Monitor disables it. +If a log search alert fails continuously for a week, Azure Monitor disables it. -### Query used in a log alert isn't valid +### <a name="query-used-in-a-log-alert-isnt-valid"></a>Query used in a log search alert isn't valid -When a log alert rule is created, the query is validated for correct syntax. But sometimes, the query provided in the log alert rule can start to fail. Some common reasons are: +When a log search alert rule is created, the query is validated for correct syntax. But sometimes, the query provided in the log search alert rule can start to fail. Some common reasons are: - Rules were created via the API, and validation was skipped by the user. - The query [runs on multiple resources](../logs/cross-workspace-query.md), and one or more of the resources was deleted or moved. When a log alert rule is created, the query is validated for correct syntax. But - [Custom logs tables](../agents/data-sources-custom-logs.md) aren't yet created, because the data flow hasn't started. 
- Changes in [query language](/azure/kusto/query/) include a revised format for commands and functions, so the query provided earlier is no longer valid. -[Azure Advisor](../../advisor/advisor-overview.md) warns you about this behavior. It adds a recommendation about the affected log alert rule. The category used is 'High Availability' with medium impact and a description of 'Repair your log alert rule to ensure monitoring'. +[Azure Advisor](../../advisor/advisor-overview.md) warns you about this behavior. It adds a recommendation about the affected log search alert rule. The category used is 'High Availability' with medium impact and a description of 'Repair your log search alert rule to ensure monitoring'. ## Alert rule quota was reached For details about the number of log search alert rules per subscription and maxi If you've reached the quota limit, the following steps might help resolve the issue. 1. Delete or disable log search alert rules that aren't used anymore.-1. Use [splitting of alerts by dimensions](alerts-unified-log.md#split-by-alert-dimensions) to reduce rules count. These rules can monitor many resources and detection cases. +1. Use [splitting of alerts by dimensions](alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) to reduce rules count. These rules can monitor many resources and detection cases. 1. If you need the quota limit to be increased, continue to open a support request, and provide the following information: - The Subscription IDs and Resource IDs for which the quota limit needs to be increased If you've reached the quota limit, the following steps might help resolve the is - The resource type for the quota increase, such as **Log Analytics** or **Application Insights** - The requested quota limit -### To check the current usage of new log alert rules +### To check the current usage of new log search alert rules #### From the Azure portal The total number of log search alert rules is displayed above the rules list. ## Activity log example when rule is disabled -If query fails for seven days continuously, Azure Monitor disables the log alert and stops the billing of the rule. You can see the exact time when Azure Monitor disabled the log alert in the [Azure activity log](../../azure-monitor/essentials/activity-log.md). +If a query fails for seven days continuously, Azure Monitor disables the log search alert and stops the billing of the rule. You can see the exact time when Azure Monitor disabled the log search alert in the [Azure activity log](../../azure-monitor/essentials/activity-log.md). See this example: ``` ## Query syntax validation error+ If you get an error message that says "Couldn't validate the query syntax because the service can't be reached", it could be either: - A query syntax error. - A problem connecting to the service that validates the query. Try the following steps to resolve the problem: - Flush the DNS cache on your local machine, by opening a command prompt and running the following command: `ipconfig /flushdns`, and then check again. If you still get the same error message, try the next step. - Copy and paste this URL into the browser: [https://api.loganalytics.io/v1/version](https://api.loganalytics.io/v1/version). If you get an error, contact your IT administrator to allow the IP addresses associated with **api.loganalytics.io** listed [here](../ip-addresses.md#application-insights-and-log-analytics-apis). 
- ## Next steps -- Learn about [log alerts in Azure](./alerts-unified-log.md).-- Learn more about [configuring log alerts](../logs/log-query-overview.md).+- Learn about [log search alerts in Azure](./alerts-types.md#log-alerts). +- Learn more about [configuring log search alerts](../logs/log-query-overview.md). - Learn more about [log queries](../logs/log-query-overview.md). |
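As a sketch of the managed-identity point above: a system-assigned identity is declared on the rule resource itself, and only afterward do you grant that identity access (for example, the Reader role on the target workspace) by assigning roles to its `principalId`. The fragment below follows the ScheduledQueryRules resource shape with rule properties omitted; treat the exact API version that supports the `identity` block as something to verify in the reference documentation.

```json
{
  "type": "Microsoft.Insights/scheduledQueryRules",
  "name": "contoso-log-search-alert",
  "location": "eastus",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {}
}
```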
azure-monitor | Alerts Troubleshoot Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md | If you believe your metric alert shouldn't have fired but it did, the following - Edit the alert rule in the Azure portal. See if the **Automatically resolve alerts** checkbox under the **Alert rule details** section is cleared. - Review the script used to deploy the alert rule or retrieve the alert rule definition. Check if the `autoMitigate` property is set to `false`.+ ## Can't find the metric to alert on If you want to alert on a specific metric but you can't see it when you create an alert rule, check to determine: - If you can't see any metrics for the resource, [check if the resource type is supported for metric alerts](./alerts-metric-near-real-time.md). - If you can see some metrics for the resource but can't find a specific metric, [check if that metric is available](../essentials/metrics-supported.md). If so, see the metric description to check if it's only available in specific versions or editions of the resource.-- If the metric isn't available for the resource, it might be available in the resource logs and can be monitored by using log alerts. For more information, see how to [collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md).+- If the metric isn't available for the resource, it might be available in the resource logs and can be monitored by using log search alerts. For more information, see how to [collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md). ## Can't find the metric to alert on: Virtual machines guest metrics For more information about collecting data from the guest operating system of a > [!NOTE] > If you configured guest metrics to be sent to a Log Analytics workspace, the metrics appear under the Log Analytics workspace resource and start showing data *only* after you create an alert rule that monitors them. To do so, follow the steps to [configure a metric alert for logs](./alerts-metric-logs.md#configuring-metric-alert-for-logs). -Currently, monitoring a guest metric for multiple virtual machines with a single alert rule isn't supported by metric alerts. But you can use a [log alert rule](./alerts-unified-log.md). To do so, make sure the guest metrics are collected to a Log Analytics workspace and create a log alert rule on the workspace. +Currently, monitoring a guest metric for multiple virtual machines with a single alert rule isn't supported by metric alerts. But you can use a [log search alert rule](./alerts-types.md#log-alerts). To do so, make sure the guest metrics are collected to a Log Analytics workspace and create a log search alert rule on the workspace. + ## Can't find the metric dimension to alert on If you want to alert on [specific dimension values of a metric](./alerts-metric-overview.md#using-dimensions) but you can't find these values: If you've reached the quota limit, the following steps might help resolve the is - Requested quota limit. ## `Metric not found` error:+ - **For a platform metric:** Make sure you're using the **Metric** name from [the Azure Monitor supported metrics page](../essentials/metrics-supported.md) and not the **Metric Display Name**. - **For a custom metric:** Make sure that the metric is already being emitted because you can't create an alert rule on a custom metric that doesn't yet exist. Also ensure that you're providing the custom metric's namespace. 
For a Resource Manager template example, see [Create a metric alert with a Resource Manager template](./alerts-metric-create-templates.md#template-for-a-static-threshold-metric-alert-that-monitors-a-custom-metric). - If you're creating [metric alerts on logs](./alerts-metric-logs.md), ensure appropriate dependencies are included. For a sample template, see [Create Metric Alerts for Logs in Azure Monitor](./alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs). To resolve this, we recommend that you either: • Use the **StartsWith** operator if the dimension values have common names. • If relevant, configure the rule to monitor all dimension values if there's no need to individually monitor the specific dimension values. - ## No permissions to create metric alert rules To create a metric alert rule, you must have the following permissions: - Read permission on the target resource of the alert rule. - Write permission on the resource group in which the alert rule is created. If you're creating the alert rule from the Azure portal, the alert rule is created by default in the same resource group in which the target resource resides. - Read permission on any action group associated to the alert rule, if applicable.+ ## Considerations when creating an alert rule that contains multiple criteria+ - You can only select one value per dimension within each criterion. - You can't use an asterisk (\*) as a dimension value. - When metrics that are configured in different criteria support the same dimension, a configured dimension value must be explicitly set in the same way for all those metrics. For a Resource Manager template example, see [Create a metric alert with a Resource Manager template](./alerts-metric-create-templates.md#template-for-a-static-threshold-metric-alert-that-monitors-multiple-criteria).+ ## Check the total number of metric alert rules To check the current usage of metric alert rules, follow the next steps. |
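The multi-criteria considerations above are easier to see in a concrete shape. This is a hypothetical `Microsoft.Insights/metricAlerts` (2018-03-01) fragment, not one of the linked samples: both criteria monitor the same storage account, and the shared `ApiName` dimension is set explicitly and identically in each criterion, as the guidance requires. Metric names, thresholds, and the dimension value are illustrative only.

```json
{
  "type": "Microsoft.Insights/metricAlerts",
  "apiVersion": "2018-03-01",
  "name": "contoso-storage-multi-criteria",
  "location": "global",
  "properties": {
    "severity": 3,
    "enabled": true,
    "scopes": [ "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" ],
    "evaluationFrequency": "PT5M",
    "windowSize": "PT15M",
    "criteria": {
      "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
      "allOf": [
        {
          "criterionType": "StaticThresholdCriterion",
          "name": "LatencyCriterion",
          "metricName": "SuccessE2ELatency",
          "dimensions": [ { "name": "ApiName", "operator": "Include", "values": [ "GetBlob" ] } ],
          "operator": "GreaterThan",
          "threshold": 250,
          "timeAggregation": "Average"
        },
        {
          "criterionType": "StaticThresholdCriterion",
          "name": "TransactionsCriterion",
          "metricName": "Transactions",
          "dimensions": [ { "name": "ApiName", "operator": "Include", "values": [ "GetBlob" ] } ],
          "operator": "GreaterThan",
          "threshold": 1000,
          "timeAggregation": "Total"
        }
      ]
    },
    "autoMitigate": true,
    "actions": []
  }
}
```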
azure-monitor | Alerts Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md | This article discusses common problems in Azure Monitor alerting and notificatio You can see fired alerts in the Azure portal. -Refer to these articles for troubleshooting information about metric or log alerts that are not behaving as expected: +Refer to these articles for troubleshooting information about metric or log search alerts that are not behaving as expected: - [Troubleshoot Azure Monitor metric alerts](alerts-troubleshoot-metric.md)-- [Troubleshoot Azure Monitor log alerts](alerts-troubleshoot-log.md)+- [Troubleshoot Azure Monitor log search alerts](alerts-troubleshoot-log.md) If the alert fires as intended according to the Azure portal but the proper notifications do not occur, use the information in the rest of this article to troubleshoot that problem. If you can see a fired alert in the portal, but a related alert processing rule Here is an example of an alert processing rule adding another action group: <!-- convertborder later --> :::image type="content" source="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" lightbox="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" alt-text="Screenshot of action repeated in multiple action groups." border="false":::- 1. **Does the alert processing rule scope and filter match the fired alert?** If you think the alert processing rule should have fired but didn't, or that it shouldn't have fired but it did, carefully examine the alert processing rule scope and filter conditions versus the properties of the fired alert. - ## How to find the alert ID of a fired alert When opening a case about a specific fired alert (such as – if you did not receive its email notification), you need to provide the alert ID. If you received an error while trying to create, update or delete an [alert proc Check the [alert processing rule documentation](../alerts/alerts-action-rules.md), or the [alert processing rule PowerShell Set-AzActionRule](/powershell/module/az.alertsmanagement/set-azalertprocessingrule) command. ## Next steps-- If using a log alert, also see [Troubleshooting Log Alerts](./alerts-troubleshoot-log.md).++- If using a log search alert, also see [Troubleshooting Log Search Alerts](./alerts-troubleshoot-log.md). - Go back to the [Azure portal](https://portal.azure.com) to check if you solved your issue with guidance in this article. |
azure-monitor | Alerts Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md | For more information about pricing, see the [pricing page](https://azure.microso The types of alerts are: - [Metric alerts](#metric-alerts)-- [Log alerts](#log-alerts)+- [Log search alerts](#log-alerts) - [Activity log alerts](#activity-log-alerts) - [Service Health alerts](#service-health-alerts) - [Resource Health alerts](#resource-health-alerts) The types of alerts are: |Alert type |When to use |Pricing information| |||| |Metric alert|Metric data is stored in the system already pre-computed. Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Use metric alerts if the data you want to monitor is available in metric data.|Each metric alert rule is charged based on the number of time series that are monitored. |-|Log alert|You can use log alerts to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of Kusto Query Language (KQL) for data manipulation by using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated. More frequent query evaluation results in a higher cost. For log alerts configured for at-scale monitoring using splitting by dimensions, the cost also depends on the number of time series created by the dimensions resulting from your query. | +|Log search alert|You can use log search alerts to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of Kusto Query Language (KQL) for data manipulation by using log search alerts.|Each log search alert rule is billed based on the interval at which the log query is evaluated. More frequent query evaluation results in a higher cost. For log search alerts configured for at-scale monitoring using splitting by dimensions, the cost also depends on the number of time series created by the dimensions resulting from your query. | |Activity log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource like a restart, a shutdown, or the creation or deletion of a resource. Service Health alerts and Resource Health alerts let you know when there's an issue with one of your services or resources.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).| |Prometheus alerts|Prometheus alerts are used for alerting on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). The alert rules are based on the PromQL open-source query language. |Prometheus alert rules are only charged on the data queried by the rules. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/). | Dynamic thresholds help you: See [dynamic thresholds](alerts-dynamic-thresholds.md) for detailed instructions on using dynamic thresholds in metric alert rules. -## Log alerts +## <a name="log-alerts"></a>Log search alerts -A log alert rule monitors a resource by using a Log Analytics query to evaluate resource logs at a set frequency. If the conditions are met, an alert is fired. 
Because you can use Log Analytics queries, you can perform advanced logic operations on your data and use the robust KQL features to manipulate log data. +A log search alert rule monitors a resource by using a Log Analytics query to evaluate resource logs at a set frequency. If the conditions are met, an alert is fired. Because you can use Log Analytics queries, you can perform advanced logic operations on your data and use the robust KQL features to manipulate log data. -The target of the log alert rule can be: +The target of the log search alert rule can be: - A single resource, such as a VM. - A single container of resources, like a resource group or subscription. - Multiple resources that use a [cross-resource query](../logs/cross-workspace-query.md). -Log alerts can measure two different things, which can be used for different monitoring scenarios: +Log search alerts can measure two different things, which can be used for different monitoring scenarios: - **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. - **Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage. -You can configure if log alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). This feature is currently in preview. -Note that stateful log alerts have these limitations: +You can configure if log search alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). This feature is currently in preview. +Note that stateful log search alerts have these limitations: - they can trigger up to 300 alerts per evaluation. - you can have a maximum of 5000 alerts with the `fired` alert condition. > [!NOTE]-> Log alerts work best when you're trying to detect specific data in the logs, as opposed to when you're trying to detect a lack of data in the logs. Because logs are semi-structured data, they're inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you're trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs by using [metric alerts for logs](alerts-metric-logs.md). +> Log search alerts work best when you're trying to detect specific data in the logs, as opposed to when you're trying to detect a lack of data in the logs. Because logs are semi-structured data, they're inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you're trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs by using [metric alerts for logs](alerts-metric-logs.md). ### Monitor multiple instances of a resource using dimensions -You can use dimensions when you create log alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually. Notifications are sent for each instance. +You can use dimensions when you create log search alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually. Notifications are sent for each instance. 
### Monitor the same condition on multiple resources using splitting by dimensions To monitor for the same condition on multiple Azure resources, you can use split You might also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%. -### Use the API for log alert rules +### Use the API for log search alert rules Manage new rules in your workspaces by using the [ScheduledQueryRules](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) API. > [!NOTE]-> Log alerts for Log Analytics used to be managed by using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current ScheduledQueryRules API](alerts-log-api-switch.md). +> Log search alerts for Log Analytics used to be managed by using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current ScheduledQueryRules API](alerts-log-api-switch.md). -### Log alerts on your Azure bill +### Log search alerts on your Azure bill -Log alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with: -- Log alerts on Application Insights shown with the exact resource name along with resource group and alert properties.-- Log alerts on Log Analytics are shown with the exact resource name along with resource group and alert properties when they're created by using the scheduledQueryRules API.-- Log alerts created from the [legacy Log Analytics API](./api-alerts.md) aren't tracked [Azure resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have the resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log alerts on the legacy API are shown with the preceding hidden resource name along with resource group and alert properties.+Log search alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with: +- Log search alerts on Application Insights shown with the exact resource name along with resource group and alert properties. +- Log search alerts on Log Analytics are shown with the exact resource name along with resource group and alert properties when they're created by using the scheduledQueryRules API. +- Log search alerts created from the [legacy Log Analytics API](./api-alerts.md) aren't tracked [Azure resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have the resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log search alerts on the legacy API are shown with the preceding hidden resource name along with resource group and alert properties. > [!Note] > Unsupported resource characters like <, >, %, &, \, ? and / are replaced with an underscore (_) in the hidden resource names. This character change is also reflected in the billing information. |
azure-monitor | Alerts Understand Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-understand-migration.md | Classic alerts are [retired](./monitoring-classic-retirement.md) for public clou This article explains how the manual migration and voluntary migration tool work, which will be used to migrate remaining alert rules. It also describes solutions for some common problems. > [!IMPORTANT]-> Activity log alerts (including Service health alerts) and Log alerts are not impacted by the migration. The migration only applies to classic alert rules described [here](./monitoring-classic-retirement.md#retirement-of-classic-monitoring-and-alerting-platform). +> Activity log alerts (including Service health alerts) and log search alerts are not impacted by the migration. The migration only applies to classic alert rules described [here](./monitoring-classic-retirement.md#retirement-of-classic-monitoring-and-alerting-platform). > [!NOTE] > If your classic alert rules are invalid i.e. they are on [deprecated metrics](#classic-alert-rules-on-deprecated-metrics) or resources that have been deleted, they will not be migrated and will not be available after service is retired. Customers that are interested in manually migrating their remaining alerts can a Before you can create new metric alerts on guest metrics, the guest metrics must be sent to the Azure Monitor logs store. Follow these instructions to create alerts: - [Enabling guest metrics collection to log analytics](../agents/agent-data-sources.md)-- [Creating log alerts in Azure Monitor](./alerts-log.md)+- [Creating log search alerts in Azure Monitor](./alerts-log.md) There are more options to collect guest metrics and alert on them, [learn more](../agents/agents-overview.md). |
azure-monitor | Api Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/api-alerts.md | Last updated 06/20/2023 -# Legacy Log Analytics alerts REST API +# Legacy Log Analytics Alert REST API This article describes how to manage alert rules using the legacy API. > [!IMPORTANT]-> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), the Log Analytics alert API will be retired on October 1, 2025. You must transition to using the Scheduled Query Rules API for log alerts by that date. +> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), the Log Analytics Alert API will be retired on October 1, 2025. You must transition to using the Scheduled Query Rules API for log search alerts by that date. > Log Analytics workspaces created after June 1, 2019 use the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) to manage alert rules. [Switch to the current API](./alerts-log-api-switch.md) in older workspaces to take advantage of Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits). The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics. This article provides details about the API and several examples for performing different operations. Log Analytics-based query alerts fire every time the threshold is met or exceede For example, if `Suppress` is set for 30 minutes, the alert will fire the first time and send notifications configured. It will then wait for 30 minutes before notification for the alert rule is again used. In the interim period, the alert rule will continue to run. Only notification is suppressed by Log Analytics for a specified time regardless of how many times the alert rule fired in this period. -The `Suppress` property of a Log Analytics alert rule is specified by using the `Throttling` value. The suppression period is specified by using the `DurationInMinutes` value. +The `Suppress` property of a log search alert rule is specified by using the `Throttling` value. The suppression period is specified by using the `DurationInMinutes` value. The following sample response is for an action with only `Threshold`, `Severity`, and `Suppress` properties. armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Na ##### Customize WebhookPayload for an action group -By default, the webhook sent via an action group for Log Analytics has a fixed structure. But you can customize the JSON payload by using specific variables supported to meet requirements of the webhook endpoint. For more information, see [Webhook action for log alert rules](./alerts-log-webhook.md). +By default, the webhook sent via an action group for Log Analytics has a fixed structure. But you can customize the JSON payload by using specific variables supported to meet requirements of the webhook endpoint. For more information, see [Webhook action for log search alert rules](./alerts-log-webhook.md). The customized webhook details must be sent along with `ActionGroup` details. They'll be applied to all webhook URIs specified inside the action group. The following sample illustrates the use: armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Na ## Next steps * Use the [REST API to perform log searches](../logs/log-query-overview.md) in Log Analytics.-* Learn about [log alerts in Azure Monitor](./alerts-unified-log.md). 
-* Learn how to [create, edit, or manage log alert rules in Azure Monitor](./alerts-log.md). +* Learn about [log search alerts in Azure Monitor](./alerts-types.md#log-alerts). +* Learn how to [create, edit, or manage log search alert rules in Azure Monitor](./alerts-log.md). |
azure-monitor | It Service Management Connector Secure Webhook Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/it-service-management-connector-secure-webhook-connections.md | -Secure Webhook is an updated version of [IT Service Management Connector (ITSMC)](./itsmc-overview.md). Both versions allow you to create work items in an ITSM tool when Azure Monitor sends alerts. The functionality includes metric, log, and activity log alerts. +Secure Webhook is an updated version of [IT Service Management Connector (ITSMC)](./itsmc-overview.md). Both versions allow you to create work items in an ITSM tool when Azure Monitor sends alerts. The functionality includes metric, log search, and activity log alerts. ITSMC uses username and password credentials. Secure Webhook has stronger authentication because it uses Microsoft Entra ID. Microsoft Entra ID is Microsoft's cloud-based identity and access management service. It helps users sign in and access internal or external resources. Using Microsoft Entra ID with ITSM helps to identify Azure alerts (through the Microsoft Entra application ID) that were sent to the external system. |
azure-monitor | Itsm Connector Secure Webhook Connections Azure Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md | You can do this step by using the same [PowerShell commands](../alerts/action-gr After your application is registered with Microsoft Entra ID, you can create work items in your ITSM tool based on Azure alerts by using the Secure Webhook action in action groups. -Action groups provide a modular and reusable way of triggering actions for Azure alerts. You can use action groups with metric alerts, activity log alerts, and Log Analytics alerts in the Azure portal. +Action groups provide a modular and reusable way of triggering actions for Azure alerts. You can use action groups with metric alerts, activity log alerts, and log search alerts in the Azure portal. To learn more about action groups, see [Create and manage action groups in the Azure portal](../alerts/action-groups.md). |
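As a sketch of the Secure Webhook action described here, the following hypothetical `Microsoft.Insights/actionGroups` fragment registers a webhook receiver that authenticates with Microsoft Entra ID. The service URI, IDs, and API version are placeholders to replace with your own values and your ITSM partner's endpoint.

```json
{
  "type": "Microsoft.Insights/actionGroups",
  "apiVersion": "2023-01-01",
  "name": "contoso-itsm-secure-webhook",
  "location": "Global",
  "properties": {
    "groupShortName": "itsm",
    "enabled": true,
    "webhookReceivers": [
      {
        "name": "SecureWebhookToItsm",
        "serviceUri": "https://<your-itsm-endpoint>",
        "useCommonAlertSchema": true,
        "useAadAuth": true,
        "objectId": "<entra-application-object-id>",
        "identifierUri": "<entra-application-identifier-uri>",
        "tenantId": "<tenant-id>"
      }
    ]
  }
}
```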
azure-monitor | Itsmc Connections Servicenow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md | When you're successfully connected and synced: - Selected work items from the ServiceNow instance are imported into Log Analytics. You can view the summary of these work items on the **IT Service Management Connector** tile. -- You can create incidents from Log Analytics alerts or log records, or from Azure alerts in this ServiceNow instance.+- You can create incidents from log search alerts or log records, or from Azure alerts in this ServiceNow instance. > [!NOTE] > ServiceNow has a rate limit for requests per hour. To configure the limit, define **Inbound REST API rate limiting** in the ServiceNow instance. The payload that is sent to ServiceNow has a common structure. The structure has The structure of the payload for all alert types except log search V1 alert is [common schema](./alerts-common-schema.md). -For Log Search Alerts (V1 only), the structure is: +For Log search alerts (V1 only), the structure is: - Alert (alert rule name) : \<value> - Search Query : \<value> |
azure-monitor | Itsmc Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md | After you've installed ITSMC, and prepped your ITSM tool, create an ITSM connect ## Create ITSM work items from Azure alerts -After you create your ITSM connection, use the ITSM action in action groups to create work items in your ITSM tool based on Azure alerts. Action groups provide a modular and reusable way to trigger actions for your Azure alerts. You can use action groups with metric alerts, activity log alerts, and Log Analytics alerts in the Azure portal. +After you create your ITSM connection, use the ITSM action in action groups to create work items in your ITSM tool based on Azure alerts. Action groups provide a modular and reusable way to trigger actions for your Azure alerts. You can use action groups with metric alerts, activity log alerts, and log search alerts in the Azure portal. > [!NOTE] > Wait 30 minutes after you create the ITSM connection for the sync process to finish. To create an action group: > As of September 2022, we are starting the 3-year process of deprecating support for using ITSM actions to send alerts and events to ServiceNow. For information on the deprecated behavior, see [Use Azure alerts to create a ServiceNow alert or event work item](/previous-versions/azure/azure-monitor/alerts/alerts-create-itsm-work-items). > As of October 2023, we are not supporting UI creation of connector for using ITSM actions to send alerts and events to ServiceNow. Until full deprecation the action creation should be by [API](/rest/api/monitor/action-groups/create-or-update?tabs=HTTP). -1. In the last section of the interface for creating an ITSM action group, if the alert is a log alert, you can define how many work items will be created for each alert. For all other alert types, one work item is created per alert. +1. In the last section of the interface for creating an ITSM action group, if the alert is a log search alert, you can define how many work items will be created for each alert. For all other alert types, one work item is created per alert. :::image type="content" source="media/itsmc-definition/itsm-action-incident.png" lightbox="media/itsmc-definition/itsm-action-incident.png" alt-text="Screenshot that shows the ITSM Ticket area with an incident work item type."::: |
azure-monitor | Itsmc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md | This article describes how you can integrate Azure Monitor with supported IT Ser Azure services like Log Analytics and Azure Monitor provide tools to detect, analyze, and troubleshoot problems with your Azure and non-Azure resources. But the work items related to an issue typically reside in an ITSM product or service. -Azure Monitor provides a bidirectional connection between Azure and ITSM tools to help you resolve issues faster. You can create work items in your ITSM tool based on your Azure metric alerts, activity log alerts, and Log Analytics alerts. +Azure Monitor provides a bidirectional connection between Azure and ITSM tools to help you resolve issues faster. You can create work items in your ITSM tool based on your Azure metric alerts, activity log alerts, and log search alerts. Azure Monitor supports connections with the following ITSM tools: |
azure-monitor | Itsmc Troubleshoot Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-troubleshoot-overview.md | The following sections identify common symptoms, possible causes, and resolution ### In the incidents received from ServiceNow, the configuration item is blank **Cause**: The cause can be one of several reasons:-* The alert isn't a log alert. Configuration items are only supported by log alerts. +* The alert isn't a log search alert. Configuration items are only supported by log search alerts. * The search results don't include the **Computer** or **Resource** column. * The values in the configuration item field don't match an entry in the CMDB. **Resolution**: -* Check if the alert is a log alert. If it isn't a log alert, configuration items are not supported. +* Check if the alert is a log search alert. If it isn't a log search alert, configuration items are not supported. * If the search results don't have a Computer or Resource column, add them to the query. When you define a query for log search alerts, the query results must include the configuration item names in a column labeled "Computer", "Resource", "_ResourceId", or "ResourceId". This mapping enables the configuration items to be mapped to the ITSM payload. * Check that the values in the Computer and Resource columns are identical to the values in the CMDB. If they aren't, add a new entry to the CMDB with the matching values. |
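One way to satisfy the Computer/Resource requirement above is to make the alert query return the configuration item under one of the accepted labels. The following hypothetical criteria fragment (ScheduledQueryRules 2021-08-01 shape) summarizes by `Computer`, so the fired alert carries values that ITSMC can match against the CMDB; the table, threshold, and names are illustrative only.

```json
{
  "criteria": {
    "allOf": [
      {
        "query": "Event | where EventLevelName == 'Error' | summarize ErrorCount = count() by Computer",
        "metricMeasureColumn": "ErrorCount",
        "dimensions": [
          { "name": "Computer", "operator": "Include", "values": [ "*" ] }
        ],
        "timeAggregation": "Total",
        "operator": "GreaterThan",
        "threshold": 10,
        "failingPeriods": {
          "numberOfEvaluationPeriods": 1,
          "minFailingPeriodsToAlert": 1
        }
      }
    ]
  }
}
```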
azure-monitor | Log Alert Rule Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/log-alert-rule-health.md | This table describes the possible resource health status values for a log search | Resource health status | Description |Recommended steps| ||| |Available|There are no known issues affecting this log search alert rule.| |-|Unknown|This log search alert rule is currently disabled or in an unknown state.|[Log alert was disabled](alerts-troubleshoot-log.md#log-alert-was-disabled).| +|Unknown|This log search alert rule is currently disabled or in an unknown state.|[Log alert was disabled](alerts-troubleshoot-log.md#log-search-alert-was-disabled).| |Unknown reason|This log search alert rule is currently unavailable due to an unknown reason.|Check if the alert rule was recently created. Health status is updated after the rule completes its first evaluation.| |Degraded due to unknown reason|This log search alert rule is currently degraded due to an unknown reason.| | |Setting up resource health|Setting up Resource health for this resource.|Check if the alert rule was recently created. Health status is updated after the rule completes its first evaluation.| |
azure-monitor | Resource Manager Alerts Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-log.md | Title: Resource Manager template samples for log query alerts -description: Sample Azure Resource Manager templates to deploy Azure Monitor log query alerts. + Title: Resource Manager template samples for log search alerts +description: Sample Azure Resource Manager templates to deploy Azure Monitor log search alerts. -# Resource Manager template samples for log alert rules in Azure Monitor +# Resource Manager template samples for log search alert rules in Azure Monitor -This article includes samples of [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create and configure log query alerts in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template. +This article includes samples of [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create and configure log search alerts in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template. [!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)] resource alert 'Microsoft.Insights/scheduledQueryRules@2021-08-01' = { ## Number of results template (up to version 2018-04-16) -The following sample creates a [number of results alert rule](../alerts/alerts-unified-log.md#result-count). +The following sample creates a [number of results alert rule](../alerts/alerts-types.md#log-alerts). ### Notes resource logQueryAlert 'Microsoft.Insights/scheduledQueryRules@2018-04-16' = { ## Metric measurement template (up to version 2018-04-16) -The following sample creates a [metric measurement alert rule](../alerts/alerts-unified-log.md#calculation-of-a-value). +The following sample creates a [metric measurement alert rule](../alerts/alerts-types.md#log-alerts). ### Template file |
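To complement the samples referenced in this row, here is a minimal, hypothetical template for a single "number of results" log search alert rule using the 2021-08-01 `scheduledQueryRules` schema. The workspace parameter, query, and threshold are placeholders, and the template intentionally omits actions and tags.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workspaceResourceId": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Insights/scheduledQueryRules",
      "apiVersion": "2021-08-01",
      "name": "contoso-error-count-rule",
      "location": "eastus",
      "properties": {
        "displayName": "contoso-error-count-rule",
        "enabled": true,
        "severity": 3,
        "scopes": [ "[parameters('workspaceResourceId')]" ],
        "evaluationFrequency": "PT5M",
        "windowSize": "PT15M",
        "criteria": {
          "allOf": [
            {
              "query": "Event | where EventLevelName == 'Error'",
              "timeAggregation": "Count",
              "operator": "GreaterThan",
              "threshold": 5,
              "failingPeriods": {
                "numberOfEvaluationPeriods": 1,
                "minFailingPeriodsToAlert": 1
              }
            }
          ]
        },
        "autoMitigate": true,
        "actions": {
          "actionGroups": []
        }
      }
    }
  ]
}
```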
azure-monitor | Resource Manager Alerts Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-metric.md | Title: Resource Manager template samples for metric alerts description: This article provides sample Resource Manager templates used to create metric alerts in Azure Monitor.-+ Previously updated : 10/31/2022 Last updated : 02/16/2024 |
azure-monitor | Tutorial Log Alert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/tutorial-log-alert.md | Title: Tutorial - Create a log query alert for an Azure resource -description: Tutorial to create a log query alert for an Azure resource. + Title: Tutorial - Create a log search alert for an Azure resource +description: Tutorial to create a log search alert for an Azure resource. Last updated 11/07/2023 -# Tutorial: Create a log query alert for an Azure resource +# Tutorial: Create a log search alert for an Azure resource -Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. Log query alert rules create an alert when a log query returns a particular result. For example, receive an alert when a particular event is created on a virtual machine, or send a warning when excessive anonymous requests are made to a storage account. +Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. Log search alert rules create an alert when a log query returns a particular result. For example, receive an alert when a particular event is created on a virtual machine, or send a warning when excessive anonymous requests are made to a storage account. In this tutorial, you learn how to: > [!div class="checklist"] > * Access prebuilt log queries designed to support alert rules for different kinds of resources-> * Create a log query alert rule +> * Create a log search alert rule > * Create an action group to define notification details - ## Prerequisites -To complete this tutorial you need the following: +To complete this tutorial, you need the following: - An Azure resource to monitor. You can use any resource in your Azure subscription that supports diagnostic settings. To determine whether a resource supports diagnostic settings, go to its menu in the Azure portal and verify that there's a **Diagnostic settings** option in the **Monitoring** section of the menu. - If you're using any Azure resource other than a virtual machine: - A diagnostic setting to send the resource logs from your Azure resource to a Log Analytics workspace. See [Tutorial: Create Log Analytics workspace in Azure Monitor](../essentials/tutorial-resource-logs.md). If you're using any Azure resource other than a virtual machine: If you're using an Azure virtual machine: - A data collection rule to send guest logs and metrics to a Log Analytics workspace. See [Tutorial: Collect guest logs and metrics from Azure virtual machine](../vm/tutorial-monitor-vm-guest.md).-- - ## Select a log query and verify results -Data is retrieved from a Log Analytics workspace using a log query written in Kusto Query Language (KQL). Insights and solutions in Azure Monitor will provide log queries to retrieve data for a particular service, but you can work directly with log queries and their results in the Azure portal with Log Analytics. +## Select a log query and verify results ++Data is retrieved from a Log Analytics workspace using a log query written in Kusto Query Language (KQL). Insights and solutions in Azure Monitor provide log queries to retrieve data for a particular service, but you can work directly with log queries and their results in the Azure portal with Log Analytics. -Select **Logs** from your resource's menu. Log Analytics opens with the **Queries** window that includes prebuilt queries for your **Resource type**. Select **Alerts** to view queries specifically designed for alert rules. 
+Select **Logs** from your resource's menu. Log Analytics opens with the **Queries** window that includes prebuilt queries for your **Resource type**. Select **Alerts** to view queries designed for alert rules. > [!NOTE] > If the **Queries** window doesn't open, click **Queries** in the top right. :::image type="content" source="media/tutorial-log-alert/queries.png" lightbox="media/tutorial-log-alert/queries.png"alt-text="Log Analytics with queries window"::: -Select a query and click **Run** to load it in the query editor and return results. You may want to modify the query and run it again. For example, the **Show anonymous requests** query for storage accounts is shown below. You may want to modify the **AuthenticationType** or filter on a different column. +Select a query and click **Run** to load it in the query editor and return results. You may want to modify the query and run it again. For example, the **Show anonymous requests** query for storage accounts is shown in the following screenshot. You may want to modify the **AuthenticationType** or filter on a different column. :::image type="content" source="media/tutorial-log-alert/query-results.png" lightbox="media/tutorial-log-alert/query-results.png"alt-text="Query results"::: - ## Create alert rule-Once you verify your query, you can create the alert rule. Select **New alert rule** to create a new alert rule based on the current log query. The **Scope** will already be set to the current resource. You don't need to change this value. ++Once you verify your query, you can create the alert rule. Select **New alert rule** to create a new alert rule based on the current log query. The **Scope** is already set to the current resource. You don't need to change this value. :::image type="content" source="media/tutorial-log-alert/create-alert-rule.png" lightbox="media/tutorial-log-alert/create-alert-rule.png"alt-text="Create alert rule":::+ ## Configure condition -On the **Condition** tab, the **Log query** will already be filled in. The **Measurement** section defines how the records from the log query will be measured. If the query doesn't perform a summary, then the only option will be to **Count** the number of **Table rows**. If the query includes one or more summarized columns, then you'll have the option to use number of **Table rows** or a calculation based on any of the summarized columns. **Aggregation granularity** defines the time interval over which the collected values are aggregated. For example, if the aggregation granularity is set to 5 minutes, the alert rule will evaluate the data aggregated over the last 5 minutes. If the aggregation granularity is set to 15 minutes, the alert rule will evaluate the data aggregated over the last 15 minutes. It is important to choose the right aggregation granularity for your alert rule, as it can affect the accuracy of the alert. +On the **Condition** tab, the **Log query** is already filled in. The **Measurement** section defines how the records from the log query are measured. If the query doesn't perform a summary, then the only option is to **Count** the number of **Table rows**. If the query includes one or more summarized columns, then you have the option to use the number of **Table rows** or a calculation based on any of the summarized columns. **Aggregation granularity** defines the time interval over which the collected values are aggregated. 
For example, if the aggregation granularity is set to 5 minutes, the alert rule evaluates the data aggregated over the last 5 minutes. If the aggregation granularity is set to 15 minutes, the alert rule evaluates the data aggregated over the last 15 minutes. It is important to choose the right aggregation granularity for your alert rule, as it can affect the accuracy of the alert. :::image type="content" source="media/tutorial-log-alert/alert-rule-condition.png" lightbox="media/tutorial-log-alert/alert-rule-condition.png"alt-text="Alert rule condition"::: ### Configure dimensions+ **Split by dimensions** allows you to create separate alerts for different resources. This setting is useful when you're creating an alert rule that applies to multiple resources. With the scope set to a single resource, this setting typically isn't used. :::image type="content" source="media/tutorial-log-alert/alert-rule-dimensions.png" lightbox="media/tutorial-log-alert/alert-rule-dimensions.png"alt-text="Alert rule dimensions"::: -If you need a certain dimension(s) included in the alert notification email, you can specify a dimension (e.g. "Computer"), the alert notification email will include the computer name that triggered the alert. The alerting engine uses the alert query to determine the available dimensions. If you do not see the dimension you want in the drop-down list for the "Dimension name", it is because the alert query does not expose that column in the results. You can easily add the dimensions you want by adding a Project line to your query that includes the columns you want to use. You can also use the Summarize line to add additional columns to the query results. +If you need certain dimensions included in the alert notification email, you can specify a dimension (for example, "Computer"); the alert notification email will then include the computer name that triggered the alert. The alerting engine uses the alert query to determine the available dimensions. If you do not see the dimension you want in the drop-down list for the "Dimension name", it is because the alert query does not expose that column in the results. You can easily add the dimensions you want by adding a Project line to your query that includes the columns you want to use. You can also use the Summarize line to add more columns to the query results. :::image type="content" source="media/tutorial-log-alert/alert-rule-condition-2.png" lightbox="media/tutorial-log-alert/alert-rule-condition-2.png" alt-text="Screenshot showing the Alert rule dimensions with a dimension called Computer set."::: ## Configure alert logic+ In the alert logic, configure the **Operator** and **Threshold value** to compare to the value returned from the measurement. An alert is created when this condition is true. Select a value for **Frequency of evaluation**, which defines how often the log query is run and evaluated. The cost for the alert rule increases with a lower frequency. When you select a frequency, the estimated monthly cost is displayed in addition to a preview of the query results over a time period. -For example, if the measurement is **Table rows**, the alert logic may be **Greater than 0** indicating that at least one record was returned. If the measurement is a columns value, then the logic may need to be greater than or less than a particular threshold value. In the example below, the log query is looking for anonymous requests to a storage account. If an anonymous request has been made, then we should trigger an alert. 
In this case, a single row returned would trigger the alert, so the alert logic should be **Greater than 0**. +For example, if the measurement is **Table rows**, the alert logic may be **Greater than 0**, indicating that at least one record was returned. If the measurement is a column's value, then the logic may need to be greater than or less than a particular threshold value. In the following example, the log query is looking for anonymous requests to a storage account. If an anonymous request is made, then we should trigger an alert. In this case, a single row returned would trigger the alert, so the alert logic should be **Greater than 0**. A sketch of this kind of query follows this entry. :::image type="content" source="media/tutorial-log-alert/alert-rule-alert-logic.png" lightbox="media/tutorial-log-alert/alert-rule-alert-logic.png"alt-text="Alert logic"::: -- ## Configure actions [!INCLUDE [Action groups](../../../includes/azure-monitor-tutorial-action-group.md)] Click **Create alert rule** to create the alert rule. ## View the alert [!INCLUDE [View alert](../../../includes/azure-monitor-tutorial-view-alert.md)] - ## Next steps-Now that you've learned how to create a log query alert for an Azure resource, have a look at workbooks for creating interactive visualizations of monitoring data. ++Now that you've learned how to create a log search alert for an Azure resource, have a look at workbooks for creating interactive visualizations of monitoring data. > [!div class="nextstepaction"] > [Azure Monitor Workbooks](../visualize/workbooks-overview.md) |
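The following KQL sketch approximates the kind of anonymous-request query this tutorial describes. It's illustrative only: the `StorageBlobLogs` table and the projected columns are assumptions about a storage account whose resource logs flow to the workspace, not the tutorial's exact prebuilt query.

```kusto
// Illustrative sketch (assumes StorageBlobLogs resource logs are sent to the workspace).
// Any returned row represents an anonymous request, so an alert rule measuring
// "Table rows" with the logic "Greater than 0" fires when at least one row comes back.
StorageBlobLogs
| where TimeGenerated > ago(1h)
| where AuthenticationType == "Anonymous"
| project TimeGenerated, AccountName, OperationName, CallerIpAddress, Uri
```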
azure-monitor | Availability Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-alerts.md | A custom alert rule offers higher values for the aggregation period (up to 24 ho 1. The **Configure alerts** option from the menu takes you to the new experience where you can select specific tests or locations on which to set up alert rules. You can also configure the action groups for this alert rule here. -- **Alert on custom analytics queries**: By using the [new unified alerts](../alerts/alerts-overview.md), you can alert on [custom log queries](../alerts/alerts-unified-log.md). With custom queries, you can alert on any arbitrary condition that helps you get the most reliable signal of availability issues. It's also applicable if you're sending custom availability results by using the TrackAvailability SDK.+- **Alert on custom analytics queries**: By using the [new unified alerts](../alerts/alerts-overview.md), you can alert on [custom log queries](../alerts/alerts-types.md#log-alerts). With custom queries, you can alert on any arbitrary condition that helps you get the most reliable signal of availability issues. It's also applicable if you're sending custom availability results by using the TrackAvailability SDK. The metrics on availability data include any custom availability results you might be submitting by calling the TrackAvailability SDK. You can use the alerting on metrics support to alert on custom availability results. |
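As an illustration of alerting on a custom availability query, here's a hedged KQL sketch. It isn't from the article; it assumes a workspace-based Application Insights resource where availability results, including those submitted with the TrackAvailability SDK, land in the `AppAvailabilityResults` table.

```kusto
// Hedged sketch: count failed availability results per test and location.
// Assumes workspace-based Application Insights (AppAvailabilityResults table).
AppAvailabilityResults
| where TimeGenerated > ago(30m)
| summarize FailedCount = countif(Success == false) by Name, Location, bin(TimeGenerated, 5m)
| where FailedCount > 0
```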
azure-monitor | Convert Classic Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md | From within the Application Insights resource pane, select **Properties** > **Ch This section provides answers to common questions. +### What will happen if I don't migrate my Application Insights classic resource to a workspace-based resource? ++Starting in May 2024, Microsoft will begin an automatic phased migration of classic resources to workspace-based resources, and this migration will span several months. We can't provide approximate dates for when specific resources, subscriptions, or regions will be migrated. ++We strongly encourage manual migration to workspace-based resources, which is initiated by selecting the deprecation notice banner in the classic Application Insights resource Overview pane of the Azure portal. This process typically involves a single step of choosing which Log Analytics workspace will be used to store your application data. If you use continuous export, you'll need to additionally migrate to diagnostic settings or disable the feature first. ++If you don't wish to have your classic resource automatically migrated to a workspace-based resource, you may delete or manually migrate the resource. + ### Is there any implication on the cost from migration? There's usually no difference, with a couple of exceptions: If you're using Terraform to manage your Azure resources, it's important to use To avoid this issue, make sure to use the latest version of the Terraform [azurerm provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest), version 3.89 or higher, which performs the proper migration steps by issuing the appropriate ARM call to upgrade the App Insights classic resource to a workspace-based resource while preserving all the old data and connection string/instrumentation key values. ### Can I still use the old API to create Application Insights resources programmatically?-Yes, calls to the old API for creating Application Insights resources continue to work as before. The old API version doesn't include a reference to the Log Analytics resource. However, when you trigger a legacy API call, it automatically creates a resource and the required association between Application Insights and Log Analytics. ++For backwards compatibility, calls to the old API for creating Application Insights resources will continue to work. Each of these calls will eventually create both a workspace-based Application Insights resource and a Log Analytics workspace to store the data. ++We strongly encourage updating to the [new API](https://learn.microsoft.com/azure/azure-monitor/app/resource-manager-app-resource) for better control over resource creation. ### Should I migrate diagnostic settings on classic Application Insights before moving to a workspace-based AI? Yes, we recommend migrating diagnostic settings on classic Application Insights resources before transitioning to a workspace-based Application Insights. It ensures continuity and compatibility of your diagnostic settings. -### What is the migration process for Application Insights resources? -The migration of Application Insights resources to the new format isn't instantaneous on the day of deprecation. Instead, it occurs over time. We'll gradually migrate all Application Insights resources, ensuring a smooth transition with minimal disruption to your services. 
- ## Troubleshooting This section offers troubleshooting tips for common issues. |
azure-monitor | Azure Monitor Rest Api Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-rest-api-index.md | Organized by subject area. | [Metric alerts](/rest/api/monitor/metric-alerts) | Manages and lists [metric alert rules](./alerts/alerts-overview.md). | | [Metric alerts status](/rest/api/monitor/metric-alerts-status) | Lists the status of [metric alert rules](./alerts/alerts-overview.md). | | [Prometheus rule groups](/rest/api/monitor/prometheus-rule-groups) | Manages and lists [Prometheus rule groups](./essentials/prometheus-rule-groups.md) (alert rules and recording rules). |-| [Scheduled query rules - 2023-03-15 (preview)](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2023-03-15-preview&preserve-view=true) | Manages and lists [log alert rules](./alerts/alerts-types.md#log-alerts). | -| [Scheduled query rules - 2018-04-16](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2018-04-16&preserve-view=true) | Manages and lists [log alert rules](./alerts/alerts-types.md#log-alerts). | -| [Scheduled query rules - 2021-08-01](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2021-08-01&preserve-view=true) | Manages and lists [log alert rules](./alerts/alerts-types.md#log-alerts). | +| [Scheduled query rules - 2023-03-15 (preview)](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2023-03-15-preview&preserve-view=true) | Manages and lists [log search alert rules](./alerts/alerts-types.md#log-alerts). | +| [Scheduled query rules - 2018-04-16](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2018-04-16&preserve-view=true) | Manages and lists [log search alert rules](./alerts/alerts-types.md#log-alerts). | +| [Scheduled query rules - 2021-08-01](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2021-08-01&preserve-view=true) | Manages and lists [log search alert rules](./alerts/alerts-types.md#log-alerts). | | [Smart Detector alert rules](/rest/api/monitor/smart-detector-alert-rules) | Manages and lists [smart detection alert rules](./alerts/alerts-types.md#smart-detection-alerts). | | ***Application Insights*** | | | [Components](/rest/api/application-insights/components) | Enables you to manage components that contain Application Insights data. | |
azure-monitor | Best Practices Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md | This table describes Azure Monitor features that provide analysis of collected d |||--| |Overview page|Most Azure services have an **Overview** page in the Azure portal that includes a **Monitor** section with charts that show recent critical metrics. This information is intended for owners of individual services to quickly assess the performance of the resource. |This page is based on platform metrics that are collected automatically. No configuration is required. | |[Metrics Explorer](essentials/metrics-getting-started.md)|You can use Metrics Explorer to interactively work with metric data and create metric alerts. You need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. |- Once data collection is configured, no other configuration is required.<br>- Platform metrics for Azure resources are automatically available.<br>- Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to the virtual machine.<br>- Application metrics are available after Application Insights is configured. |-|[Log Analytics](logs/log-analytics-overview.md)|With Log Analytics, you can create log queries to interactively work with log data and create log query alerts.| Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization. | +|[Log Analytics](logs/log-analytics-overview.md)|With Log Analytics, you can create log queries to interactively work with log data and create log search alerts.| Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization. | ## Built-in visualization tools |
azure-monitor | Best Practices Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md | The following table shows the configuration steps required to collect all availa ### Collect tenant and subscription logs -The [Microsoft Entra logs](../active-directory/reports-monitoring/overview-reports.md) for your tenant and the [activity log](essentials/platform-logs-overview.md) for your subscription are collected automatically. When you send them to a Log Analytics workspace, you can analyze these events with other log data by using log queries in Log Analytics. You can also create log query alerts, which are the only way to alert on Microsoft Entra logs and provide more complex logic than activity log alerts. +The [Microsoft Entra logs](../active-directory/reports-monitoring/overview-reports.md) for your tenant and the [activity log](essentials/platform-logs-overview.md) for your subscription are collected automatically. When you send them to a Log Analytics workspace, you can analyze these events with other log data by using log queries in Log Analytics. You can also create log search alerts, which are the only way to alert on Microsoft Entra logs and provide more complex logic than activity log alerts. There's no cost for sending the activity log to a workspace, but there's a data ingestion and retention charge for Microsoft Entra logs. |
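To make the point about alerting on Microsoft Entra logs concrete, here's a hedged KQL sketch. It isn't taken from the article; it assumes the `SigninLogs` table is being routed to the Log Analytics workspace through diagnostic settings.

```kusto
// Hedged sketch: a log search alert query over Microsoft Entra sign-in logs.
// Assumes SigninLogs is exported to the workspace via diagnostic settings.
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"   // a nonzero ResultType indicates a failed sign-in
| summarize FailedSignIns = count() by UserPrincipalName, bin(TimeGenerated, 15m)
```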
azure-monitor | Container Insights Custom Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-custom-metrics.md | Title: Custom metrics collected by Container insights description: Describes the custom metrics collected for a Kubernetes cluster by Container insights in Azure Monitor. Previously updated : 09/28/2022 Last updated : 02/15/2024 |
azure-monitor | Container Insights Log Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-alerts.md | Title: Log alerts from Container insights | Microsoft Docs -description: This article describes how to create custom log alerts for memory and CPU utilization from Container insights. + Title: Log search alerts from Container insights | Microsoft Docs +description: This article describes how to create custom log search alerts for memory and CPU utilization from Container insights. Last updated 08/29/2022 -# Create log alerts from Container insights +# Create log search alerts from Container insights Container insights monitors the performance of container workloads that are deployed to managed or self-managed Kubernetes clusters. To alert on what matters, this article describes how to create log-based alerts for the following situations with Azure Kubernetes Service (AKS) clusters: Container insights monitors the performance of container workloads that are depl - `Failed`, `Pending`, `Unknown`, `Running`, or `Succeeded` pod-phase counts - When free disk space on cluster nodes exceeds a threshold -To alert for high CPU or memory utilization, or low free disk space on cluster nodes, use the queries that are provided to create a metric alert or a metric measurement alert. Metric alerts have lower latency than log alerts, but log alerts provide advanced querying and greater sophistication. Log alert queries compare a datetime to the present by using the `now` operator and going back one hour. (Container insights stores all dates in Coordinated Universal Time [UTC] format.) +To alert for high CPU or memory utilization, or low free disk space on cluster nodes, use the queries that are provided to create a metric alert or a metric measurement alert. Metric alerts have lower latency than log search alerts, but log search alerts provide advanced querying and greater sophistication. Log search alert queries compare a datetime to the present by using the `now` operator and going back one hour. (Container insights stores all dates in Coordinated Universal Time [UTC] format.) > [!IMPORTANT] > Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Before you create alert rules, see the "Alert rules" section in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). -If you aren't familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about alerts that use log queries, see [Log alerts in Azure Monitor](../alerts/alerts-unified-log.md). For more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md). +If you aren't familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about alerts that use log queries, see [Log search alerts in Azure Monitor](../alerts/alerts-types.md#log-alerts). For more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md). 
## Log query measurements-[Log alerts](../alerts/alerts-unified-log.md) can measure two different things, which can be used to monitor virtual machines in different scenarios: +[Log search alerts](../alerts/alerts-types.md#log-alerts) can measure two different things, which can be used to monitor virtual machines in different scenarios: -- [Result count](../alerts/alerts-unified-log.md#result-count): Counts the number of rows returned by the query and can be used to work with events such as Windows event logs, Syslog, and application exceptions.-- [Calculation of a value](../alerts/alerts-unified-log.md#calculation-of-a-value): Makes a calculation based on a numeric column and can be used to include any number of resources. An example is CPU percentage.+- [Result count](../alerts/alerts-types.md#log-alerts): Counts the number of rows returned by the query and can be used to work with events such as Windows event logs, Syslog, and application exceptions. +- [Calculation of a value](../alerts/alerts-types.md#log-alerts): Makes a calculation based on a numeric column and can be used to include any number of resources. An example is CPU percentage. ### Target resources and dimensions To create resource-centric alerts at scale for a subscription or resource group, You might also decide not to split when you want a condition on multiple resources in the scope. For example, you might want to create an alert if at least five machines in the resource group scope have CPU usage over 80%. You might want to see a list of the alerts by affected computer. You can use a custom workbook that uses a custom [resource graph](../../governance/resource-graph/overview.md) to provide this view. Use the following query to display alerts, and use the data source **Azure Resource Graph** in the workbook. -## Create a log query alert rule -To create a log query alert rule by using the portal, see [this example of a log query alert](../alerts/tutorial-log-alert.md), which provides a complete walkthrough. You can use these same processes to create alert rules for AKS clusters by using queries similar to the ones in this article. +## Create a log search alert rule +To create a log search alert rule by using the portal, see [this example of a log search alert](../alerts/tutorial-log-alert.md), which provides a complete walkthrough. You can use these same processes to create alert rules for AKS clusters by using queries similar to the ones in this article. -To create a query alert rule by using an Azure Resource Manager (ARM) template, see [Resource Manager template samples for log alert rules in Azure Monitor](../alerts/resource-manager-alerts-log.md). You can use these same processes to create ARM templates for the log queries in this article. +To create a query alert rule by using an Azure Resource Manager (ARM) template, see [Resource Manager template samples for log search alert rules in Azure Monitor](../alerts/resource-manager-alerts-log.md). You can use these same processes to create ARM templates for the log queries in this article. ## Resource utilization |
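As a concrete, hedged example of the "result count" measurement described in this entry, the sketch below counts pods that aren't in a `Running` or `Succeeded` phase. It assumes Container insights populates the `KubePodInventory` table and isn't one of the article's own queries.

```kusto
// Hedged sketch (assumes Container insights writes to KubePodInventory).
// A "result count" alert on this query fires when any pod is in an unexpected phase.
KubePodInventory
| where TimeGenerated > ago(30m)
| summarize arg_max(TimeGenerated, PodStatus) by ClusterName, Namespace, Name
| where PodStatus !in ("Running", "Succeeded")
```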
azure-monitor | Container Insights Metric Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md | Source code for the recommended alerts can be found in [GitHub](https://aka.ms/a | Completed job count | Completed job count | Calculates number of jobs completed more than six hours ago. | 0 | > [!NOTE]-> The recommended alert rules in the Azure portal also include a log alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule isn't included with the Prometheus alert rules. +> The recommended alert rules in the Azure portal also include a log search alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule isn't included with the Prometheus alert rules. >-> You can create this rule on your own by creating a [log alert rule](../alerts/alerts-types.md#log-alerts) that uses the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`. +> You can create this rule on your own by creating a [log search alert rule](../alerts/alerts-types.md#log-alerts) that uses the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`. Common properties across all these alert rules include: |
azure-monitor | Container Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md | No. Container insights doesn't support collection of Kubernetes audit logs. **Does Container Insights support pod sandboxing?** Yes, Container Insights supports pod sandboxing through support for Kata Containers. See [Pod Sandboxing (preview) with Azure Kubernetes Service (AKS)](../../aks/use-pod-sandboxing.md). +**Is it possible for a single AKS cluster to use multiple Log Analytics workspaces in Container Insights?** +No. Container insights accepts only one Log Analytics workspace for each AKS cluster. + ## Next steps - See [Enable monitoring for Kubernetes clusters](kubernetes-monitoring-enable.md) to enable Managed Prometheus and Container insights on your cluster. |
azure-monitor | Container Insights Prometheus Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-logs.md | -This article describes how to send Prometheus metrics from your Kubernetes cluster monitored by Container insights to a Log Analytics workspace. Before you perform this configuration, you should first ensure that you're [scraping Prometheus metrics from your cluster using Azure Monitor managed service for Prometheus](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration), which is the recommended method for monitoring your clusters. Use the configuration described in this article only if you also want to send this same data to a Log Analytics workspace where you can analyze it using [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log-query.md). +This article describes how to send Prometheus metrics from your Kubernetes cluster monitored by Container insights to a Log Analytics workspace. Before you perform this configuration, you should first ensure that you're [scraping Prometheus metrics from your cluster using Azure Monitor managed service for Prometheus](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration), which is the recommended method for monitoring your clusters. Use the configuration described in this article only if you also want to send this same data to a Log Analytics workspace where you can analyze it using [log queries](../logs/log-query-overview.md) and [log search alerts](../alerts/alerts-log-query.md). This setup requires configuring the *monitoring addon* for the Azure Monitor agent, which is the same addon used by Container insights to send data to a Log Analytics workspace. It requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring the monitoring addon for the Azure Monitor agent used by Container insights as shown in the following diagram. |
azure-monitor | Container Insights Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md | Title: Troubleshoot Container insights | Microsoft Docs description: This article describes how you can troubleshoot and resolve issues with Container insights. Previously updated : 05/24/2022 Last updated : 02/15/2024 |
azure-monitor | Monitor Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md | The following table describes the different types of custom alert rules that you |:|:| | Prometheus alerts | [Prometheus alerts](../alerts/prometheus-alerts.md) are written in Prometheus Query Language (PromQL) and applied to Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). Recommended alerts already include the most common Prometheus alerts, and you can [create additional alert rules](../essentials/prometheus-rule-groups.md) as required. | | Metric alert rules | Metric alert rules use the same metric values as the Metrics explorer. In fact, you can create an alert rule directly from the metrics explorer with the data you're currently analyzing. Metric alert rules can be useful to alert on AKS performance using any of the values in [AKS data reference metrics](../../aks/monitor-aks-reference.md#metrics). |-| Log alert rules | Use log alert rules to generate an alert from the results of a log query. For more information, see [How to create log alerts from Container Insights](container-insights-log-alerts.md) and [How to query logs from Container Insights](container-insights-log-query.md). | +| Log search alert rules | Use log search alert rules to generate an alert from the results of a log query. For more information, see [How to create log search alerts from Container Insights](container-insights-log-alerts.md) and [How to query logs from Container Insights](container-insights-log-query.md). | #### Recommended alerts Start with a set of recommended Prometheus alerts from [Metric alert rules in Container insights (preview)](container-insights-metric-alerts.md#prometheus-alert-rules), which include the most common alerting conditions for a Kubernetes cluster. You can add more alert rules later as you identify additional alerting conditions. |
azure-monitor | Cost Estimate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-estimate.md | This section includes charges for alert rules. | Category | Description | |:|:| | Metric Signals Monitored | Number of [metrics alert rules](alerts/alerts-types.md#metric-alerts) and their time series. | -| Log Signals Monitored | Number of [log alert rules](alerts/alerts-types.md#log-alerts) and their frequency. | +| Log Signals Monitored | Number of [log search alert rules](alerts/alerts-types.md#log-alerts) and their frequency. | ## ITSM connector - ticket creation/update This section includes charges for ITSM events, which are sent in response to alerts being triggered. |
azure-monitor | Cost Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md | Several other features don't have a direct cost, but you instead pay for the ing | Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. | | Metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). | | Prometheus Metrics | Pricing for [Azure Monitor managed service for Prometheus](essentials/prometheus-metrics-overview.md) is based on [data samples ingested](containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) and [query samples processed](essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Data is retained for 18 months at no extra charge. |-| Alerts | Alerts are charged based on the type and number of [signals](alerts/alerts-overview.md) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log alerts](alerts/alerts-unified-log.md) configured for [at scale monitoring](alerts/alerts-unified-log.md#split-by-alert-dimensions), the cost will also depend on the number of time series created by the dimensions resulting from your query. | +| Alerts | Alerts are charged based on the type and number of [signals](alerts/alerts-overview.md) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log search alerts](alerts/alerts-types.md#log-alerts) configured for [at scale monitoring](alerts/alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1), the cost will also depend on the number of time series created by the dimensions resulting from your query. | | Web tests | There is a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests have been deprecated. |
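To illustrate how dimensions translate into time series (and therefore cost) for a log search alert rule, here's a hedged KQL sketch; the `Perf` table and the specific counters are assumptions rather than content from the article.

```kusto
// Hedged sketch: each distinct (Computer, CounterName) pair returned by the query
// becomes its own time series when the alert rule splits by those dimensions,
// and the number of time series factors into the cost of the rule.
Perf
| where CounterName in ("% Processor Time", "Available MBytes")
| summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), Computer, CounterName
```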
azure-monitor | Data Platform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md | Logs in Azure Monitor are stored in a Log Analytics workspace that's based on [A > >Azure Monitor Logs is a log data platform that collects Activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources. - You can work with [log queries](logs/log-query-overview.md) interactively with [Log Analytics](logs/log-query-overview.md) in the Azure portal. You can also add the results to an [Azure dashboard](app/overview-dashboard.md#create-custom-kpi-dashboards-using-application-insights) for visualization in combination with other data. You can create [log alerts](alerts/alerts-log.md), which will trigger an alert based on the results of a schedule query. + You can work with [log queries](logs/log-query-overview.md) interactively with [Log Analytics](logs/log-query-overview.md) in the Azure portal. You can also add the results to an [Azure dashboard](app/overview-dashboard.md#create-custom-kpi-dashboards-using-application-insights) for visualization in combination with other data. You can create [log search alerts](alerts/alerts-log.md), which will trigger an alert based on the results of a scheduled query. Read more about Azure Monitor Logs, including their sources of data, in [Logs in Azure Monitor](logs/data-platform-logs.md). |
azure-monitor | Activity Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md | You can also access activity log events by using the following methods: - Correlate activity log data with other monitoring data collected by Azure Monitor. - Consolidate log entries from multiple Azure subscriptions and tenants into one location for analysis together. - Use log queries to perform complex analysis and gain deep insights on activity log entries.-- Use log alerts with Activity entries for more complex alerting logic.+- Use log search alerts with Activity entries for more complex alerting logic. - Store activity log entries for longer than the activity log retention period. - Incur no data ingestion or retention charges for activity log data stored in a Log Analytics workspace. - The default retention period in Log Analytics is 90 days |
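As an example of the more complex alerting logic this row mentions, the following hedged KQL sketch queries the `AzureActivity` table; the specific operation and column names are assumptions for illustration, not content from the article.

```kusto
// Hedged sketch: alert when virtual machines are deleted, grouped by caller.
// Assumes the activity log is routed to a Log Analytics workspace (AzureActivity table).
AzureActivity
| where TimeGenerated > ago(1h)
| where OperationNameValue =~ "Microsoft.Compute/virtualMachines/delete"
| summarize Deletions = count() by Caller, bin(TimeGenerated, 15m)
```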
azure-monitor | Collect Custom Metrics Linux Telegraf | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md | Last updated 08/01/2023 # Collect custom metrics for a Linux VM with the InfluxData Telegraf agent +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. + This article explains how to deploy and configure the [InfluxData](https://www.influxdata.com/) Telegraf agent on a Linux virtual machine to send metrics to Azure Monitor. > [!NOTE] |
azure-monitor | Monitor Azure Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md | The **Activity log** menu item lets you view entries in the [activity log](../es The **Alerts** page shows you any recent alerts that were fired for the resource. Alerts proactively notify you when important conditions are found in your monitoring data and can use data from either Metrics or Logs. -To learn how to create alert rules and view alerts, see [Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md). +To learn how to create alert rules and view alerts, see [Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Create a log search alert for an Azure resource](../alerts/tutorial-log-alert.md). :::image type="content" source="media/monitor-azure-resource/alerts-view.png" lightbox="media/monitor-azure-resource/alerts-view.png" alt-text="Screenshot that shows the Alerts page."::: |
azure-monitor | Platform Logs Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/platform-logs-overview.md | Resource logs must have a diagnostic setting to be viewed. Create a [diagnostic | Destination | Description | |:|:|-| Log Analytics workspace | Analyze the logs of all your Azure resources together and take advantage of all the features available to [Azure Monitor Logs](../logs/data-platform-logs.md) including [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log.md). Pin the results of a log query to an Azure dashboard or include it in a workbook as part of an interactive report. | +| Log Analytics workspace | Analyze the logs of all your Azure resources together and take advantage of all the features available to [Azure Monitor Logs](../logs/data-platform-logs.md) including [log queries](../logs/log-query-overview.md) and [log search alerts](../alerts/alerts-log.md). Pin the results of a log query to an Azure dashboard or include it in a workbook as part of an interactive report. | | Event hub | Send platform log data outside of Azure, for example, to a third-party SIEM or custom telemetry platform via Event hubs | | Azure Storage | Archive the logs to Azure storage for audit or backup. | | [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Partner integrations are specialized integrations between Azure Monitor and non-Microsoft monitoring platforms. Partner integrations are especially useful when you're already using one of the supported partners. | |
azure-monitor | Prometheus Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-get-started.md | + + Title: Get started with Azure Monitor Managed Service for Prometheus +description: Get started with Azure Monitor managed service for Prometheus, which provides a Prometheus-compatible interface for storing and retrieving metric data. ++++ Last updated : 02/15/2024+++# Get Started with Azure Monitor managed service for Prometheus ++The only requirement to enable Azure Monitor managed service for Prometheus is to create an [Azure Monitor workspace](azure-monitor-workspace-overview.md), which is where Prometheus metrics are stored. Once this workspace is created, you can onboard services that collect Prometheus metrics. ++- To collect Prometheus metrics from your Kubernetes cluster, see [Enable monitoring for Kubernetes clusters](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana). +- To configure remote-write to collect data from your self-managed Prometheus server, see [Azure Monitor managed service for Prometheus remote write](./remote-write-prometheus.md). ++## Data sources ++Azure Monitor managed service for Prometheus can currently collect data from any of the following data sources: ++- Azure Kubernetes service (AKS) +- Azure Arc-enabled Kubernetes +- Any server or Kubernetes cluster running self-managed Prometheus using [remote-write](./remote-write-prometheus.md). ++## Next steps ++- [Learn more about Azure Monitor Workspace](./azure-monitor-workspace-overview.md) +- [Enable Azure Monitor managed service for Prometheus on your Kubernetes clusters](../containers/kubernetes-monitoring-enable.md). +- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md). +- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md). |
azure-monitor | Prometheus Metrics Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md | Azure Monitor managed service for Prometheus allows you to collect and analyze m Azure Monitor managed service for Prometheus can currently collect data from any of the following data sources: - Azure Kubernetes service (AKS)-- Any Kubernetes cluster running self-managed Prometheus using [remote-write](https://aka.ms/azureprometheus-promio-prw).-- Azure Arc-enabled Kubernetes +- Azure Arc-enabled Kubernetes +- Any server or Kubernetes cluster running self-managed Prometheus using [remote-write](./remote-write-prometheus.md). ## Enable The only requirement to enable Azure Monitor managed service for Prometheus is to create an [Azure Monitor workspace](azure-monitor-workspace-overview.md), which is where Prometheus metrics are stored. Once this workspace is created, you can onboard services that collect Prometheus metrics. - To collect Prometheus metrics from your Kubernetes cluster, see [Enable monitoring for Kubernetes clusters](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).-- To configure remote-write to collect data from your self-managed Prometheus server, see [Azure Monitor managed service for Prometheus remote write - managed identity](prometheus-remote-write-managed-identity.md).+- To configure remote-write to collect data from your self-managed Prometheus server, see [Azure Monitor managed service for Prometheus remote write](./remote-write-prometheus.md). ## Grafana integration The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace) so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards. |
azure-monitor | Remote Write Prometheus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/remote-write-prometheus.md | + + Title: Remote-write Prometheus metrics to Azure Monitor managed service for Prometheus +description: Describes how customers can configure remote-write to send data from self-managed Prometheus running in any environment to Azure Monitor managed service for Prometheus ++ Last updated : 02/12/2024+++# Prometheus Remote-Write to Azure Monitor Workspace ++Azure Monitor managed service for Prometheus is intended to be a replacement for self-managed Prometheus, so you don't need to manage a Prometheus server in your Kubernetes clusters. You may also choose to use the managed service to centralize data from self-managed Prometheus clusters for long-term data retention and to create a centralized view across your clusters. +If you're using self-managed Prometheus, you can use [remote_write](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) to send data from your self-managed Prometheus into the Azure managed service. ++To send data from self-managed Prometheus running in your environment to an Azure Monitor workspace, follow the steps in this document. ++## Choose the right solution for remote-write ++Based on where your self-managed Prometheus is running, choose from the following options: ++- **Self-managed Prometheus running on Azure Kubernetes Service (AKS) or Azure VM/VMSS**: Follow the steps in this documentation for configuring remote-write in Prometheus using User-assigned managed identity authentication. +- **Self-managed Prometheus running on non-Azure environments**: Azure Monitor managed service for Prometheus has a managed offering for supported [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). However, if you wish to send data from self-managed Prometheus running on non-Azure or on-premises environments, consider the following options: + - Onboard supported Kubernetes or VM/VMSS to [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) / [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), which allows you to manage and configure them in Azure. Then follow the steps in this documentation for configuring remote-write in Prometheus using User-assigned managed identity authentication. + - For all other scenarios, follow the steps in this documentation for configuring remote-write in Prometheus using Azure Entra application. ++> [!NOTE] +> Currently, user-assigned managed identity and Azure Entra application are the authentication methods supported for remote-writing to Azure Monitor Workspace. If you are using other authentication methods and running self-managed Prometheus on **Kubernetes**, Azure Monitor provides a reverse proxy container that provides an abstraction for ingestion and authentication for Prometheus remote-write metrics. Please see [remote-write from Kubernetes to Azure Monitor Managed Service for Prometheus](../containers/prometheus-remote-write.md) to use this reverse proxy container. ++## Prerequisites ++- You must have [self-managed Prometheus](https://prometheus.io/) running in your environment. Supported versions are: + - For managed identity, versions greater than v2.45 + - For Azure Entra, versions greater than v2.48 +- Azure Monitor managed service for Prometheus stores metrics in [Azure Monitor workspace](./azure-monitor-workspace-overview.md). To proceed, you need to have an Azure Monitor Workspace instance. 
[Create a new workspace](./azure-monitor-workspace-manage.md#create-an-azure-monitor-workspace) if you don't already have one. ++## Configure Remote-Write to send data to Azure Monitor Workspace ++You can enable remote-write by configuring one or more remote-write sections in the Prometheus configuration file. Details about the Prometheus remote write setting can be found [here](https://prometheus.io/docs/practices/remote_write/). ++The **remote_write** section in the Prometheus configuration file defines one or more remote-write configurations, each of which has a mandatory url parameter and several optional parameters. The url parameter specifies the HTTP URL of the remote endpoint that implements the Prometheus remote-write protocol. In this case, the URL is the metrics ingestion endpoint for your Azure Monitor Workspace. The optional parameters can be used to customize the behavior of the remote-write client, such as authentication, compression, retry, queue, or relabeling settings. For a full list of the available parameters and their meanings, see the Prometheus documentation: [https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). ++To send data to your Azure Monitor Workspace, you will need the following information: ++- **Remote-write URL**: This is the metrics ingestion endpoint of the Azure Monitor workspace. To find this, go to the Overview page of your Azure Monitor Workspace instance in Azure portal, and look for the Metrics ingestion endpoint property. ++ :::image type="content" source="media/azure-monitor-workspace-overview/remote-write-ingestion-endpoint.png" lightbox="media/azure-monitor-workspace-overview/remote-write-ingestion-endpoint.png" alt-text="Screenshot of Azure Monitor workspaces menu and ingestion endpoint."::: ++- **Authentication settings**: Currently **User-assigned managed identity** and **Azure Entra application** are the authentication methods supported for remote-writing to Azure Monitor Workspace. Note that for Azure Entra application, client secrets have an expiration date and it is the responsibility of the user to keep secrets valid. ++### User-assigned managed identity ++1. Create a managed identity and then add a role assignment for the managed identity to access your environment. For details, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). +1. Assign the Monitoring Metrics Publisher role on the workspace data collection rule to the managed identity: + 1. The managed identity must be assigned the **Monitoring Metrics Publisher** role on the data collection rule that is associated with your Azure Monitor Workspace. + 1. On the resource menu for your Azure Monitor workspace, select Overview. Select the link for Data collection rule: ++ :::image type="content" source="media/azure-monitor-workspace-overview/remote-write-dcr.png" lightbox="media/azure-monitor-workspace-overview/remote-write-dcr.png" alt-text="Screenshot of how to navigate to the data collection rule."::: ++ 1. On the resource menu for the data collection rule, select **Access control (IAM)**. Select Add, and then select Add role assignment. + 1. Select the **Monitoring Metrics Publisher role**, and then select **Next**. + 1. Select Managed Identity, and then choose Select members. 
Select the subscription that contains the user-assigned identity, and then select User-assigned managed identity. Select the user-assigned identity that you want to use, and then choose Select. + 1. To complete the role assignment, select **Review + assign**. ++### Azure Entra application ++The process to set up Prometheus remote write for an application by using Microsoft Entra authentication involves completing the following tasks: ++1. Complete the steps to [register an application with Microsoft Entra ID](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) and create a service principal. ++1. Get the client ID and secret ID of the Microsoft Entra application. In the Azure portal, go to the **Microsoft Entra ID** menu and select **App registrations**. +1. In the list of applications, copy the value for **Application (client) ID** for the registered application. +++1. Open the **Certificates and Secrets** page of the application, and click on **+ New client secret** to create a new Secret. Copy the value of the secret securely. ++> [!WARNING] +> Client secrets have an expiration date. It's the responsibility of the user to keep them valid. ++1. Assign the **Monitoring Metrics Publisher** role on the workspace data collection rule to the application. The application must be assigned the Monitoring Metrics Publisher role on the data collection rule that is associated with your Azure Monitor workspace. +1. On the resource menu for your Azure Monitor workspace, select **Overview**. For **Data collection rule**, select the link. ++ :::image type="content" source="../containers/media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png" alt-text="Screenshot that shows the data collection rule that's used by Azure Monitor workspace." lightbox="../containers/media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png"::: ++1. On the resource menu for the data collection rule, select **Access control (IAM)**. ++1. Select **Add**, and then select **Add role assignment**. ++ :::image type="content" source="../containers/media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png" alt-text="Screenshot that shows adding a role assignment on Access control pages." lightbox="../containers/media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png"::: ++1. Select the **Monitoring Metrics Publisher** role, and then select **Next**. ++ :::image type="content" source="../containers/media/prometheus-remote-write-managed-identity/add-role-assignment.png" alt-text="Screenshot that shows a list of role assignments." lightbox="../containers/media/prometheus-remote-write-managed-identity/add-role-assignment.png"::: ++1. Select **User, group, or service principal**, and then choose **Select members**. Select the application that you created, and then choose **Select**. ++ :::image type="content" source="../containers/media/prometheus-remote-write-active-directory/select-application.png" alt-text="Screenshot that shows selecting the application." lightbox="../containers/media/prometheus-remote-write-active-directory/select-application.png"::: ++1. To complete the role assignment, select **Review + assign**. 
++## Configure remote-write ++Now that you have the required information, configure the following section in the prometheus.yml configuration file of your self-managed Prometheus instance to send data to your Azure Monitor workspace. ++```yaml +remote_write: + - url: "<<Metrics Ingestion Endpoint for your Azure Monitor Workspace>>" +  # AzureAD configuration. +  # The Azure Cloud. Options are 'AzurePublic', 'AzureChina', or 'AzureGovernment'. +  azuread: +   cloud: 'AzurePublic' +   managed_identity: +    client_id: "<<client-id of the managed identity>>" +   oauth: +    client_id: "<<client-id of the app>>" +    client_secret: "<<client secret>>" +    tenant_id: "<<tenant id of Azure subscription>>" +``` ++Replace the values in the YAML with the values that you copied in the previous steps. If you're using managed identity authentication, omit the **managed_identity** counterpart, the **oauth** section, from the YAML. Similarly, if you're using a Microsoft Entra application as the authentication method, omit the **managed_identity** section. ++After editing the configuration file, reload or restart Prometheus to apply the changes. ++## Verify that remote-write is set up correctly ++Use the following methods to verify that Prometheus data is being sent into your Azure Monitor workspace. ++### PromQL queries ++Use PromQL queries in Grafana and verify that the results return the expected data. See [getting Grafana set up with Managed Prometheus](../essentials/prometheus-grafana.md) to configure Grafana. ++### Prometheus explorer in Azure Monitor workspace ++Go to your Azure Monitor workspace in the Azure portal and select **Prometheus explorer** to query the metrics that you're expecting from the self-managed Prometheus environment. ++## Troubleshoot remote write ++You can look at a few remote-write metrics that can help you understand possible issues. Lists of these metrics can be found [here](https://github.com/prometheus/prometheus/blob/v2.26.0/storage/remote/queue_manager.go#L76-L223) and [here](https://github.com/prometheus/prometheus/blob/v2.26.0/tsdb/wal/watcher.go#L88-L136). ++For example, a steady, high rate for *prometheus_remote_storage_retried_samples_total* could indicate problems with the remote setup; contact support if such issues arise. ++### Hitting your ingestion quota limit ++With remote write, you typically get started by using the remote-write endpoint shown on the Azure Monitor workspace overview page. Behind the scenes, this uses a system data collection rule (DCR) and system data collection endpoint (DCE). These resources have an ingestion limit covered in the [Azure Monitor service limits](../service-limits.md#prometheus-metrics) document. You might hit these limits if you set up remote write for several clusters that all send data into the same endpoint in the same Azure Monitor workspace. If this is the case, you can [create additional DCRs and DCEs](https://aka.ms/prometheus/remotewrite/dcrartifacts) and use them to spread the ingestion load across a few ingestion endpoints. ++The INGESTION-URL uses the following format: +https\://\<**Metrics-Ingestion-URL**>/dataCollectionRules/\<**DCR-Immutable-ID**>/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview ++**Metrics-Ingestion-URL**: can be obtained by viewing the DCE JSON body with API version 2021-09-01-preview or newer. See the screenshot below for reference.
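If you'd rather use the command line than the portal JSON view to find this value, a sketch like the following can return the metrics ingestion endpoint of a DCE. The names `myCollectionEndpoint` and `myResourceGroup` are placeholders, and the sketch assumes the Azure CLI `monitor-control-service` extension is available; the exact property path can vary by API version, so inspect the full JSON output if the query returns nothing.

```azurecli
# Placeholder names; replace them with your own data collection endpoint and resource group.
az monitor data-collection endpoint show \
  --name "myCollectionEndpoint" \
  --resource-group "myResourceGroup" \
  --query "metricsIngestion.endpoint" \
  --output tsv
```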
+++**DCR-Immutable-ID**: can be obtained by viewing the DCR JSON body or by running the following command in the Azure CLI: ++```azurecli +az monitor data-collection rule show --name "myCollectionRule" --resource-group "myResourceGroup" +``` ++## Next steps ++- [Learn more about Azure Monitor managed service for Prometheus](./prometheus-metrics-overview.md). +- [Learn more about the Azure Monitor reverse proxy sidecar for remote write from self-managed Prometheus running on Kubernetes](../containers/prometheus-remote-write.md) |
azure-monitor | Resource Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md | Azure resource logs are [platform logs](../essentials/platform-logs-overview.md) - Correlate resource log data with other monitoring data collected by Azure Monitor. - Consolidate log entries from multiple Azure resources, subscriptions, and tenants into one location for analysis together. - Use log queries to perform complex analysis and gain deep insights on log data.-- Use log alerts with complex alerting logic.+- Use log search alerts with complex alerting logic. [Create a diagnostic setting](../essentials/diagnostic-settings.md) to send resource logs to a Log Analytics workspace. This data is stored in tables as described in [Structure of Azure Monitor Logs](../logs/data-platform-logs.md). The tables used by resource logs depend on what type of collection the resource is using: |
azure-monitor | Tutorial Resource Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-resource-logs.md | Browse through the available queries. Identify one to run and select **Run**. Th :::image type="content" source="media/tutorial-resource-logs/query-results.png" lightbox="media/tutorial-resource-logs/query-results.png"alt-text="Screenshot that shows the results of a sample log query."::: ## Next steps-Now that you're collecting resource logs, create a log query alert to be proactively notified when interesting data is identified in your log data. +Now that you're collecting resource logs, create a log search alert to be proactively notified when interesting data is identified in your log data. > [!div class="nextstepaction"]-> [Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md) +> [Create a log search alert for an Azure resource](../alerts/tutorial-log-alert.md) |
azure-monitor | Analyze Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md | An unexpected increase in any of these factors can result in increased charges f To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period. -The following example is a [log alert rule](../alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** setting to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this option will result in a higher charge for the alert rule. +The following example is a [log search alert rule](../alerts/alerts-types.md#log-alerts) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** setting to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this option will result in a higher charge for the alert rule. | Setting | Value | |:|:| |
azure-monitor | Cross Workspace Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md | If you manage subscriptions in other Microsoft Entra tenants through [Azure Ligh * Cross-resource and cross-service queries don't support parameterized functions and functions whose definition includes other cross-workspace or cross-service expressions, including `adx()`, `arg()`, `resource()`, `workspace()`, and `app()`. * You can include up to 100 Log Analytics workspaces or classic Application Insights resources in a single query. * Querying across a large number of resources can substantially slow down the query.-* Cross-resource queries in log alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](../alerts/alerts-log-api-switch.md). +* Cross-resource queries in log search alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](../alerts/alerts-log-api-switch.md). * References to a cross resource, such as another workspace, should be explicit and can't be parameterized. ## Query across workspaces, applications, and resources using functions applicationsScoping ``` >[!NOTE]-> This method can't be used with log alerts because the access validation of the alert rule resources, including workspaces and applications, is performed at alert creation time. Adding new resources to the function after the alert creation isn't supported. If you prefer to use a function for resource scoping in log alerts, you must edit the alert rule in the portal or with an Azure Resource Manager template to update the scoped resources. Alternatively, you can include the list of resources in the log alert query. +> This method can't be used with log search alerts because the access validation of the alert rule resources, including workspaces and applications, is performed at alert creation time. Adding new resources to the function after the alert creation isn't supported. If you prefer to use a function for resource scoping in log search alerts, you must edit the alert rule in the portal or with an Azure Resource Manager template to update the scoped resources. Alternatively, you can include the list of resources in the log search alert query. ## Query across Log Analytics workspaces using workspace() |
azure-monitor | Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md | Key rotation has two modes: All your data remains accessible after the key rotation operation. Data is always encrypted with the Account Encryption Key ("AEK"), which is encrypted with your new Key Encryption Key ("KEK") version in Key Vault. -## Customer-managed key for saved queries and log alerts +## Customer-managed key for saved queries and log search alerts -The query language used in Log Analytics is expressive and can contain sensitive information in comments, or in the query syntax. Some organizations require that such information is kept protected under Customer-managed key policy and you need save your queries encrypted with your key. Azure Monitor enables you to store saved queries and log alerts encrypted with your key in your own Storage Account when linked to your workspace. +The query language used in Log Analytics is expressive and can contain sensitive information in comments, or in the query syntax. Some organizations require that such information is kept protected under Customer-managed key policy and you need to save your queries encrypted with your key. Azure Monitor enables you to store saved queries and log search alerts encrypted with your key in your own Storage Account when linked to your workspace. ## Customer-managed key for Workbooks -With the considerations mentioned for [Customer-managed key for saved queries and log alerts](#customer-managed-key-for-saved-queries-and-log-alerts), Azure Monitor enables you to store Workbook queries encrypted with your key in your own Storage Account, when selecting **Save content to an Azure Storage Account** in Workbook 'Save' operation. +With the considerations mentioned for [Customer-managed key for saved queries and log search alerts](#customer-managed-key-for-saved-queries-and-log-search-alerts), Azure Monitor enables you to store Workbook queries encrypted with your key in your own Storage Account, when selecting **Save content to an Azure Storage Account** in Workbook 'Save' operation. <!-- convertborder later --> :::image type="content" source="media/customer-managed-keys/cmk-workbook.png" lightbox="media/customer-managed-keys/cmk-workbook.png" alt-text="Screenshot of Workbook save." border="false"::: > [!NOTE] > Queries remain encrypted with Microsoft key ("MMK") in the following scenarios regardless of Customer-managed key configuration: Azure dashboards, Azure Logic App, Azure Notebooks and Automation Runbooks. -When linking your Storage Account for saved queries, the service stores saved-queries and log alerts queries in your Storage Account. Having control on your Storage Account [encryption-at-rest policy](../../storage/common/customer-managed-keys-overview.md), you can protect saved queries and log alerts with Customer-managed key. You will, however, be responsible for the costs associated with that Storage Account. +When linking your Storage Account for saved queries, the service stores saved queries and log search alert queries in your Storage Account. Having control on your Storage Account [encryption-at-rest policy](../../storage/common/customer-managed-keys-overview.md), you can protect saved queries and log search alerts with Customer-managed key. You will, however, be responsible for the costs associated with that Storage Account. 
**Considerations before setting Customer-managed key for queries** * You need to have "write" permissions on your workspace and Storage Account. When linking your Storage Account for saved queries, the service stores saved-qu * Saves queries in storage are considered service artifacts and their format may change. * Linking a Storage Account for queries removes existing saves queries from your workspace. Copy saves queries that you need before this configuration. You can view your saved queries using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch). * Query 'history' and 'pin to dashboard' aren't supported when linking Storage Account for queries.-* You can link a single Storage Account to a workspace for both saved queries and log alerts queries. -* Log alerts are saved in blob storage and Customer-managed key encryption can be configured at Storage Account creation, or later. -* Fired log alerts won't contain search results or alert query. You can use [alert dimensions](../alerts/alerts-unified-log.md#split-by-alert-dimensions) to get context in the fired alerts. +* You can link a single Storage Account to a workspace for both saved queries and log search alert queries. +* Log search alerts are saved in blob storage and Customer-managed key encryption can be configured at Storage Account creation, or later. +* Fired log search alerts won't contain search results or alert query. You can use [alert dimensions](../alerts/alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) to get context in the fired alerts. **Configure BYOS for saved queries** Content-type: application/json After the configuration, any new *saved search* query will be saved in your storage. -**Configure BYOS for log alerts queries** +**Configure BYOS for log search alert queries** -Link a Storage Account for *Alerts* to keep *log alerts* queries in your Storage Account. +Link a Storage Account for *Alerts* to keep *log search alert* queries in your Storage Account. # [Azure portal](#tab/portal) |
azure-monitor | Daily Cap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md | To configure the daily cap with Azure Resource Manager, set the `dailyQuota`, `d ## Alert when daily cap is reached When the daily cap is reached for a Log Analytics workspace, a banner is displayed in the Azure portal, and an event is written to the **Operations** table in the workspace. You should create an alert rule to proactively notify you when this occurs. -To receive an alert when the daily cap is reached, create a [log alert rule](../alerts/alerts-unified-log.md) with the following details. +To receive an alert when the daily cap is reached, create a [log search alert rule](../alerts/alerts-types.md#log-alerts) with the following details. | Setting | Value | |:|:| To create an alert when the daily cap is reached, create an [Activity log alert ## View the effect of the daily cap-The following query can be used to track the data volumes that are subject to the daily cap for a Log Analytics workspace between daily cap resets. This accounts for the security data types that aren't included in the daily cap. In this example, the workspace's reset hour is 14:00. Change this value for your workspace. +The following query can be used to track the data volumes that are subject to the daily cap for a Log Analytics workspace between daily cap resets. In this example, the workspace's reset hour is 14:00. Change `DailyCapResetHour` to match the reset hour of your workspace which you can see on the Daily Cap configuration page. ```kusto let DailyCapResetHour=14; Usage-| where DataType !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent") | where TimeGenerated > ago(32d) | extend StartTime=datetime_add("hour",-1*DailyCapResetHour,StartTime) | where StartTime > startofday(ago(31d)) |
azure-monitor | Data Platform Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md | The following table describes some of the ways that you can use Azure Monitor Lo | Capability | Description | |:|:| | Analyze | Use [Log Analytics](./log-analytics-tutorial.md) in the Azure portal to write [log queries](./log-query-overview.md) and interactively analyze log data by using a powerful analysis engine. |-| Alert | Configure a [log alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. | +| Alert | Configure a [log search alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. | | Visualize | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](./log-powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources.| | Get insights | Logs support [insights](../insights/insights-overview.md) that provide a customized monitoring experience for particular applications and services. | | Retrieve | Access log query results from:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](/rest/api/loganalytics/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> | This configuration will be different depending on the data source. For example: Azure Monitor Logs stores the data that it collects in one or more [Log Analytics workspaces](./workspace-design.md). You must create at least one workspace to use Azure Monitor Logs. For a description of Log Analytics workspaces, see [Log Analytics workspace overview](log-analytics-workspace-overview.md). ## Log Analytics -Log Analytics is a tool in the Azure portal. Use it to edit and run log queries and interactively analyze their results. You can then use those queries to support other features in Azure Monitor, such as log query alerts and workbooks. Access Log Analytics from the **Logs** option on the Azure Monitor menu or from most other services in the Azure portal. +Log Analytics is a tool in the Azure portal. Use it to edit and run log queries and interactively analyze their results. You can then use those queries to support other features in Azure Monitor, such as log search alerts and workbooks. Access Log Analytics from the **Logs** option on the Azure Monitor menu or from most other services in the Azure portal. For a description of Log Analytics, see [Overview of Log Analytics in Azure Monitor](./log-analytics-overview.md). 
To walk through using Log Analytics features to create a simple log query and analyze its results, see [Log Analytics tutorial](./log-analytics-tutorial.md). |
azure-monitor | Log Analytics Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-overview.md | Log Analytics is a tool in the Azure portal that's used to edit and run log quer You might write a simple query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze them. Or you might write a more advanced query to perform statistical analysis and visualize the results in a chart to identify a particular trend. -Whether you work with the results of your queries interactively or use them with other Azure Monitor features, such as log query alerts or workbooks, Log Analytics is the tool that you'll use to write and test them. +Whether you work with the results of your queries interactively or use them with other Azure Monitor features, such as log search alerts or workbooks, Log Analytics is the tool that you'll use to write and test them. > [!TIP] > This article describes Log Analytics and its features. If you want to jump right into a tutorial, see [Log Analytics tutorial](./log-analytics-tutorial.md). The top bar has controls for working with a query in the query window. | Time picker | Select the time range for the data available to the query. This action is overridden if you include a time filter in the query. See [Log query scope and time range in Azure Monitor Log Analytics](./scope.md). | | Save button | Save the query to a [query pack](./query-packs.md). Saved queries are available from: <ul><li> The **Other** section in the **Queries** dialog for the workspace</li><li>The **Other** section in the **Queries** tab in the [left sidebar](#left-sidebar) for the workspace</ul> | Share button | Copy a link to the query, the query text, or the query results to the clipboard. |-| New alert rule button | Open the Create an alert rule page. Use this page to [create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=log) with an alert type of [log alert](../alerts/alerts-types.md#log-alerts). The page opens with the [Conditions tab](../alerts/alerts-create-new-alert-rule.md?tabs=log#set-the-alert-rule-conditions) selected, and your query is added to the **Search query** field. | +| New alert rule button | Open the Create an alert rule page. Use this page to [create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=log) with an alert type of [log search alert](../alerts/alerts-types.md#log-alerts). The page opens with the [Conditions tab](../alerts/alerts-create-new-alert-rule.md?tabs=log#set-the-alert-rule-conditions) selected, and your query is added to the **Search query** field. | | Export button | Export the results of the query to a CSV file or the query to Power Query Formula Language format for use with Power BI. | | Pin to button | Pin the results of the query to an Azure dashboard or add them to an Azure workbook. | | Format query button | Arrange the selected text for readability. | |
azure-monitor | Log Query Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-query-overview.md | Azure Monitor Logs is based on Azure Data Explorer, and log queries are written Areas in Azure Monitor where you'll use queries include: - [Log Analytics](../logs/log-analytics-overview.md): Use this primary tool in the Azure portal to edit log queries and interactively analyze their results. Even if you intend to use a log query elsewhere in Azure Monitor, you'll typically write and test it in Log Analytics before you copy it to its final location.-- [Log alert rules](../alerts/alerts-overview.md): Proactively identify issues from data in your workspace. Each alert rule is based on a log query that's automatically run at regular intervals. The results are inspected to determine if an alert should be created.+- [Log search alert rules](../alerts/alerts-overview.md): Proactively identify issues from data in your workspace. Each alert rule is based on a log query that's automatically run at regular intervals. The results are inspected to determine if an alert should be created. - [Workbooks](../visualize/workbooks-overview.md): Include the results of log queries by using different visualizations in interactive visual reports in the Azure portal. - [Azure dashboards](../visualize/tutorial-logs-dashboards.md): Pin the results of any query into an Azure dashboard, which allows you to visualize log and metric data together and optionally share with other Azure users. - [Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md): Use the results of a log query in an automated workflow by using a logic app workflow. |
azure-monitor | Monitor Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/monitor-workspace.md | The list shows the resource IDs where the agent has the wrong configuration. To ## Alert rules -Use [log query alerts](../alerts/alerts-log-query.md) in Azure Monitor to be proactively notified when an issue is detected in your Log Analytics workspace. Use a strategy that allows you to respond in a timely manner to issues while minimizing your costs. Your subscription will be charged for each alert rule as listed in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs). +Use [log search alerts](../alerts/alerts-log-query.md) in Azure Monitor to be proactively notified when an issue is detected in your Log Analytics workspace. Use a strategy that allows you to respond in a timely manner to issues while minimizing your costs. Your subscription will be charged for each alert rule as listed in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs). A recommended strategy is to start with two alert rules based on the level of the issue. Use a short frequency such as every 5 minutes for Errors and a longer frequency such as 24 hours for Warnings. Because Errors indicate potential data loss, you want to respond to them quickly to minimize any loss. Warnings typically indicate an issue that doesn't require immediate attention, so you can review them daily. -Use the process in [Create, view, and manage log alerts by using Azure Monitor](../alerts/alerts-log.md) to create the log alert rules. The following sections describe the details for each rule. +Use the process in [Create, view, and manage log search alerts by using Azure Monitor](../alerts/alerts-log.md) to create the log search alert rules. The following sections describe the details for each rule. | Query | Threshold value | Period | Frequency | |:|:|:|:| The following example creates a Warning alert when the data collection has reach ## Next steps -- Learn more about [log alerts](../alerts/alerts-log.md).+- Learn more about [log search alerts](../alerts/alerts-log.md). - [Collect query audit data](./query-audit.md) for your workspace. |
azure-monitor | Private Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md | To configure your Azure Storage account to use CMKs with Key Vault, use the [Azu > - When linking Storage Account for query, existing saved queries in workspace are deleted permanently for privacy. You can copy existing saved queries before storage link using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch). > - Queries saved in [query pack](./query-packs.md) aren't encrypted with Customer-managed key. Select **Save as Legacy query** when saving queries instead, to protect them with Customer-managed key. > - Saved queries are stored in table storage and encrypted with Customer-managed key when encryption is configured at Storage Account creation.-> - Log alerts are saved in blob storage where configuration of Customer-managed key encryption can be at Storage Account creation, or later. +> - Log search alerts are saved in blob storage where configuration of Customer-managed key encryption can be at Storage Account creation, or later. > - You can use a single Storage Account for all purposes, query, alert, custom log and IIS logs. Linking storage for custom log and IIS logs might require more Storage Accounts for scale, depending on the ingestion rate and storage limits. You can link up to five Storage Accounts to a workspace. ## Link storage accounts to your Log Analytics workspace |
azure-monitor | Query Audit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-audit.md | An audit record is created each time a query is run. If you send the data to a L |AzureAutomation|[Azure Automation.](../../automation/overview.md)| |AzureMonitorLogsConnector|[Azure Monitor Logs Connector](../../connectors/connectors-azure-monitor-logs.md).| |csharpsdk|[Log Analytics Query API.](../logs/api/overview.md)|-|Draft-Monitor|[Log alert creation in the Azure portal.](../alerts/alerts-create-new-alert-rule.md?tabs=log)| +|Draft-Monitor|[Log search alert creation in the Azure portal.](../alerts/alerts-create-new-alert-rule.md?tabs=log)| |Grafana|[Grafana connector.](../visualize/grafana-plugin.md)| |IbizaExtension|Experiences of Log Analytics in the Azure portal.| |infraInsights/container|[Container insights.](../containers/container-insights-overview.md)| |
azure-monitor | Monitor Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-azure-monitor.md | These are now listed in the [Log Analytics user interface](./logs/queries.md). ## Alerts -Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](./alerts/alerts-metric-overview.md), [logs](./alerts/alerts-unified-log.md), and the [activity log](./alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks. +Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](./alerts/alerts-metric-overview.md), [logs](./alerts/alerts-types.md#log-alerts), and the [activity log](./alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks. For an in-depth discussion of using alerts with autoscale, see [Troubleshoot Azure autoscale](./autoscale/autoscale-troubleshoot.md). |
azure-monitor | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md | An effective monitoring solution proactively responds to critical events, withou **[Azure Monitor Alerts](alerts/alerts-overview.md)** notify you of critical conditions and can take corrective action. Alert rules can be based on metric or log data. - Metric alert rules provide near-real-time alerts based on collected metrics. -- Log alerts rules based on logs allow for complex logic across data from multiple sources.+- Log search alert rules based on logs allow for complex logic across data from multiple sources. Alert rules use [action groups](alerts/action-groups.md), which can perform actions such as sending email or SMS notifications. Action groups can send notifications using webhooks to trigger external processes or to integrate with your IT service management tools. Action groups, actions, and sets of recipients can be shared across multiple rules. |
azure-monitor | Resource Manager Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-manager-samples.md | In the request body, provide a link to your template and parameter file. - [Agents](agents/resource-manager-agent.md): Deploy and configure the Log Analytics agent and a diagnostic extension. - Alerts:- - [Log alert rules](alerts/resource-manager-alerts-log.md): Configure alerts from log queries and Azure Activity Log. + - [Log search alert rules](alerts/resource-manager-alerts-log.md): Configure alerts from log queries and Azure Activity Log. - [Metric alert rules](alerts/resource-manager-alerts-metric.md): Configure alerts from metrics that use different kinds of logic. - [Application Insights](app/resource-manager-app-resource.md) - [Diagnostic settings](essentials/resource-manager-diagnostic-settings.md): Create diagnostic settings to forward logs and metrics from different resource types. |
azure-monitor | Roles Permissions Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/roles-permissions-security.md | If the preceding built-in roles don't meet the exact needs of your team, you can | Microsoft.Insights/MetricDefinitions/Read |Read metric definitions (list of available metric types for a resource). | | Microsoft.Insights/Metrics/Read |Read metrics for a resource. | | Microsoft.Insights/Register/Action |Register the Azure Monitor resource provider. |-| Microsoft.Insights/ScheduledQueryRules/[Read, Write, Delete] |Read, write, or delete log alerts in Azure Monitor. | +| Microsoft.Insights/ScheduledQueryRules/[Read, Write, Delete] |Read, write, or delete log search alerts in Azure Monitor. | > [!NOTE] > Access to alerts, diagnostic settings, and metrics for a resource requires that the user has read access to the resource type and scope of that resource. Creating a diagnostic setting that sends data to a storage account or streams to event hubs requires the user to also have ListKeys permission on the target resource. |
azure-monitor | Monitor Virtual Machine Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-agent.md | |
azure-monitor | Monitor Virtual Machine Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md | Azure Monitor provides a set of [recommended alert rules](tutorial-monitor-vm-al ## Alert types -The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log query alerts](../alerts/alerts-log-query.md). The type of alert rule that you create for a particular scenario depends on where the data that you're alerting on is located. +The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log search alerts](../alerts/alerts-log-query.md). The type of alert rule that you create for a particular scenario depends on where the data that you're alerting on is located. You might have cases where data for a particular alerting scenario is available in both Metrics and Logs. If so, you need to determine which rule type to use. You might also have flexibility in how you collect certain data and let your decision of alert rule type drive your decision for data collection method. Data sources for metric alerts: - Host metrics for Azure virtual machines, which are collected automatically - Metrics collected by Azure Monitor Agent from the guest operating system -### Log alerts -Common uses for log alerts: +### Log search alerts +Common uses for log search alerts: - Alert when a particular event or pattern of events from Windows event log or Syslog are found. These alert rules typically measure table rows returned from the query. - Alert based on a calculation of numeric data across multiple machines. These alert rules typically measure the calculation of a numeric column in the query results. -Data sources for log alerts: +Data sources for log search alerts: - All data collected in a Log Analytics workspace ## Scaling alert rules As you identify requirements for more metric alert rules, follow this same strat - Minimize the number of alert rules you need to manage. - Ensure that they're automatically applied to any new machines. -### Log alert rules +### Log search alert rules -If you set the target resource of a log alert rule to a specific machine, queries are limited to data associated with that machine, which gives you individual alerts for it. This arrangement requires a separate alert rule for each machine. +If you set the target resource of a log search alert rule to a specific machine, queries are limited to data associated with that machine, which gives you individual alerts for it. This arrangement requires a separate alert rule for each machine. -If you set the target resource of a log alert rule to a Log Analytics workspace, you have access to all data in that workspace. For this reason, you can alert on data from all machines in the workgroup with a single rule. This arrangement gives you the option of creating a single alert for all machines. You can then use dimensions to create a separate alert for each machine. +If you set the target resource of a log search alert rule to a Log Analytics workspace, you have access to all data in that workspace. For this reason, you can alert on data from all machines in the workgroup with a single rule. This arrangement gives you the option of creating a single alert for all machines. You can then use dimensions to create a separate alert for each machine. For example, you might want to alert when an error event is created in the Windows event log by any machine. 
You first need to create a data collection rule as described in [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) to send these events to the `Event` table in the Log Analytics workspace. Then you create an alert rule that queries this table by using the workspace as the target resource and the condition shown in the following image. The query returns a record for any error messages on any machine. Use the **Split by dimensions** option and specify **_ResourceId** to instruct the rule to create an alert for each machine if multiple machines are returned in the results. #### Dimensions Depending on the information you want to include in the alert, you might need to split by using different dimensions. In this case, make sure the necessary dimensions are projected in the query by using the [project](/azure/data-explorer/kusto/query/projectoperator) or [extend](/azure/data-explorer/kusto/query/extendoperator) operator. Set the **Resource ID column** field to **Don't split** and include all the meaningful dimensions in the list. Make sure **Include all future values** is selected so that any value returned from the query is included. #### Dynamic thresholds-Another benefit of using log alert rules is the ability to include complex logic in the query for determining the threshold value. You can hardcode the threshold, apply it to all resources, or calculate it dynamically based on some field or calculated value. The threshold is applied to resources only according to specific conditions. For example, you might create an alert based on available memory but only for machines with a particular amount of total memory. +Another benefit of using log search alert rules is the ability to include complex logic in the query for determining the threshold value. You can hardcode the threshold, apply it to all resources, or calculate it dynamically based on some field or calculated value. The threshold is applied to resources only according to specific conditions. For example, you might create an alert based on available memory but only for machines with a particular amount of total memory. ## Common alert rules -The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log alerts are provided for each. For guidance on which type of alert to use, see [Alert types](#alert-types). If you're unfamiliar with the process for creating alert rules in Azure Monitor, see the [instructions to create a new alert rule](../alerts/alerts-create-new-alert-rule.md). +The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log search alerts are provided for each. For guidance on which type of alert to use, see [Alert types](#alert-types). If you're unfamiliar with the process for creating alert rules in Azure Monitor, see the [instructions to create a new alert rule](../alerts/alerts-create-new-alert-rule.md). > [!NOTE]-> The details for log alerts provided here are using data collected by using [VM Insights](vminsights-overview.md), which provides a set of common performance counters for the client operating system. This name is independent of the operating system type. +> The details for log search alerts provided here are using data collected by using [VM Insights](vminsights-overview.md), which provides a set of common performance counters for the client operating system. 
This name is independent of the operating system type. ### Machine unavailable One of the most common monitoring requirements for a virtual machine is to create an alert if it stops running. The best method is to create a metric alert rule in Azure Monitor by using the VM availability metric, which is currently in public preview. For a walk-through on this metric, see [Create availability alert rule for Azure virtual machine](tutorial-monitor-vm-alert-availability.md). The agent heartbeat is slightly different than the machine unavailable alert bec A metric called **Heartbeat** is included in each Log Analytics workspace. Each virtual machine connected to that workspace sends a heartbeat metric value each minute. Because the computer is a dimension on the metric, you can fire an alert when any computer fails to send a heartbeat. Set the **Aggregation type** to **Count** and the **Threshold** value to match the **Evaluation granularity**. -#### Log alert rules +#### Log search alert rules -Log query alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat), which should have a heartbeat record every minute from each machine. +Log search alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat), which should have a heartbeat record every minute from each machine. Use a rule with the following query: This section describes CPU alerts. | Windows guest | \Processor Information(_Total)\% Processor Time | | Linux guest | cpu/usage_active | -#### Log alert rules +#### Log search alert rules **CPU utilization** This section describes memory alerts. | Windows guest | \Memory\% Committed Bytes in Use<br>\Memory\Available Bytes | | Linux guest | mem/available<br>mem/available_percent | -#### Log alert rules +#### Log search alert rules **Available memory in MB** This section describes disk alerts. | Windows guest | \Logical Disk\(_Total)\% Free Space<br>\Logical Disk\(_Total)\Free Megabytes | | Linux guest | disk/free<br>disk/free_percent | -#### Log query alert rules +#### Log search alert rules **Logical disk used - all disks on each computer** InsightsMetrics | Windows guest | \Network Interface\Bytes Sent/sec<br>\Logical Disk\(_Total)\Free Megabytes | | Linux guest | disk/free<br>disk/free_percent | -#### Log query alert rules +#### Log search alert rules **Network interfaces bytes received - all interfaces** |
azure-monitor | Monitor Virtual Machine Analyze | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md | |
azure-monitor | Monitor Virtual Machine Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-data-collection.md | -This article provides guidance on collecting the most common types of telemetry from virtual machines. The exact configuration that you choose depends on the workloads that you run on your machines. Included in each section are sample log query alerts that you can use with that data. +This article provides guidance on collecting the most common types of telemetry from virtual machines. The exact configuration that you choose depends on the workloads that you run on your machines. Included in each section are sample log search alerts that you can use with that data. - For more information about analyzing telemetry collected from your virtual machines, see [Monitor virtual machines with Azure Monitor: Analyze monitoring data](monitor-virtual-machine-analyze.md). - For more information about using telemetry collected from your virtual machines to create alerts in Azure Monitor, see [Monitor virtual machines with Azure Monitor: Alerts](monitor-virtual-machine-alerts.md). For guidance on creating a DCR to collect performance counters, see [Collect eve Destination | Description | |:|:| | Metrics | Host metrics are automatically sent to Azure Monitor Metrics. You can use a DCR to collect client metrics so that they can be analyzed together with [metrics explorer](../essentials/metrics-getting-started.md) or used with [metrics alerts](../alerts/alerts-create-new-alert-rule.md?tabs=metric). This data is stored for 93 days. |-| Logs | Performance data stored in Azure Monitor Logs can be stored for extended periods. The data can be analyzed along with your event data by using [log queries](../logs/log-query-overview.md) with [Log Analytics](../logs/log-analytics-overview.md) or [log query alerts](../alerts/alerts-create-new-alert-rule.md?tabs=log). You can also correlate data by using complex logic across multiple machines, regions, and subscriptions.<br><br>Performance data is sent to the following tables:<br>- VM insights: [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics)<br>- Other performance data: [Perf](/azure/azure-monitor/reference/tables/perf) | +| Logs | Performance data stored in Azure Monitor Logs can be stored for extended periods. The data can be analyzed along with your event data by using [log queries](../logs/log-query-overview.md) with [Log Analytics](../logs/log-analytics-overview.md) or [log search alerts](../alerts/alerts-create-new-alert-rule.md?tabs=log). You can also correlate data by using complex logic across multiple machines, regions, and subscriptions.<br><br>Performance data is sent to the following tables:<br>- VM insights: [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics)<br>- Other performance data: [Perf](/azure/azure-monitor/reference/tables/perf) | ### Sample log queries The following samples use the `Perf` table with custom performance data. For information on performance data collected by VM insights, see [How to query logs from VM insights](../vm/vminsights-log-query.md#performance-records). Azure Monitor has no ability on its own to monitor the status of a service or da For different options to enable the Change Tracking solution on your virtual machines, see [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory). 
This solution includes methods to configure virtual machines at scale. You have to [create an Azure Automation account](../../automation/quickstarts/create-azure-automation-account-portal.md) to support the solution. -When you enable Change Tracking and Inventory, two new tables are created in your Log Analytics workspace. Use these tables for logs queries and log query alert rules. +When you enable Change Tracking and Inventory, two new tables are created in your Log Analytics workspace. Use these tables for logs queries and log search alert rules. | Table | Description | |:|:| When you enable Change Tracking and Inventory, two new tables are created in you | sort by Computer, SvcName ``` -- **Alert when a specific service stops.** Use this query in a log alert rule.+- **Alert when a specific service stops.** Use this query in a log search alert rule. ```kusto ConfigurationData When you enable Change Tracking and Inventory, two new tables are created in you | summarize AggregatedValue = count() by Computer, SvcName, SvcDisplayName, SvcState, bin(TimeGenerated, 15m) ``` -- **Alert when one of a set of services stops.** Use this query in a log alert rule.+- **Alert when one of a set of services stops.** Use this query in a log search alert rule. ```kusto let services = dynamic(["omskd","cshost","schedule","wuauserv","heathservice","efs","wsusservice","SrmSvc","CertSvc","wmsvc","vpxd","winmgmt","netman","smsexec","w3svc","sms_site_vss_writer","ccmexe","spooler","eventsystem","netlogon","kdc","ntds","lsmserv","gpsvc","dns","dfsr","dfs","dhcp","DNSCache","dmserver","messenger","w32time","plugplay","rpcss","lanmanserver","lmhosts","eventlog","lanmanworkstation","wnirm","mpssvc","dhcpserver","VSS","ClusSvc","MSExchangeTransport","MSExchangeIS"]); When you enable Change Tracking and Inventory, two new tables are created in you Port monitoring verifies that a machine is listening on a particular port. Two potential strategies for port monitoring are described here. ### Dependency agent tables-If you're using VM insights with **Processes and dependencies collection** enabled, you can use [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) to analyze connections and ports on the machine. The `VMBoundPort` table is updated every minute with each process running on the computer and the port it's listening on. You can create a log query alert similar to the missing heartbeat alert to find processes that have stopped or to alert when the machine isn't listening on a particular port. +If you're using VM insights with **Processes and dependencies collection** enabled, you can use [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) to analyze connections and ports on the machine. The `VMBoundPort` table is updated every minute with each process running on the computer and the port it's listening on. You can create a log search alert similar to the missing heartbeat alert to find processes that have stopped or to alert when the machine isn't listening on a particular port. - **Review the count of ports open on your VMs to assess which VMs have configuration and security vulnerabilities.** There's an extra cost for Connection Manager. For more information, see [Network ## Run a process on a local machine Monitoring of some workloads requires a local process. 
An example is a PowerShell script that runs on the local machine to connect to an application and collect or process data. You can use [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md), which is part of [Azure Automation](../../automation/automation-intro.md), to run a local PowerShell script. There's no direct charge for Hybrid Runbook Worker, but there's a cost for each runbook that it uses. -The runbook can access any resources on the local machine to gather required data. It can't send data directly to Azure Monitor or create an alert. To create an alert, have the runbook write an entry to a custom log. Then configure that log to be collected by Azure Monitor. Create a log query alert rule that fires on that log entry. +The runbook can access any resources on the local machine to gather required data. It can't send data directly to Azure Monitor or create an alert. To create an alert, have the runbook write an entry to a custom log. Then configure that log to be collected by Azure Monitor. Create a log search alert rule that fires on that log entry. ## Next steps |
azure-monitor | Monitor Virtual Machine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md | |
azure-monitor | Tutorial Monitor Vm Guest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-guest.md | In the empty query window, enter either **Event** or **Syslog** depending on whe :::image type="content" source="media/tutorial-monitor-vm/log-analytics-query.png" lightbox="media/tutorial-monitor-vm/log-analytics-query.png" alt-text="Screenshot that shows Log Analytics with query results."::: -For a tutorial on using Log Analytics to analyze log data, see [Log Analytics tutorial](../logs/log-analytics-tutorial.md). For a tutorial on creating alert rules from log data, see [Tutorial: Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md). +For a tutorial on using Log Analytics to analyze log data, see [Log Analytics tutorial](../logs/log-analytics-tutorial.md). For a tutorial on creating alert rules from log data, see [Tutorial: Create a log search alert for an Azure resource](../alerts/tutorial-log-alert.md). ## View guest metrics You can view metrics for your host virtual machine with metrics explorer without a DCR like [any other Azure resource](../essentials/tutorial-metrics.md). With the DCR, you can use metrics explorer to view guest metrics and host metrics. |
azure-monitor | Vminsights Dependency Agent Maintenance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md | Last updated 09/28/2023 # Dependency Agent +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. + The Dependency Agent collects data about processes running on the virtual machine and external process dependencies. Dependency Agent updates include bug fixes or support of new features or functionality. This article describes Dependency Agent requirements and how to upgrade Dependency Agent manually or through automation. >[!NOTE] |
azure-monitor | Vminsights Enable Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md | Output for this command will look similar to the following and specify whether a When you enable VM Insights for a machine, the following agents are installed. For the network requirements for these agents, see [Network requirements](../agents/log-analytics-agent.md#network-requirements). > [!IMPORTANT]-> Azure Monitor Agent has several advantages over the legacy Log Analytics agent, which will be deprecated by August 2024. After this date, Microsoft will no longer provide any support for the Log Analytics agent. [Migrate to Azure Monitor agent](../agents/azure-monitor-agent-migration.md) before August 2024 to continue ingesting data. +> Azure Monitor Agent has several advantages over the legacy Log Analytics agent, which will be deprecated by August 2024. After this date, Microsoft will no longer provide any support for the Log Analytics agent. [Migrate to Azure Monitor agent](../agents/azure-monitor-agent-migration.md) before August 2024 to continue ingesting data. - **[Azure Monitor agent](../agents/azure-monitor-agent-overview.md) or [Log Analytics agent](../agents/log-analytics-agent.md):** Collects data from the virtual machine or Virtual Machine Scale Set and delivers it to the Log Analytics workspace. When you enable VM Insights for a machine, the following agents are installed. F (If using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)) - The Dependency agent requires a connection from the virtual machine to the address 169.254.169.254. This address identifies the Azure metadata service endpoint. Ensure that firewall settings allow connections to this endpoint.-## Data collection rule +## Data collection rule When you enable VM Insights on a machine with the Azure Monitor agent, you must specify a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) to use. The DCR specifies the data to collect and the workspace to use. VM Insights creates a default DCR if one doesn't already exist. For more information on how to create and edit the VM Insights DCR, see [Enable VM Insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent). The DCR is defined by the options in the following table. > [!IMPORTANT] > VM Insights automatically creates a DCR that includes a special data stream required for its operation. Do not modify the VM Insights DCR or create your own DCR to support VM Insights. To collect additional data, such as Windows and Syslog events, create separate DCRs and associate them with your machines. -If you associate a data collection rule with the Map feature enabled to a machine on which Dependency Agent isn't installed, the Map view won't be available. To enable the Map view, set `enableAMA property = true` in the Dependency Agent extension when you install Dependency Agent. We recommend following the procedure described in [Enable VM Insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent). +If you associate a data collection rule with the Map feature enabled to a machine on which Dependency Agent isn't installed, the Map view won't be available. To enable the Map view, set `enableAMA property = true` in the Dependency Agent extension when you install Dependency Agent. 
We recommend following the procedure described in [Enable VM Insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent). ## Migrate from Log Analytics agent to Azure Monitor Agent - You can install both Azure Monitor Agent and Log Analytics agent on the same machine during migration. If a machine has both agents installed, you'll see a warning in the Azure portal that you might be collecting duplicate data.- + :::image type="content" source="media/vminsights-enable-portal/both-agents-installed.png" lightbox="media/vminsights-enable-portal/both-agents-installed.png" alt-text="Screenshot that shows both agents installed."::: > [!WARNING] If you associate a data collection rule with the Map feature enabled to a machin > | summarize max(TimeGenerated) by Computer, Category > | sort by Computer > ```- + ## Diagnostic and usage data Microsoft automatically collects usage and performance data through your use of Azure Monitor. Microsoft uses this data to improve the quality, security, and integrity of the service. |
azure-monitor | Vminsights Log Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md | Last updated 09/28/2023 # How to query logs from VM insights +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. + VM insights collects performance and connection metrics, computer and process inventory data, and health state information and forwards it to the Log Analytics workspace in Azure Monitor. This data is available for [query](../logs/log-query-overview.md) in Azure Monitor. You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting. ## Map records |
azure-monitor | Vminsights Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md | Last updated 09/28/2023 # Chart performance with VM insights +> [!CAUTION] +> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. + VM insights includes a set of performance charts that target several key [performance indicators](vminsights-log-query.md#performance-records) to help you determine how well a virtual machine is performing. The charts show resource utilization over a period of time. You can use them to identify bottlenecks and anomalies. You can also switch to a perspective that lists each machine to view resource utilization based on the metric selected. VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization. Performance complements the health monitoring feature and helps to: |
azure-monitor | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md | Agents|[MMA Discovery and Removal Utility](agents/azure-monitor-agent-mma-remova Agents|[Send data to Event Hubs and Storage (Preview)](agents/azure-monitor-agent-send-data-to-event-hubs-and-storage.md)|Update azure-monitor-agent-send-data-to-event-hubs-and-storage.md| Alerts|[Resource Manager template samples for metric alert rules in Azure Monitor](alerts/resource-manager-alerts-metric.md)|We've added a clarification about the parameters used when creating metric alert rules programatically.| Alerts|[Manage your alert instances](alerts/alerts-manage-alert-instances.md)|We've added documentation about the new alerts timeline view.|-Alerts|[Create or edit a log alert rule](alerts/alerts-create-log-alert-rule.md)|Added limitations to log search alert queries.| -Alerts|[Create or edit a log alert rule](alerts/alerts-create-log-alert-rule.md)|We've added samples of log search alert rule queries that use Azure Data Explorer and Azure Resource Graph.| +Alerts|[Create or edit a log search alert rule](alerts/alerts-create-log-alert-rule.md)|Added limitations to log search alert queries.| +Alerts|[Create or edit a log search alert rule](alerts/alerts-create-log-alert-rule.md)|We've added samples of log search alert rule queries that use Azure Data Explorer and Azure Resource Graph.| Application-Insights|[Data Collection Basics of Azure Monitor Application Insights](app/opentelemetry-overview.md)|We've provided information on how to get a list of Application Insights SDK versions and their names.| Application-Insights|[Application Insights logging with .NET](app/ilogger.md)|We've clarified steps to view ILogger telemetry.| Application-Insights|[Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)|The script to discover classic resources has been updated.| Logs|[Query data in Azure Data Explorer and Azure Resource Graph from Azure Moni |||| Agents|[Azure Monitor Agent Health (Preview)](agents/azure-monitor-agent-health.md)|Introduced a new Azure Monitor Agent Health workbook, which monitors the health of agents deployed across your organization. | Alerts|[Manage your alert instances](alerts/alerts-manage-alert-instances.md)|View alerts as a timeline (preview)|-Alerts|[Upgrade to the Log Alerts API from the legacy Log Analytics alerts API](alerts/alerts-log-api-switch.md)|Changes to the log alert rule creation experience| +Alerts|[Upgrade to the Scheduled Query Rules API from the legacy Log Analytics alerts API](alerts/alerts-log-api-switch.md)|Changes to the log alert rule creation experience| Application-Insights|[Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)|We now support migrating classic components to workspace-based components via PowerShell cmdlet. 
| Application-Insights|[EventCounters introduction](app/eventcounters.md)|Code samples have been provided for the latest .NET versions.| Application-Insights|[Enable a framework extension for Application Insights JavaScript SDK](app/javascript-framework-extensions.md)|We've added a section for the React Native Manual Device Plugin, and clarified exception tracking and device info collection.| Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to |[Convert ITSM actions that send events to ServiceNow to Secure Webhook actions](./alerts/itsm-convert-servicenow-to-webhook.md)|As of September 2022, we're starting the three-year process of deprecating support of using ITSM actions to send events to ServiceNow. Learn how to convert ITSM actions that send events to ServiceNow to Secure Webhook actions.| |[Create a new alert rule](./alerts/alerts-create-new-alert-rule.md)|Added description of all available monitoring services to **Create a new alert rule** and **Alert processing rules** pages. <br><br>Added support for regional processing for metric alert rules that monitor a custom metric with the scope defined as one of the supported regions. <br><br> Clarified that selecting the **Automatically resolve alerts** setting makes log alerts stateful.| |[Types of Azure Monitor alerts](alerts/alerts-types.md)|Azure Database for PostgreSQL - Flexible Servers is supported for monitoring multiple resources.|-|[Upgrade legacy rules management to the current Log Alerts API from legacy Log Analytics Alert API](./alerts/alerts-log-api-switch.md)|The process of moving legacy log alert rules management from the legacy API to the current API is now supported by the government cloud.| +|[Upgrade legacy rules management to the current Scheduled Query Rules API from legacy Log Analytics Alert API](./alerts/alerts-log-api-switch.md)|The process of moving legacy log alert rules management from the legacy API to the current API is now supported by the government cloud.| ### Application Insights Azure Monitor Workbooks documentation previously resided on an external GitHub r |:|:| | [Configure Azure to connect ITSM tools by using Secure Webhook](alerts/itsm-connector-secure-webhook-connections-azure-configuration.md) | Added the workflow for ITSM management and removed all references to System Center Service Manager. | | [Overview of Azure Monitor Alerts](alerts/alerts-overview.md) | Complete rewrite. |-| [Resource Manager template samples for log query alerts](alerts/resource-manager-alerts-log.md) | Added Bicep samples for alerting to the Resource Manager template samples articles. | +| [Resource Manager template samples for log search alerts](alerts/resource-manager-alerts-log.md) | Added Bicep samples for alerting to the Resource Manager template samples articles. | | [Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md) | Added a newly supported resource type. | ### Application Insights |
azure-netapp-files | Azure Netapp Files Resource Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md | The following table describes resource limits for Azure NetApp Files: | Minimum size of a single regular volume | 100 GiB | No | | Maximum size of a single regular volume | 100 TiB | No | | Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 102,401 GiB | No |+| Large volume size increase | 30% of lowest provisioned size | Yes | | Maximum size of a single large volume | 500 TiB | No | | Maximum size of a single file | 16 TiB | No | | Maximum size of directory metadata in a single directory | 320 MB | No | |
azure-netapp-files | Azure Netapp Files Service Levels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-service-levels.md | Service levels are an attribute of a capacity pool. Service levels are defined a Azure NetApp Files supports three service levels: *Ultra*, *Premium*, and *Standard*. -* <a name="Ultra"></a>Ultra storage: - The Ultra service level provides up to 128 MiB/s of throughput per 1 TiB of capacity provisioned. --* <a name="Premium"></a>Premium storage: - The Premium service level provides up to 64 MiB/s of throughput per 1 TiB of capacity provisioned. - * <a name="Standard"></a>Standard storage: The Standard service level provides up to 16 MiB/s of throughput per 1 TiB of capacity provisioned. * Standard storage with cool access: The throughput experience for this service level is the same as the Standard service level for data that is in the hot tier. But it may differ when data that resides in the cool tier is accessed. For more information, see [Standard storage with cool access in Azure NetApp Files](cool-access-introduction.md#effects-of-cool-access-on-data). +* <a name="Premium"></a>Premium storage: + The Premium service level provides up to 64 MiB/s of throughput per 1 TiB of capacity provisioned. ++* <a name="Ultra"></a>Ultra storage: + The Ultra service level provides up to 128 MiB/s of throughput per 1 TiB of capacity provisioned. + ## Throughput limits The throughput limit for a volume is determined by the combination of the following factors: |
azure-netapp-files | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md | The following diagram demonstrates how customer-managed keys work with Azure Net * Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption. * To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volume configured using Basic network features. Follow instructions in to [Set the Network Features option](configure-network-features.md#set-the-network-features-option) in the volume creation page. * For increased security, you can select the **Disable public access** option within the network settings of your key vault. When selecting this option, you must also select **Allow trusted Microsoft services to bypass this firewall** to permit the Azure NetApp Files service to access your encryption key.-* Automatic Managed System Identity (MSI) certificate renewal isn't currently supported. It's recommended you create an Azure monitor alert to notify you when the MSI certificate is set to expire. -* The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer be valid and the customer-managed key volumes under the NetApp account will go offline.** - * To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message communicates the date of eligibility. - * Version 2.42 or later of the Azure CLI supports running the `renewCredentials` operation with the [az netappfiles account command](/cli/azure/netappfiles/account#az-netappfiles-account-renew-credentials). For example: -- `az netappfiles account renew-credentials ΓÇô-account-name myaccount -ΓÇôresource-group myresourcegroup` -- * If the account isn't eligible for MSI certificate renewal, an error message communicates the date and time when the account is eligible. It's recommended you run this operation periodically (for example, daily) to prevent the certificate from expiring and from the customer-managed key volume going offline. +* Customer-managed keys support automatic Managed System Identity (MSI) certificate renewal. If your certificate is valid, you don't need to manually update it. * Applying Azure network security groups on the private link subnet to Azure Key Vault isn't supported for Azure NetApp Files customer-managed keys. Network security groups don't affect connectivity to Private Link unless `Private endpoint network policy` is enabled on the subnet. It's recommended to keep this option disabled. * If Azure NetApp Files fails to create a customer-managed key volume, error messages are displayed. Refer to the [Error messages and troubleshooting](#error-messages-and-troubleshooting) section for more information. * If Azure Key Vault becomes inaccessible, Azure NetApp Files loses its access to the encryption keys and the ability to read or write data to volumes enabled with customer-managed keys. In this situation, create a support ticket to have access manually restored for the affected volumes. |
azure-netapp-files | Cool Access Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md | Standard storage with cool access is supported for the following regions: * East US 2 * France Central * Germany West Central+* Japan East * North Central US * North Europe * Southeast Asia |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | +## February 2024 ++* [Large volumes (Preview) improvement:](large-volumes-requirements-considerations.md#requirements-and-considerations) volume size increase beyond 30% default limit ++ For capacity and resource planning purposes, the Azure NetApp Files large volume feature has a [volume size increase limit of up to 30% of the lowest provisioned size](large-volumes-requirements-considerations.md#requirements-and-considerations). This volume size increase limit is now adjustable beyond the default 30% limit via a support ticket. For more information, see [Resource limits](azure-netapp-files-resource-limits.md). + + ## January 2024 * [Standard network features - Edit volumes available in US Gov regions](azure-netapp-files-network-topologies.md#regions-edit-network-features) (Preview) |
azure-resource-manager | Bicep Config Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-modules.md | Title: Module setting for Bicep config description: Describes how to customize configuration values for modules in Bicep deployments. Previously updated : 01/17/2024 Last updated : 02/16/2024 # Add module settings in the Bicep config file You can override the public module registry alias definition in the bicepconfig. ## Configure profiles and credentials -To [publish](bicep-cli.md#publish) modules to a private module registry or to [restore](bicep-cli.md#restore) external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the profile and the credential precedence for authenticating to the registry. By default, Bicep uses the `AzureCloud` profile and the credentials from the user authenticated in Azure CLI or Azure PowerShell. You can customize `currentProfile` and `credentialPrecedence` in the config file. +To [publish](bicep-cli.md#publish) modules to a private module registry or to [restore](bicep-cli.md#restore) external modules to the local cache, the account must have the correct permissions to access the registry. You can manually configure `currentProfile` and `credentialPrecedence` in the [Bicep config file](./bicep-config.md) for authenticating to the registry. ```json { The available profiles are: - AzureChinaCloud - AzureUSGovernment -You can customize these profiles, or add new profiles for your on-premises environments. +By default, Bicep uses the `AzureCloud` profile and the credentials of the user authenticated in Azure CLI or Azure PowerShell. You can customize these profiles or include new ones for your on-premises environments. If you want to publish or restore a module to a national cloud environment such as `AzureUSGovernment`, you must set `"currentProfile": "AzureUSGovernment"` even if you've selected that cloud profile in the Azure CLI. Bicep is unable to automatically determine the current cloud profile based on Azure CLI settings. Bicep uses the [Azure.Identity SDK](/dotnet/api/azure.identity) to do authentication. The available credential types are: |
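To illustrate the `currentProfile` and `credentialPrecedence` settings described above, a minimal `bicepconfig.json` sketch might look like the following; the custom profile name and its endpoint URLs are placeholders, not real endpoints, and the file is a sketch to adapt rather than a verified configuration.

```json
{
  "cloud": {
    "currentProfile": "AzureUSGovernment",
    "profiles": {
      "MyOnPremCloud": {
        "resourceManagerEndpoint": "https://management.contoso.example",
        "activeDirectoryAuthority": "https://login.contoso.example"
      }
    },
    "credentialPrecedence": [
      "AzureCLI",
      "AzurePowerShell"
    ]
  }
}
```

Here `currentProfile` selects the profile used for publish and restore operations, `MyOnPremCloud` shows where a custom on-premises profile would be defined, and `credentialPrecedence` controls which locally available credential Bicep tries first.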
azure-resource-manager | Azure Services Resource Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md | The resource providers for storage services are: | Resource provider namespace | Azure service | | | - | | Microsoft.ClassicStorage | Classic deployment model storage |-| Microsoft.ElasticSan | [Elastic SAN Preview](../../storage/elastic-san/index.yml) | +| Microsoft.ElasticSan | [Elastic SAN](../../storage/elastic-san/index.yml) | | Microsoft.HybridData | [StorSimple](../../storsimple/index.yml) | | Microsoft.ImportExport | [Azure Import/Export](../../import-export/storage-import-export-service.md) | | Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) | |
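If a subscription hasn't used one of these services before, the matching resource provider namespace may need to be registered first. As a hedged example using a namespace from the table above:

```azurecli
# Sketch: register the Elastic SAN resource provider in the current subscription
az provider register --namespace Microsoft.ElasticSan

# Check the registration state afterwards
az provider show --namespace Microsoft.ElasticSan --query registrationState -o tsv
```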
azure-resource-manager | Linked Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/linked-templates.md | Make sure there is no leading "?" in QueryString. The deployment adds one when a ## Template specs -Instead of maintaining your linked templates at an accessible endpoint, you can create a [template spec](template-specs.md) that packages the main template and its linked templates into a single entity you can deploy. The template spec is a resource in your Azure subscription. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec. This feature is currently in preview. +Instead of maintaining your linked templates at an accessible endpoint, you can create a [template spec](template-specs.md) that packages the main template and its linked templates into a single entity you can deploy. The template spec is a resource in your Azure subscription. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec. For more information, see: |
azure-vmware | Azure Vmware Solution Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md | description: This article provides details about the known issues of Azure VMwar Previously updated : 2/14/2024 Last updated : 2/15/2024 # Known issues: Azure VMware Solution Refer to the table to find details about resolution dates or possible workaround | When adding a cluster to my private cloud, the **Cluster-n: vSAN physical disk alarm 'Operation'** and **Cluster-n: vSAN cluster alarm 'vSAN Cluster Configuration Consistency'** alerts are active in the vSphere Client | 2021 | This alert should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 | | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **Capacity - Maximum Capacity Threshold** alarm is raised | 2023 | Alarm raised because there are more than four clusters in the private cloud with the medium form factor for the NSX-T Data Center Unified Appliance. The form factor needs to be scaled up to large. This issue should get detected through Microsoft, however you can also open a support request. | 2023 | | When I build a VMware HCX Service Mesh with the Enterprise license, the Replication Assisted vMotion Migration option isn't available. | 2023 | The default VMware HCX Compute Profile doesn't have the Replication Assisted vMotion Migration option enabled. From the Azure VMware Solution vSphere Client, select the VMware HCX option and edit the default Compute Profile to enable Replication Assisted vMotion Migration. | 2023 |-| [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | A risk assessment of CVE-2023-03048 was conducted and it was determined that sufficient controls are in place within Azure VMware Solution to reduce the risk of CVE-2023-03048 from a CVSS Base Score of 9.8 to an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 are not exposed via any interactive network path) and multiple levels of authentication and authorization necassary to gain interactive access to the vCenter network segment. Microsoft is working on a plan to roll out security fixes soon to completely remediate the security vulnerability. | October 2023 | +| [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | A risk assessment of CVE-2023-03048 was conducted and it was determined that sufficient controls are in place within Azure VMware Solution to reduce the risk of CVE-2023-03048 from a CVSS Base Score of 9.8 to an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. 
Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 are not exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. Microsoft is working on a plan to roll out security fixes soon to completely remediate the security vulnerability. | October 2023 | | The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | Use AV36, AV36P, or AV52 SKUs when RAID-6 FTT2 or RAID-1 FTT3 storage policies are needed. | N/A |-| VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://kb.vmware.com/s/article/96352) | Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you are using NE appliances in a HA configuration. | N/A | +| VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://kb.vmware.com/s/article/96352) | Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you are using NE appliances in a HA configuration. | Feb 2024 - Resolved in [VMware HCX 4.8.2](https://docs.vmware.com/en/VMware-HCX/4.8.2/rn/vmware-hcx-482-release-notes/https://docsupdatetracker.net/index.html) | In this article, you learned about the current known issues with the Azure VMware Solution. |
azure-vmware | Azure Vmware Solution Platform Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md | All Azure NetApp Files features available on Azure public cloud are also availab **Azure Arc-enabled VMware vSphere** -Customers can start their onboarding with Azure Arc-enabled VMware vSphere, install agents at-scale, and enable Azure management, observability, and security solutions, while benefitting from the existing lifecycle management capabilities. Azure Arc-enabled VMware vSphere VMs now show up alongside other Azure Arc-enabled servers under the 'Machines' view in the Azure portal. [Learn more](https://aka.ms/vSphereGAblog) +The term Azure Arc-enabled VMware vSphere refers to both on-premises vSphere and Azure VMware Solution customers. Customers can start their onboarding with Azure Arc-enabled VMware vSphere, install agents at-scale, and enable Azure management, observability, and security solutions, while benefitting from the existing lifecycle management capabilities. Azure Arc-enabled VMware vSphere VMs now show up alongside other Azure Arc-enabled servers under the 'Machines' view in the Azure portal. [Learn more](https://aka.ms/vSphereGAblog) **Five-year Reserved Instance** |

azure-vmware | Configure Azure Elastic San | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-azure-elastic-san.md | Title: Use Azure VMware Solution with Azure Elastic SAN Preview -description: Learn how to use Elastic SAN Preview with Azure VMware Solution + Title: Use Azure VMware Solution with Azure Elastic SAN +description: Learn how to use Elastic SAN with Azure VMware Solution -This article explains how to use Azure Elastic SAN Preview as backing storage for Azure VMware Solution. [Azure VMware Solution](introduction.md) supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters. +This article explains how to use Azure Elastic SAN as backing storage for Azure VMware Solution. [Azure VMware Solution](introduction.md) supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters. -Azure Elastic storage area network (SAN) addresses the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. For more information on Azure Elastic SAN, see [What is Azure Elastic SAN? Preview](../storage/elastic-san/elastic-san-introduction.md). +Azure Elastic storage area network (SAN) addresses the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. For more information on Azure Elastic SAN, see [What is Azure Elastic SAN?](../storage/elastic-san/elastic-san-introduction.md). ## Prerequisites |
azure-vmware | Deploy Arc For Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md | Title: Deploy Arc-enabled Azure VMware Solution + Title: Deploy Arc-enabled VMware vSphere for Azure VMware Solution private cloud description: Learn how to set up and enable Arc for your Azure VMware Solution private cloud. Last updated 12/08/2023 -# Deploy Arc-enabled Azure VMware Solution +# Deploy Arc-enabled VMware vSphere for Azure VMware Solution private cloud -In this article, learn how to deploy Arc for Azure VMware Solution. Once you set up the components needed, you're ready to execute operations in Azure VMware Solution vCenter Server from the Azure portal. Arc-enabled Azure VMware Solution allows you to do the actions: +In this article, learn how to deploy Arc-enabled VMware vSphere for Azure VMware Solution private cloud. Once you set up the components needed, you're ready to execute operations in Azure VMware Solution vCenter Server from the Azure portal. Arc-enabled Azure VMware Solution allows you to do the following actions: - Identify your VMware vSphere resources (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register them with Arc at scale. - Perform different virtual machine (VM) operations directly from Azure like; create, resize, delete, and power cycle operations (start/stop/restart) on VMware VMs consistently with Azure. In this article, learn how to deploy Arc for Azure VMware Solution. Once you set ## Deployment Considerations -Running software in Azure VMware Solution, as a private cloud in Azure, offers some benefits not realized by operating your environment outside of Azure. For software running in a VM, such as SQL Server and Windows Server, running in Azure VMware Solution provides additional value such as free Extended Security Updates (ESUs). +When you run software in Azure VMware Solution, as a private cloud in Azure, there are benefits not realized by operating your environment outside of Azure. For software running in a virtual machine (VM) like, SQL Server and Windows Server, running in Azure VMware Solution provides more value such as free Extended Security Updates (ESUs). -To take advantage of these benefits if you are running in an Azure VMware Solution it is important to enable Arc through this document to fully integrate the experience with the AVS private cloud. Alternatively, Arc-enabling VMs through the following mechanisms will not create the necessary attributes to register the VM and software as part of Azure VMware Solution and therefore result in billing for SQL Server ESUs for: +To take advantage of the benefits when you're running in an Azure VMware Solution, use this article to enable Arc and fully integrate the experience with the Azure VMware Solution private cloud. Alternatively, Arc-enabling VMs through the following mechanisms won't create the necessary attributes to register the VM and software as part of Azure VMware Solution and will result in billing for SQL Server ESUs for: - Arc-enabled servers- - Arc-enabled VMware vSphere- - SQL Server enabled by Azure Arc ## How to manually integrate an Arc-enabled VM into Azure VMware Solutions There are two ways to refresh the integration between the Arc-enabled VMs and Az 1. In the Azure VMware Solution private cloud, navigate to the vCenter Server inventory and Virtual Machines section within the portal. 
Locate the virtual machine that requires updating and follow the process to 'Enable in Azure'. If the option is grayed out, you must first **Remove from Azure** and then proceed to **Enable in Azure** -2. Run the [az connectedvmware vm create ](/cli/azure/connectedvmware/vm#az-connectedvmware-vm-create)Azure CLI command on the VM in Azure VMware Solution to update the machine type.  +2. Run the [az connectedvmware vm create](/cli/azure/connectedvmware/vm?view=azure-cli-latest%22%20\l%20%22az-connectedvmware-vm-create&preserve-view=true) Azure CLI command on the VM in Azure VMware Solution to update the machine type.  ```azurecli You need the following items to ensure you're set up to begin the onboarding pro - From the Management VM, verify you have access to [vCenter Server and NSX-T manager portals](/azure/azure-vmware/tutorial-access-private-cloud#connect-to-the-vcenter-server-of-your-private-cloud). - A resource group in the subscription where you have an owner or contributor role. - An unused, isolated [NSX Data Center network segment](/azure/azure-vmware/tutorial-nsx-t-network-segment) that is a static network segment used for deploying the Arc for Azure VMware Solution OVA. If an isolated NSX-T Data Center network segment doesn't exist, one gets created.-- Verify your Azure subscription is enabled and has connectivity to Azure end points.-- The firewall and proxy URLs must be allowlisted in order to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. See the [Azure Arc resource bridge network requirements](/azure/azure-arc/resource-bridge/network-requirements).-- Verify your vCenter Server version is 6.7 or higher.+- The firewall and proxy URLs must be allowlisted to enable communication from the management machine and Appliance VM to the required Arc resource bridge URLs. See the [Azure Arc resource bridge network requirements](/azure/azure-arc/resource-bridge/network-requirements). +- Verify your vCenter Server version is 7.0 or higher. - A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs. - A datastore with a minimum of 100 GB of free disk space is available through the resource pool or cluster. -- On the vCenter Server, allow inbound connections on TCP port 443. This action ensures that the Arc resource bridge and VMware vSphere cluster extension can communicate with the vCenter Server. > [!NOTE] > - Private endpoint is currently not supported. > - DHCP support isn't available to customers at this time, only static IP addresses are currently supported. +If you want to use a custom DNS, use the following steps: -## Registration to Arc for Azure VMware Solution feature set --The following **Register features** are for provider registration using Azure CLI. --```azurecli -az provider register --namespace Microsoft.ConnectedVMwarevSphere -az provider register --namespace Microsoft.ExtendedLocation -az provider register --namespace Microsoft.KubernetesConfiguration -az provider register --namespace Microsoft.ResourceConnector -az provider register --namespace Microsoft.AVS -``` -Alternately, users can sign in to their Subscription, navigate to the **Resource providers** tab, and register themselves on the resource providers mentioned previously. -+1. In your Azure VMware Solution private cloud, navigate to the DNS page, under **Workload networking**, select **DNS** and identify the default forwarder-zones under the **DNS zones** tab. +1. 
Edit the forwarder zone to add the custom DNS server IP. By adding the custom DNS as the first IP, it allows requests to be directly forwarded to the first IP and decreases the number of retries. ## Onboard process to deploy Azure Arc Use the following steps to guide you through the process to onboard Azure Arc for Azure VMware Solution. -1. Sign in to the jumpbox VM and extract the contents from the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases/latest). The extracted file contains the scripts to install the preview software. +1. Sign in to the Management VM and extract the contents from the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases/latest). The extracted file contains the scripts to install the software. 2. Open the 'config_avs.json' file and populate all the variables. **Config JSON** Use the following steps to guide you through the process to onboard Azure Arc fo - `GatewayIPAddress` is the gateway for the segment for Arc appliance VM. - `applianceControlPlaneIpAddress` is the IP address for the Kubernetes API server that should be part of the segment IP CIDR provided. It shouldn't be part of the K8s node pool IP range. - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd` are the starting and ending IP of the pool of IPs to assign to the appliance VM. Both need to be within the `networkCIDRForApplianceVM`. - - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd`, `gatewayIPAddress` ,`applianceControlPlaneIpAddress` are optional. You can choose to skip all the optional fields or provide values for all. If you choose not to provide the optional fields, then you must use /28 address space for `networkCIDRForApplianceVM` + - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd`, `gatewayIPAddress` ,`applianceControlPlaneIpAddress` are optional. You can choose to skip all the optional fields or provide values for all. If you choose not to provide the optional fields, then you must use /28 address space for `networkCIDRForApplianceVM` with the first IP as the gateway. + - If all the parameters are provided, the firewall and proxy URLs must be allowlisted for the IPs between k8sNodeIPPoolStart, k8sNodeIPPoolEnd. + - If you're skipping the optional fields, the firewall and proxy URLs must be allowlisted for the following IPs in the segment. If the networkCIDRForApplianceVM is x.y.z.1/28, the IPs to allowlist are between x.y.z.11 – x.y.z.14. See the [Azure Arc resource bridge network requirements](/azure/azure-arc/resource-bridge/network-requirements).  **Json example** ```json Once you connected your Azure VMware Solution private cloud to Azure, you can br Repeat the previous steps for one or more virtual machine, network, resource pool, and VM template resources. Additionally, for virtual machines there is an additional section to configure **VM extensions**. This will enable guest management to facilitate additional Azure extensions to be installed on the VM. The steps to enable this would be:+ 1. Select **Enable guest management**. 2. Choose a __Connectivity Method__ for the Arc agent. 3. Provide an Administrator/Root access username and password for the VM. -If you choose to enable the guest management as a separate step or have issues with the VM extension install steps please review the prerequisites and steps discussed in the section below. +If you choose to enable the guest management as a separate step or have issues with the VM extension install steps, review the prerequisites and steps discussed in the following section. 
## Enable guest management and extension installation -Before you install an extension, you need to enable guest management on the VMware VM. +Before you install an extension, you must enable guest management on the VMware VM. ### Prerequisite You need to enable guest management on the VMware VM before you can install an e 1. Select **Configuration** from the left navigation for a VMware VM. 1. Verify **Enable guest management** is now checked. -From here additional extensions can be installed. See the [VM extensions](/azure/azure-arc/servers/manage-vm-extensions?branch=main) for a list of current extensions. --### Install extensions -To add extensions, follow these steps: -1. Go to **vCenter Server Inventory >** **Virtual Machines** and select the virtual machine to which you need to add an extension. -2. Locate **Settings >** **Extensions** from the left navigation and select **Add**. Alternatively, in the **Overview** page an **Extensions** click-through is listed under Properties. -1. Select the extension you want to install. Some extensions require additional information. -4. When you're done, select **Review + create**. +From here additional extensions can be installed. See the [VM extensions Overview](/azure/azure-arc/servers/manage-vm-extensions) for a list of current extensions. ### Next Steps To manage Arc-enabled Azure VMware Solution go to: [Manage Arc-enabled Azure VMware private cloud - Azure VMware Solution](/azure/azure-vmware/manage-arc-enabled-azure-vmware-solution)- To remove Arc-enabled  Azure VMWare Solution resources from Azure go to: [Remove Arc-enabled Azure VMware Solution vSphere resources from Azure - Azure VMware Solution](/azure/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure) |
azure-vmware | Manage Arc Enabled Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/manage-arc-enabled-azure-vmware-solution.md | Title: Manage Arc-enabled Azure VMware private cloud description: Learn how to manage your Arc-enabled Azure VMware private cloud. Previously updated : 12/18/2023 Last updated : 2/6/2024 The following command invokes the set credential for the specified appliance res ## Upgrade the Arc resource bridge +> [!NOTE] +> Arc resource bridges, on a supported [private cloud provider](/azure/azure-arc/resource-bridge/upgrade#private-cloud-providers) with an appliance version **1.0.15 or higher**, are automatically opted in to [cloud-managed upgrade](/azure/azure-arc/resource-bridge/upgrade#cloud-managed-upgrade).  + Azure Arc-enabled Azure VMware Private Cloud requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge are released to include security and feature updates. The Arc resource bridge can be manually upgraded from the vCenter server. You must meet all upgrade [prerequisites](/azure/azure-arc/resource-bridge/upgrade#prerequisites) before attempting to upgrade. The vCenter server must have the kubeconfig and appliance configuration files stored locally. If the cloudadmin credentials change after the initial deployment of the resource bridge, [update the Arc appliance credential](/azure/azure-vmware/manage-arc-enabled-azure-vmware-solution#update-arc-appliance-credential) before you attempt a manual upgrade. Arc resource bridge can be manually upgraded from the management machine. The [manual upgrade](/azure/azure-arc/resource-bridge/upgrade#manual-upgrade) generally takes between 30-90 minutes, depending on the network speed. The upgrade command takes your Arc resource bridge to the immediate next version, which might not be the latest available version. Multiple upgrades could be needed to reach a [supported version](/azure/azure-arc/resource-bridge/upgrade#supported-versions). Verify your resource bridge version by checking the Azure resource of your Arc resource bridge. -Arc resource bridges, on a supported [private cloud provider](/azure/azure-arc/resource-bridge/upgrade#private-cloud-providers) with an appliance version 1.0.15 or higher, are automatically opted in to [cloud-managed upgrade](/azure/azure-arc/resource-bridge/upgrade#cloud-managed-upgrade).  - ## Collect logs from the Arc resource bridge |
azure-vmware | Remove Arc Enabled Azure Vmware Solution Vsphere Resources From Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure.md | During onboarding, to create a connection between your VMware vCenter and Azure, As a last step, run the following command: -`az rest --method delete` +`az rest --method delete --url "https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.AVS/privateClouds/<private-cloud-name>/addons/arc?api-version=2022-05-01"` Once that step is done, Arc no longer works on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it doesn't affect the Azure VMware Solution private cloud for the customer. |
backup | Azure Kubernetes Service Cluster Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-support-matrix.md | Title: Azure Kubernetes Service (AKS) backup support matrix description: This article provides a summary of support settings and limitations of Azure Kubernetes Service (AKS) backup. Previously updated : 12/25/2023 Last updated : 02/16/2024 - references_regions - ignite-2023 You can use [Azure Backup](./backup-overview.md) to help protect Azure Kubernete - During restore from Vault Tier, the provided staging location shouldn't have a *Read*/*Delete Lock*; otherwise, hydrated resources aren't cleaned after restore. +- Don't install the AKS Backup Extension alongside Velero or other Velero-based backup services. Doing so could disrupt the backup service during any future Velero upgrades driven by you or by AKS backup. + ## Next steps - [About Azure Kubernetes Service cluster backup](azure-kubernetes-service-cluster-backup-concept.md) |
chaos-studio | Chaos Studio Fault Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md | -> [!CAUTION] -> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly. - The faults listed in this article are currently available for use. To understand which resource types are supported, see [Supported resource types and role assignments for Azure Chaos Studio](./chaos-studio-fault-providers.md). ## Time delay The faults listed in this article are currently available for use. To understand | Target type | Microsoft-Agent | | Supported OS types | Windows, Linux. | | Description | Adds CPU pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial CPU pressure is removed at the end of the duration or if the experiment is canceled. On Windows, the **% Processor Utility** performance counter is used at fault start to determine current CPU percentage, which is subtracted from the `pressureLevel` defined in the fault so that **% Processor Utility** hits approximately the `pressureLevel` defined in the fault parameters. |-| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. | +| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, you must install **stress-ng** manually. | | | **Windows**: None. | | Urn | urn:csci:microsoft:agent:cpuPressure/1.0 | | Parameters (key, value) | Known issues on Linux: | Target type | Microsoft-Agent | | Supported OS types | Windows, Linux. | | Description | Adds physical memory pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial physical memory pressure is removed at the end of the duration or if the experiment is canceled. |-| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. | +| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, you must install **stress-ng** manually. | | | **Windows**: None. | | Urn | urn:csci:microsoft:agent:physicalMemoryPressure/1.0 | | Parameters (key, value) | | Currently, the Windows agent doesn't reduce memory pressure when other applicati | Target type | Microsoft-Agent | | Supported OS types | Linux | | Description | Uses stress-ng to apply pressure to the disk. One or more worker processes are spawned that perform I/O processes with temporary files. 
Pressure is added to the primary disk by default, or the disk specified with the targetTempDirectory parameter. For information on how pressure is applied, see the [stress-ng](https://wiki.ubuntu.com/Kernel/Reference/stress-ng) article. |-| Prerequisites | The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. | +| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, you must install **stress-ng** manually. | | Urn | urn:csci:microsoft:agent:linuxDiskIOPressure/1.1 | | Parameters (key, value) | | | workerCount | Number of worker processes to run. Setting `workerCount` to 0 generated as many worker processes as there are number of processors. | Currently, the Windows agent doesn't reduce memory pressure when other applicati | Target type | Microsoft-Agent | | Supported OS types | Linux | | Description | Runs any stress-ng command by passing arguments directly to stress-ng. Useful when one of the predefined faults for stress-ng doesn't meet your needs. |-| Prerequisites | The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. | +| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, you must install **stress-ng** manually. | | Urn | urn:csci:microsoft:agent:stressNg/1.0 | | Parameters (key, value) | | | stressNgArguments | One or more arguments to pass to the stress-ng process. For information on possible stress-ng arguments, see the [stress-ng](https://wiki.ubuntu.com/Kernel/Reference/stress-ng) article. | Currently, only virtual machine scale sets configured with the **Uniform** orche ### Limitations * A maximum of 1000 topic entities can be passed to this fault.++## Change Event Hub State + +| Property | Value | +| - | | +| Capability name | ChangeEventHubState-1.0 | +| Target type | Microsoft-EventHub | +| Description | Sets individual event hubs to the desired state within an Azure Event Hubs namespace. You can affect specific event hub names or use ΓÇ£*ΓÇ¥ to affect all within the namespace. This can help test your messaging infrastructure for maintenance or failure scenarios. This is a discrete fault, so the entity will not be returned to the starting state automatically. | +| Prerequisites | An Azure Event Hubs namespace with at least one [event hub entity](../event-hubs/event-hubs-create.md). | +| Urn | urn:csci:microsoft:eventHub:changeEventHubState/1.0 | +| Fault type | Discrete. | +| Parameters (key, value) | | +| desiredState | The desired state for the targeted event hubs. The possible states are Active, Disabled, and SendDisabled. 
| +| eventHubs | A comma-separated list of the event hub names within the targeted namespace. Use "*" to affect all entities within the namespace. | ++### Sample JSON ++```json +{ + "name": "Branch1", + "actions": [ + { + "selectorId": "Selector1", + "type": "discrete", + "parameters": [ + { + "key": "eventhubs", + "value": "[\"*\"]" + }, + { + "key": "desiredState", + "value": "Disabled" + } + ], + "name": "urn:csci:microsoft:eventHub:changeEventHubState/1.0" + } + ] +} +``` |
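For comparison with the discrete sample above, a continuous agent-based fault from earlier in this article (CPU pressure) uses the same branch and action shape plus a duration. The following is a hedged sketch with illustrative branch, selector, duration, and pressure values only:

```json
{
  "name": "Branch2",
  "actions": [
    {
      "selectorId": "Selector1",
      "type": "continuous",
      "duration": "PT10M",
      "parameters": [
        {
          "key": "pressureLevel",
          "value": "95"
        }
      ],
      "name": "urn:csci:microsoft:agent:cpuPressure/1.0"
    }
  ]
}
```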
chaos-studio | Chaos Studio Fault Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-providers.md | +More information about role assignments can be found on the [Azure built-in roles page](../role-based-access-control/built-in-roles.md). + | Resource type | Target name/type | Suggested role assignment | |-|--|-|-| Microsoft.Cache/Redis (service-direct) | Microsoft-AzureCacheForRedis | Redis Cache Contributor | -| Microsoft.ClassicCompute/domainNames (service-direct) | Microsoft-DomainNames | Classic Virtual Machine Contributor | -| Microsoft.Compute/virtualMachines (agent-based) | Microsoft-Agent | Reader | -| Microsoft.Compute/virtualMachineScaleSets (agent-based) | Microsoft-Agent | Reader | -| Microsoft.Compute/virtualMachines (service-direct) | Microsoft-VirtualMachine | Virtual Machine Contributor | -| Microsoft.Compute/virtualMachineScaleSets (service-direct) | Microsoft-VirtualMachineScaleSet | Virtual Machine Contributor | -| Microsoft.ContainerService/managedClusters (service-direct) | Microsoft-AzureKubernetesServiceChaosMesh | Azure Kubernetes Service Cluster Admin Role | -| Microsoft.DocumentDb/databaseAccounts (CosmosDB, service-direct) | Microsoft-CosmosDB | Azure Cosmos DB Operator | -| Microsoft.Insights/autoscalesettings (service-direct) | Microsoft-AutoScaleSettings | Web Plan Contributor | -| Microsoft.KeyVault/vaults (service-direct) | Microsoft-KeyVault | Azure Key Vault Contributor | -| Microsoft.Network/networkSecurityGroups (service-direct) | Microsoft-NetworkSecurityGroup | Network Contributor | -| Microsoft.Web/sites (service-direct) | Microsoft-AppService | Website Contributor | +| Microsoft.Cache/Redis (service-direct) | Microsoft-AzureCacheForRedis | [Redis Cache Contributor](../role-based-access-control/built-in-roles.md#redis-cache-contributor) | +| Microsoft.ClassicCompute/domainNames (service-direct) | Microsoft-DomainNames | [Classic Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#classic-virtual-machine-contributor) | +| Microsoft.Compute/virtualMachines (agent-based) | Microsoft-Agent | [Reader](../role-based-access-control/built-in-roles.md#reader) | +| Microsoft.Compute/virtualMachineScaleSets (agent-based) | Microsoft-Agent | [Reader](../role-based-access-control/built-in-roles.md#reader) | +| Microsoft.Compute/virtualMachines (service-direct) | Microsoft-VirtualMachine | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) | +| Microsoft.Compute/virtualMachineScaleSets (service-direct) | Microsoft-VirtualMachineScaleSet | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) | +| Microsoft.ContainerService/managedClusters (service-direct) | Microsoft-AzureKubernetesServiceChaosMesh | [Azure Kubernetes Service Cluster Admin Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) | +| Microsoft.DocumentDb/databaseAccounts (Cosmos DB, service-direct) | Microsoft-Cosmos DB | [Azure Cosmos DB Operator](../role-based-access-control/built-in-roles.md#cosmos-db-operator) | +| Microsoft.Insights/autoscalesettings (service-direct) | Microsoft-AutoScaleSettings | [Web Plan Contributor](../role-based-access-control/built-in-roles.md#web-plan-contributor) | +| Microsoft.KeyVault/vaults (service-direct) | Microsoft-KeyVault | [Azure Key Vault Contributor](../role-based-access-control/built-in-roles.md#key-vault-contributor) | +| 
Microsoft.Network/networkSecurityGroups (service-direct) | Microsoft-NetworkSecurityGroup | [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) | +| Microsoft.Web/sites (service-direct) | Microsoft-AppService | [Website Contributor](../role-based-access-control/built-in-roles.md#website-contributor) | +| Microsoft.ServiceBus/namespaces (service-direct) | Microsoft-ServiceBus | [Azure Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner) | +| Microsoft.EventHub/namespaces (service-direct) | Microsoft-EventHub | [Azure Event Hubs Data Owner](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-owner) | |
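As a hedged sketch of applying one of these suggested role assignments, the chaos experiment's managed identity can be granted the role on the target resource with the Azure CLI; the principal ID and resource ID below are placeholders to substitute.

```azurecli
# Sketch: grant the experiment's managed identity the suggested role on a target virtual machine.
# <experiment-principal-id>, <subscription-id>, <resource-group>, and <vm-name> are placeholders.
az role assignment create \
  --assignee <experiment-principal-id> \
  --role "Virtual Machine Contributor" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>
```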
chaos-studio | Chaos Studio Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-versions.md | Chaos Studio currently tests with the following version combinations. | Chaos Studio fault version | Kubernetes version | Chaos Mesh version | Notes | |::|::|::|::|+| 2.1 | 1.27 | 2.6.3 | | | 2.1 | 1.25.11 | 2.5.1 | | The *Chaos Studio fault version* column refers to the individual fault version for each AKS Chaos Mesh fault used in the experiment JSON, for example `urn:csci:microsoft:azureKubernetesServiceChaosMesh:podChaos/2.1`. If a past version of the corresponding Chaos Studio fault remains available from the Chaos Studio API (for example, `...podChaos/1.0`), it is within support. |
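Before running AKS Chaos Mesh faults, it can help to confirm that your cluster and Chaos Mesh installation match one of the tested combinations above. A minimal sketch, assuming Chaos Mesh was installed with Helm into the `chaos-testing` namespace; the resource group and cluster names are placeholders.

```bash
# Check the cluster's Kubernetes version against the tested combinations.
az aks show \
  --resource-group myResourceGroup \
  --name myAksCluster \
  --query kubernetesVersion \
  --output tsv

# Check the installed Chaos Mesh chart version (assumes a Helm install in chaos-testing).
helm list --namespace chaos-testing
```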
cloud-shell | Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md | description: Overview of features in Azure Cloud Shell ms.contributor: jahelmic Previously updated : 12/06/2023 Last updated : 02/15/2024 tags: azure-resource-manager Title: Azure Cloud Shell features # Features & tools for Azure Cloud Shell -Azure Cloud Shell is a browser-based shell experience to manage and develop Azure resources. --Cloud Shell offers a browser-accessible, preconfigured shell experience for managing Azure -resources without the overhead of installing, versioning, and maintaining a machine yourself. --Cloud Shell allocates machines on a per-request basis and as a result machine state doesn't -persist across sessions. Since Cloud Shell is built for interactive sessions, shells automatically -terminate after 20 minutes of shell inactivity. +Azure Cloud Shell is a browser-based terminal that provides an authenticated, preconfigured shell +experience for managing Azure resources without the overhead of installing and maintaining a machine +yourself. Azure Cloud Shell runs on **Azure Linux**, Microsoft's Linux distribution for cloud infrastructure-edge products and services. Microsoft internally compiles all the packages included in the **Azure -Linux** repository to help guard against supply chain attacks. +edge products and services. You can choose Bash or PowerShell as your default shell. ## Features -### Secure automatic authentication +### Secure environment ++Microsoft internally compiles all the packages included in the **Azure Linux** repository to help +guard against supply chain attacks. For more information or to request changes to the **Azure +Linux** image, see the [Cloud Shell GitHub repository][24]. -Cloud Shell securely and automatically authenticates account access for the Azure CLI and Azure -PowerShell. +Cloud Shell automatically authenticates your Azure account to allow secure access for Azure CLI, +Azure PowerShell, and other cloud management tools. ### $HOME persistence across sessions -To persist files across sessions, Cloud Shell walks you through attaching an Azure file share on -first launch. Once completed, Cloud Shell will automatically attach your storage (mounted as -`$HOME\clouddrive`) for all future sessions. Additionally, your `$HOME` directory is persisted as an -.img in your Azure File share. Files outside of `$HOME` and machine state aren't persisted across -sessions. Learn more about [Persisting files in Cloud Shell][09]. +When you start Cloud Shell for the first time, you have the option of using Cloud Shell with or +without an attached storage account. Choosing to continue without storage is the fastest way to +start using Cloud Shell. In Cloud Shell, this is known as an _ephemeral session_. When you close the +Cloud Shell window, all files you saved are deleted and don't persist across sessions. ++To persist files across sessions, you can choose to mount a storage account. Cloud Shell +automatically attaches your storage (mounted as `$HOME\clouddrive`) for all future sessions. +Additionally, your `$HOME` directory is persisted as an `.img` file in your Azure File share. The +machine state and files outside of `$HOME` aren't persisted across sessions. Learn more about +[Persisting files in Cloud Shell][32]. Use best practices when storing secrets such as SSH keys. You can use Azure Key Vault to securely-store and retrieve your keys. For more information, see [Manage Key Vault using the Azure CLI][02]. 
+store and retrieve your keys. For more information, see [Manage Key Vault using the Azure CLI][05]. ### Azure drive (Azure:) PowerShell in Cloud Shell provides the Azure drive (`Azure:`). You can switch to the Azure drive with `cd Azure:` and back to your home directory with `cd ~`. The Azure drive enables easy discovery and navigation of Azure resources such as Compute, Network, Storage etc. similar to-filesystem navigation. You can continue to use the familiar [Azure PowerShell cmdlets][14] to manage -these resources regardless of the drive you are in. Any changes made to the Azure resources, either -made directly in Azure portal or through Azure PowerShell cmdlets, are reflected in the Azure drive. -You can run `dir -Force` to refresh your resources. --![Screenshot of an Azure Cloud Shell being initialized and a list of directory resources.][06] --### Manage Exchange Online +filesystem navigation. You can continue to use the familiar [Azure PowerShell cmdlets][09] to manage +these resources regardless of the drive you are in. -PowerShell in Cloud Shell contains the ExchangeOnlineManagement module. Run the following command to -get a list of Exchange cmdlets. --```powershell -Get-Command -Module ExchangeOnlineManagement -``` --For more information about using the ExchangeOnlineManagement module, see -[Exchange Online PowerShell][15]. +> [!NOTE] +> Any changes made to the Azure resources, either made directly in Azure portal or through Azure +> PowerShell cmdlets, are reflected in the `Azure:` drive. However, you must run `dir -Force` to +> refresh the view of your resources in the `Azure:`. ### Deep integration with open source tooling Cloud Shell includes preconfigured authentication for open source tools such as Terraform, Ansible, and Chef InSpec. For more information, see the following articles: -- [Run Ansible playbook][11]-- [Manage your Azure dynamic inventories][10]-- [Install and configure Terraform][12]+- [Run Ansible playbook][03] +- [Manage your Azure dynamic inventories][02] +- [Install and configure Terraform][04] -### Preinstalled tools +## Preinstalled tools -The most commonly used tools are preinstalled in Cloud Shell. +The most commonly used tools are preinstalled in Cloud Shell. If you're using PowerShell, use the +`Get-PackageVersion` command to see a more complete list of tools and versions. If you're using +Bash, use the `tdnf list` command. -#### Azure tools +### Azure tools Cloud Shell comes with the following Azure command-line tools preinstalled: -| Tool | Version | Command | -| - | -- | | -| [Azure CLI][13] | 2.55.0 | `az --version` | -| [Azure PowerShell][14] | 11.1.0 | `Get-Module Az -ListAvailable` | -| [AzCopy][04] | 10.15.0 | `azcopy --version` | -| [Azure Functions CLI][01] | 4.0.5390 | `func --version` | -| [Service Fabric CLI][03] | 11.2.0 | `sfctl --version` | -| [Batch Shipyard][18] | 3.9.1 | `shipyard --version` | -| [blobxfer][19] | 1.11.0 | `blobxfer --version` | --You can verify the version of the language using the command listed in the table. -Use the `Get-PackageVersion` to see a more complete list of tools and versions. 
--#### Linux tools --- bash-- zsh-- sh-- tmux-- dig--#### Text editors +- [Azure CLI][08] +- [Azure PowerShell][09] +- [Az.Tools.Predictor][10] +- [AzCopy][07] +- [Azure Functions CLI][01] +- [Service Fabric CLI][06] +- [Batch Shipyard][17] +- [blobxfer][18] ++### Other Microsoft services ++- [Office 365 CLI][28] +- [Exchange Online PowerShell][11] +- A basic set of [Microsoft Graph PowerShell][12] modules + - Microsoft.Graph.Applications + - Microsoft.Graph.Authentication + - Microsoft.Graph.Groups + - Microsoft.Graph.Identity.DirectoryManagement + - Microsoft.Graph.Identity.Governance + - Microsoft.Graph.Identity.SignIns + - Microsoft.Graph.Users.Actions + - Microsoft.Graph.Users.Functions +- [MicrosoftPowerBIMgmt][13] PowerShell modules +- [SqlServer][14] PowerShell modules ++### Productivity tools ++Linux tools ++- `bash` +- `zsh` +- `sh` +- `tmux` +- `dig` ++Text editors - Cloud Shell editor (code) - vim - nano - emacs -#### Source control +### Cloud management tools -- Git-- GitHub CLI+- [Docker Desktop][23] +- [Kubectl][27] +- [Helm][26] +- [D2iQ Kubernetes Platform CLI][22] +- [Cloud Foundry CLI][21] +- [Terraform][31] +- [Ansible][30] +- [Chef InSpec][20] +- [Puppet Bolt][29] +- [HashiCorp Packer][19] -#### Build tools +## Developer tools -- make-- maven-- npm-- pip+Build tools -#### Containers +- `make` +- `maven` +- `npm` +- `pip` -- [Docker Desktop][24]-- [Kubectl][29]-- [Helm][28]-- [D2iQ Kubernetes Platform CLI][23]+Source control ++- Git +- GitHub CLI -#### Databases +Database tools - MySQL client - PostgreSql client-- [sqlcmd Utility][17]-- [mssql-scripter][27]--#### Other --- iPython Client-- [Cloud Foundry CLI][22]-- [Terraform][33]-- [Ansible][32]-- [Chef InSpec][21]-- [Puppet Bolt][31]-- [HashiCorp Packer][20]-- [Office 365 CLI][30]--### Preinstalled developer languages --Cloud Shell comes with the following languages preinstalled: +- [sqlcmd Utility][15] +- [mssql-scripter][25] -| Language | Version | Command | -| - | - | | -| .NET Core | [7.0.400][25] | `dotnet --version` | -| Go | 1.19.11 | `go version` | -| Java | 17.0.8 | `java --version` | -| Node.js | 16.20.1 | `node --version` | -| PowerShell | [7.4.0][16] | `pwsh -Version` | -| Python | 3.9.14 | `python --version` | -| Ruby | 3.1.4p223 | `ruby --version` | +Programming languages -You can verify the version of the language using the command listed in the table. 
+- .NET Core 7.0 +- PowerShell 7.4 +- Node.js +- Java +- Python 3.9 +- Ruby +- Go ## Next steps -- [Cloud Shell Quickstart][05]-- [Learn about Azure CLI][13]-- [Learn about Azure PowerShell][14]+- [Cloud Shell Quickstart][16] +- [Learn about Azure CLI][08] +- [Learn about Azure PowerShell][09] <!-- link references -->-[01]: ../azure-functions/functions-run-local.md -[02]: ../key-vault/general/manage-with-cli2.md#prerequisites -[03]: ../service-fabric/service-fabric-cli.md -[04]: ../storage/common/storage-use-azcopy-v10.md -[05]: ./get-started.md -[06]: ./media/features/azure-drive.png -[09]: ./persisting-shell-storage.md -[10]: /azure/developer/ansible/dynamic-inventory-configure -[11]: /azure/developer/ansible/getting-started-cloud-shell -[12]: /azure/developer/terraform/quickstart-configure -[13]: /cli/azure/ -[14]: /powershell/azure -[15]: /powershell/exchange/exchange-online-powershell -[16]: /powershell/scripting/whats-new/what-s-new-in-powershell-74 -[17]: /sql/tools/sqlcmd-utility -[18]: https://batch-shipyard.readthedocs.io/en/latest/ -[19]: https://blobxfer.readthedocs.io/en/latest/ -[20]: https://developer.hashicorp.com/packer/docs -[21]: https://docs.chef.io/ -[22]: https://docs.cloudfoundry.org/cf-cli/ -[23]: https://docs.d2iq.com/dkp/2.6/azure-infrastructure -[24]: https://docs.docker.com/desktop/ -[25]: https://dotnet.microsoft.com/download/dotnet/7.0 -[27]: https://github.com/microsoft/mssql-scripter/blob/dev/doc/usage_guide.md -[28]: https://helm.sh/docs/ -[29]: https://kubernetes.io/docs/reference/kubectl/ -[30]: https://pnp.github.io/office365-cli/ -[31]: https://puppet.com/docs/bolt/latest/bolt.html -[32]: https://www.ansible.com/microsoft-azure -[33]: https://www.terraform.io/docs/providers/azurerm/ +[01]: /azure/azure-functions/functions-run-local +[02]: /azure/developer/ansible/dynamic-inventory-configure +[03]: /azure/developer/ansible/getting-started-cloud-shell +[04]: /azure/developer/terraform/quickstart-configure +[05]: /azure/key-vault/general/manage-with-cli2#prerequisites +[06]: /azure/service-fabric/service-fabric-cli +[07]: /azure/storage/common/storage-use-azcopy-v10 +[08]: /cli/azure/ +[09]: /powershell/azure +[10]: /powershell/azure/predictor-overview +[11]: /powershell/exchange/exchange-online-powershell +[12]: /powershell/module/?term=Microsoft.Graph +[13]: /powershell/module/?term=MicrosoftPowerBIMgmt +[14]: /powershell/module/sqlserver +[15]: /sql/tools/sqlcmd-utility +[16]: get-started.md +[17]: https://batch-shipyard.readthedocs.io/en/latest/ +[18]: https://blobxfer.readthedocs.io/en/latest/ +[19]: https://developer.hashicorp.com/packer/docs +[20]: https://docs.chef.io/ +[21]: https://docs.cloudfoundry.org/cf-cli/ +[22]: https://docs.d2iq.com/dkp/2.6/azure-infrastructure +[23]: https://docs.docker.com/desktop/ +[24]: https://github.com/Azure/CloudShell +[25]: https://github.com/microsoft/mssql-scripter/blob/dev/doc/usage_guide.md +[26]: https://helm.sh/docs/ +[27]: https://kubernetes.io/docs/reference/kubectl/ +[28]: https://pnp.github.io/office365-cli/ +[29]: https://puppet.com/docs/bolt/latest/bolt.html +[30]: https://www.ansible.com/microsoft-azure +[31]: https://www.terraform.io/docs/providers/azurerm/ +[32]: persisting-shell-storage.md |
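As a quick way to confirm what the Azure Linux image ships with, you can spot-check the preinstalled tools from a Bash Cloud Shell session. A minimal sketch; output varies by image release.

```bash
# List preinstalled packages on the Azure Linux image, then check a few tool versions.
tdnf list | head

az version
terraform version
kubectl version --client
```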
communication-services | Call Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md | The following list presents the set of features that are currently available in | | Mute participant | ✔️ | ✔️ | ✔️ | ✔️ | | | Remove one or more endpoints from an existing call| ✔️ | ✔️ | ✔️ | ✔️ | | | Blind Transfer* a 1:1 call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ |-| | Blind Transfer* a participant from group call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ | +| | Blind Transfer* a participant from a group call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ | | | Hang up a call (remove the call leg) | ✔️ | ✔️ | ✔️ | ✔️ | | | Terminate a call (remove all participants and end call) | ✔️ | ✔️ | ✔️ | ✔️ | | | Cancel media operations | ✔️ | ✔️ | ✔️ | ✔️ |+| | Share [custom info](../../how-tos/call-automation/custom-context.md) (via VoIP or SIP headers) with endpoints when adding them to a call or transferring a call to them | ✔️ | ✔️ | ✔️ | ✔️ | | Query scenarios | Get the call state | ✔️ | ✔️ | ✔️ | ✔️ | | | Get a participant in a call | ✔️ | ✔️ | ✔️ | ✔️ | | | List all participants in a call | ✔️ | ✔️ | ✔️ | ✔️ | |
communication-services | Video Constraints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-constraints.md | +The Video Constraints API is a powerful tool that enables developers to control the video quality from within their video calls. With this API, developers can set maximum video resolutions, frame rate, and bitrate used so that the call is optimized for the user's device and network conditions. The ACS video engine is optimized to allow the video quality to change dynamically based on the device's ability and network quality. But there might be certain scenarios where you would want to have tighter control of the video quality that end users experience. For instance, there may be situations where the highest video quality isn't a priority, or you may want to limit the video bandwidth usage in the application. To support those use cases, you can use the Video Constraints API to have tighter control over video quality. -The Video Constraints API is a powerful tool that enables developers to control the video quality from within their video calls. With this API, developers can set maximum video resolutions, frame rate, and bitrate used so that the call is optimized for the user's device and network conditions. The Azure Communication Services video engine is optimized to allow the video quality to change dynamically based on devices ability and network quality. But there might be certain scenarios where you would want to have tighter control of the video quality that end users experience. For instance, there may be situations where the highest video quality is not a priority, or you may want to limit the video bandwidth usage in the application. To support those use cases, you can use the Video Constraints API to have tighter control over video quality. --Another benefit of the Video Constraints API is that it enables developers to optimize the video call for different devices. For example, if a user is using an older device with limited processing power, developers can set constraints on the video resolution to ensure that the video call runs smoothly on that device --Azure Communication Services Web Calling SDK supports setting the maximum video resolution, framerate, or bitrate that a client sends. The sender video constraints are supported on Desktop browsers (Chrome, Edge, Firefox) and when using iOS Safari mobile browser or Android Chrome mobile browser. --The native Calling SDK (Android, iOS, Windows) supports setting the maximum values of video resolution and framerate for outgoing video streams and setting the maximum resolution for incoming video streams. These constraints can be set at the start of the call and during the call. +Another benefit of the Video Constraints API is that it enables developers to optimize the video call for different devices. For example, if a user is using an older device with limited processing power, developers can set constraints on the video resolution to ensure that the video call runs smoothly on that device. 
## Supported constraints | Platform | Supported Constraints | | -- | -- |-| Web | Outgoing video: resolution, framerate, bitrate | -| Android | Incoming video: resolution<br />Outgoing video: resolution, framerate | -| iOS | Incoming video: resolution<br />Outgoing video: resolution, framerate | -| Windows | Incoming video: resolution<br />Outgoing video: resolution, framerate | +| **Web** | **Incoming video**: resolution<br />**Outgoing video**: resolution, framerate, bitrate | +| **Android** | **Incoming video**: resolution<br />**Outgoing video**: resolution, framerate | +| **iOS** | **Incoming video**: resolution<br />**Outgoing video**: resolution, framerate | +| **Windows** | **Incoming video**: resolution<br />**Outgoing video**: resolution, framerate | ## Next steps For more information, see the following articles: |
communication-services | Record Calls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/record-calls.md | zone_pivot_groups: acs-plat-web-ios-android-windows [!INCLUDE [Public Preview Disclaimer](../../includes/public-preview-include-document.md)] -[Call recording](../../concepts/voice-video-calling/call-recording.md), lets your users record their calls made with Azure Communication Services. Here we learn how to manage recording on the client side. Before this can work, you'll need to set up [server side](../../quickstarts/voice-video-calling/call-recording-sample.md) recording. +[Call recording](../../concepts/voice-video-calling/call-recording.md) lets your users record calls that they make with Azure Communication Services. In this article, you learn how to manage recording on the client side. Before you start, you need to set up recording on the [server side](../../quickstarts/voice-video-calling/call-recording-sample.md). ## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). - A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md).-- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)+- Optional: Completion of the [quickstart to add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md). ::: zone pivot="platform-web" ::: zone-end ::: zone pivot="platform-android" ::: zone-end ::: zone pivot="platform-ios" ::: zone-end ::: zone pivot="platform-windows" ::: zone-end -### Compliance Recording -Compliance recording is Teams policy based recording that could be enabled using this tutorial: [Introduction to Teams policy-based recording for callings](/microsoftteams/teams-recording-policy).<br> -Policy based recording will be started automatically when user with this policy will join a call. To get notification from Azure Communication Service about recording - we can use Cloud Recording section from this article. +### Compliance recording ++Compliance recording is recording that's based on Microsoft Teams policy. You can enable it by using this tutorial: [Introduction to Teams policy-based recording for callings](/microsoftteams/teams-recording-policy). ++Policy-based recording starts automatically when a user who has the policy joins a call. To get a notification from Azure Communication Services about recording, use the following code: ```js const callRecordingApi = call.feature(Features.Recording); const isComplianceRecordingActiveChangedHandler = () => { callRecordingApi.on('isRecordingActiveChanged', isComplianceRecordingActiveChangedHandler); ``` -Compliance recording could be implemented by using custom recording bot [GitHub Example](https://github.com/microsoftgraph/microsoft-graph-comms-samples/tree/a3943bafd73ce0df780c0e1ac3428e3de13a101f/Samples/BetaSamples/LocalMediaSamples/ComplianceRecordingBot).<br> +You can also implement compliance recording by using a custom recording bot. 
See the [GitHub example](https://github.com/microsoftgraph/microsoft-graph-comms-samples/tree/a3943bafd73ce0df780c0e1ac3428e3de13a101f/Samples/BetaSamples/LocalMediaSamples/ComplianceRecordingBot). ## Next steps+ - [Learn how to manage calls](./manage-calls.md) - [Learn how to manage video](./manage-video.md) - [Learn how to transcribe calls](./call-transcription.md) |
communication-services | Get Started Teams Auto Attendant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant.md | If you want to clean up and remove a Communication Services subscription, you ca For more information, see the following articles: -- Check out our [calling hero sample](../../samples/calling-hero-sample.md)-- Get started with the [UI Library](../ui-library/get-started-composites.md)+- Get started with [UI Calling to Teams Voice Apps](../../tutorials/calling-widget/calling-widget-tutorial.md) - Learn about [Calling SDK capabilities](./getting-started-with-calling.md) - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md) |
communication-services | Get Started Teams Call Queue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue.md | If you want to clean up and remove a Communication Services subscription, you ca For more information, see the following articles: -- Check out our [calling hero sample](../../samples/calling-hero-sample.md)-- Get started with the [UI Library](../ui-library/get-started-composites.md)+- Get started with [UI Calling to Teams Voice Apps](../../tutorials/calling-widget/calling-widget-tutorial.md) - Learn about [Calling SDK capabilities](./getting-started-with-calling.md) - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md) |
communication-services | Calling Widget Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-tutorial.md | If you wish to try it out, you can download the code from [GitHub](https://githu Following this tutorial will: - Allow you to control your customers audio and video experience depending on your customer scenario-- Teach you how to build a simple widget for starting calls on your webapp using the UI library.+- Teach you how to build a widget for starting calls on your webapp using the UI library. ## Prerequisites Following this tutorial will: ### Set up the project -Only use this step if you are creating a new application. +Only use this step if you're creating a new application. To set up the react App, we use the `create-react-app` command line tool. This tool-creates an easy to run TypeScript application powered by React. This command creates a simple react application using TypeScript. +creates an easy to run TypeScript application powered by React. This command creates a react application using TypeScript. ```bash # Create an Azure Communication Services App powered by React. cd ui-library-calling-widget-app ### Get your dependencies -Then you need to update the dependency array in the `package.json` to include some packages from Azure Communication Services for the widget experience we are going to build to work: +Then you need to update the dependency array in the `package.json` to include some packages from Azure Communication Services for the widget experience we're going to build to work: ```json-"@azure/communication-calling": "1.19.1-beta.2", -"@azure/communication-chat": "1.4.0-beta.1", -"@azure/communication-react": "1.10.0-beta.1", +"@azure/communication-calling": "1.22.1", +"@azure/communication-chat": "1.4.0", +"@azure/communication-react": "1.13.0", "@azure/communication-calling-effects": "1.0.1", "@azure/communication-common": "2.3.0", "@fluentui/react-icons": "~2.0.203", Your `App.tsx` file should look like this: `src/App.tsx` ```ts+import "./App.css"; +import { + CommunicationIdentifier, + MicrosoftTeamsAppIdentifier, +} from "@azure/communication-common"; +import { + Spinner, + Stack, + initializeIcons, + registerIcons, + Text, +} from "@fluentui/react"; +import { CallAdd20Regular, Dismiss20Regular } from "@fluentui/react-icons"; +import logo from "./logo.svg"; -import './App.css'; -import { CommunicationIdentifier, MicrosoftTeamsAppIdentifier } from '@azure/communication-common'; -import { Spinner, Stack, initializeIcons, registerIcons, Text } from '@fluentui/react'; -import { CallAdd20Regular, Dismiss20Regular } from '@fluentui/react-icons'; -import logo from './logo.svg'; --import { CallingWidgetComponent } from './components/CallingWidgetComponent'; +import { CallingWidgetComponent } from "./components/CallingWidgetComponent"; registerIcons({ icons: { dismiss: <Dismiss20Regular />, callAdd: <CallAdd20Regular /> }, function App() { /** * Token for local user. */- const token = "<Enter your Azure Communication Services token here>"; + const token = "<Enter your ACS Token here>"; /** * User identifier for local user. 
*/ const userId: CommunicationIdentifier = {- communicationUserId: "<Enter your Azure Communication Services ID here>", + communicationUserId: "Enter your ACS Id here", }; /** * Enter your Teams voice app identifier from the Teams admin center here */ const teamsAppIdentifier: MicrosoftTeamsAppIdentifier = {- teamsAppId: '<Enter your teams voice app ID here>', cloud: 'public' - } + teamsAppId: "<Enter your Teams Voice app id here>", + cloud: "public", + }; const widgetParams = { userId, function App() { teamsAppIdentifier, }; - if (!token || !userId || !teamsAppIdentifier) { return (- <Stack verticalAlign='center' style={{ height: '100%', width: '100%' }}> - <Spinner label={'Getting user credentials from server'} ariaLive="assertive" labelPosition="top" />; + <Stack verticalAlign="center" style={{ height: "100%", width: "100%" }}> + <Spinner + label={"Getting user credentials from server"} + ariaLive="assertive" + labelPosition="top" + /> + ; </Stack>- ) -+ ); } - return ( <Stack style={{ height: "100%", width: "100%", padding: "3rem" }} tokens={{ childrenGap: "1.5rem" }} >- <Stack tokens={{ childrenGap: '1rem' }} style={{ margin: "auto" }}> + <Stack tokens={{ childrenGap: "1rem" }} style={{ margin: "auto" }}> <Stack style={{ padding: "3rem" }} horizontal function App() { </Stack> <Text>- Welcome to a Calling Widget sample for the Azure Communication Services UI - Library. Sample has the ability to connect you through Teams voice apps to a agent to help you. + Welcome to a Calling Widget sample for the Azure Communication + Services UI Library. Sample has the ability to connect you through + Teams voice apps to a agent to help you. </Text> <Text> As a user all you need to do is click the widget below, enter your function App() { action the <b>start call</b> button. </Text> </Stack>- <Stack horizontal tokens={{ childrenGap: '1.5rem' }} style={{ overflow: 'hidden', margin: 'auto' }}> + <Stack + horizontal + tokens={{ childrenGap: "1.5rem" }} + style={{ overflow: "hidden", margin: "auto" }} + > <CallingWidgetComponent widgetAdapterArgs={widgetParams} onRenderLogo={() => { return ( <img- style={{ height: '4rem', width: '4rem', margin: 'auto' }} + style={{ height: "4rem", width: "4rem", margin: "auto" }} src={logo} alt="logo" /> export default App; ``` -In this snippet we register two new icons `<Dismiss20Regular/>` and `<CallAdd20Regular>`. These new icons are used inside the widget component that we are creating in the next section. +In this snippet, we register two new icons `<Dismiss20Regular/>` and `<CallAdd20Regular>`. These new icons are used inside the widget component that we're creating in the next section. ### Create the widget Now we need to make a widget that can show in three different modes: - Waiting: This widget state is how the component will be in before and after a call is made - Setup: This state is when the widget asks for information from the user like their name.-- In a call: The widget is replaced here with the UI library Call Composite. This is the mode when the user is calling the Voice app or talking with an agent.+- In a call: The widget is replaced here with the UI library Call Composite. This widget mode is when the user is calling the Voice app or talking with an agent. -Lets create a folder called `src/components`. In this folder make a new file called `CallingWidgetComponent.tsx`. This file should look like the following snippet: +Lets create a folder called `src/components`. In this folder, make a new file called `CallingWidgetComponent.tsx`. 
This file should look like the following snippet: `CallingWidgetComponent.tsx` ```ts-import { IconButton, PrimaryButton, Stack, TextField, useTheme, Checkbox, Icon } from '@fluentui/react'; -import React, { useState } from 'react'; import {- callingWidgetSetupContainerStyles, - checkboxStyles, - startCallButtonStyles, - callingWidgetContainerStyles, - callIconStyles, - logoContainerStyles, - collapseButtonStyles, - callingWidgetInCallContainerStyles -} from '../styles/CallingWidgetComponent.styles'; -import { AzureCommunicationTokenCredential, CommunicationIdentifier, MicrosoftTeamsAppIdentifier } from '@azure/communication-common'; + IconButton, + PrimaryButton, + Stack, + TextField, + useTheme, + Checkbox, + Icon, + Spinner, +} from "@fluentui/react"; +import React, { useEffect, useRef, useState } from "react"; +import { + callingWidgetSetupContainerStyles, + checkboxStyles, + startCallButtonStyles, + callingWidgetContainerStyles, + callIconStyles, + logoContainerStyles, + collapseButtonStyles, +} from "../styles/CallingWidgetComponent.styles"; ++import { + AzureCommunicationTokenCredential, + CommunicationUserIdentifier, + MicrosoftTeamsAppIdentifier, +} from "@azure/communication-common"; import {- CallAdapter, - CallComposite, - useAzureCommunicationCallAdapter, - AzureCommunicationCallAdapterArgs -} from '@azure/communication-react'; -import { useCallback, useMemo } from 'react'; + CallAdapter, + CallAdapterState, + CallComposite, + CommonCallAdapterOptions, + StartCallIdentifier, + createAzureCommunicationCallAdapter, +} from "@azure/communication-react"; +// lets add to our react imports as well +import { useMemo } from "react"; ++import { callingWidgetInCallContainerStyles } from "../styles/CallingWidgetComponent.styles"; /** * Properties needed for our widget to start a call. */ export type WidgetAdapterArgs = {- token: string; - userId: CommunicationIdentifier; - teamsAppIdentifier: MicrosoftTeamsAppIdentifier; + token: string; + userId: CommunicationUserIdentifier; + teamsAppIdentifier: MicrosoftTeamsAppIdentifier; }; export interface CallingWidgetComponentProps {- /** - * arguments for creating an AzureCommunicationCallAdapter for your Calling experience - */ - widgetAdapterArgs: WidgetAdapterArgs; - /** - * Custom render function for displaying logo. - * @returns - */ - onRenderLogo?: () => JSX.Element; + /** + * arguments for creating an AzureCommunicationCallAdapter for your Calling experience + */ + widgetAdapterArgs: WidgetAdapterArgs; + /** + * Custom render function for displaying logo. 
+ * @returns + */ + onRenderLogo?: () => JSX.Element; } /** export interface CallingWidgetComponentProps { * @param props */ export const CallingWidgetComponent = (- props: CallingWidgetComponentProps + props: CallingWidgetComponentProps ): JSX.Element => {- const { onRenderLogo, widgetAdapterArgs } = props; -- const [widgetState, setWidgetState] = useState<'new' | 'setup' | 'inCall'>('new'); - const [displayName, setDisplayName] = useState<string>(); - const [consentToData, setConsentToData] = useState<boolean>(false); - const [useLocalVideo, setUseLocalVideo] = useState<boolean>(false); + const { onRenderLogo, widgetAdapterArgs } = props; - const theme = useTheme(); + const [widgetState, setWidgetState] = useState<"new" | "setup" | "inCall">( + "new" + ); + const [displayName, setDisplayName] = useState<string>(); + const [consentToData, setConsentToData] = useState<boolean>(false); + const [useLocalVideo, setUseLocalVideo] = useState<boolean>(false); + const [adapter, setAdapter] = useState<CallAdapter>(); ++ const callIdRef = useRef<string>(); ++ const theme = useTheme(); ++ // add this before the React template + const credential = useMemo(() => { + try { + return new AzureCommunicationTokenCredential(widgetAdapterArgs.token); + } catch { + console.error("Failed to construct token credential"); + return undefined; + } + }, [widgetAdapterArgs.token]); ++ const adapterOptions: CommonCallAdapterOptions = useMemo( + () => ({ + callingSounds: { + callEnded: { url: "/sounds/callEnded.mp3" }, + callRinging: { url: "/sounds/callRinging.mp3" }, + callBusy: { url: "/sounds/callBusy.mp3" }, + }, + }), + [] + ); - const credential = useMemo(() => { - try { - return new AzureCommunicationTokenCredential(widgetAdapterArgs.token); - } catch { - console.error('Failed to construct token credential'); - return undefined; + const callAdapterArgs = useMemo(() => { + return { + userId: widgetAdapterArgs.userId, + credential: credential, + targetCallees: [ + widgetAdapterArgs.teamsAppIdentifier, + ] as StartCallIdentifier[], + displayName: displayName, + options: adapterOptions, + }; + }, [ + widgetAdapterArgs.userId, + widgetAdapterArgs.teamsAppIdentifier.teamsAppId, + credential, + displayName, + ]); ++ useEffect(() => { + if (adapter) { + adapter.on("callEnded", () => { + /** + * We only want to reset the widget state if the call that ended is the same as the current call. 
+ */ + if ( + adapter.getState().acceptedTransferCallState && + adapter.getState().acceptedTransferCallState?.id !== callIdRef.current + ) { + return; }- }, [widgetAdapterArgs.token]); -- const callAdapterArgs = useMemo(() => { - return { - userId: widgetAdapterArgs.userId, - credential: credential, - locator: {participantIds: [`28:orgid:${widgetAdapterArgs.teamsAppIdentifier.teamsAppId}`]}, - displayName: displayName + setDisplayName(undefined); + setWidgetState("new"); + setConsentToData(false); + setAdapter(undefined); + adapter.dispose(); + }); ++ adapter.on("transferAccepted", (e) => { + console.log("transferAccepted", e); + }); ++ adapter.onStateChange((state: CallAdapterState) => { + if (state?.call?.id && callIdRef.current !== state?.call?.id) { + callIdRef.current = state?.call?.id; + console.log(`Call Id: ${callIdRef.current}`); }- }, [widgetAdapterArgs.userId, widgetAdapterArgs.teamsAppIdentifier.teamsAppId, credential, displayName]); -- const afterCreate = useCallback(async (adapter: CallAdapter): Promise<CallAdapter> => { - adapter.on('callEnded', () => { - setDisplayName(undefined); - setWidgetState('new'); - }); - return adapter; - }, []) -- const adapter = useAzureCommunicationCallAdapter(callAdapterArgs as AzureCommunicationCallAdapterArgs, afterCreate); -- // Widget template for when widget is open, put any fields here for user information desired - if (widgetState === 'setup' ) { - return ( - <Stack styles={callingWidgetSetupContainerStyles(theme)} tokens={{ childrenGap: '1rem' }}> - <IconButton - styles={collapseButtonStyles} - iconProps={{ iconName: 'Dismiss' }} - onClick={() => setWidgetState('new')} /> - <Stack tokens={{ childrenGap: '1rem' }} styles={logoContainerStyles}> - <Stack style={{ transform: 'scale(1.8)' }}>{onRenderLogo && onRenderLogo()}</Stack> - </Stack> - <TextField - label={'Name'} - required={true} - placeholder={'Enter your name'} - onChange={(_, newValue) => { - setDisplayName(newValue); - }} /> - <Checkbox - styles={checkboxStyles(theme)} - label={'Use video - Checking this box will enable camera controls and screen sharing'} - onChange={(_, checked?: boolean | undefined) => { - setUseLocalVideo(true); - }} - ></Checkbox> - <Checkbox - required={true} - styles={checkboxStyles(theme)} - label={'By checking this box, you are consenting that we will collect data from the call for customer support reasons'} - onChange={(_, checked?: boolean | undefined) => { - setConsentToData(!!checked); - }} - ></Checkbox> - <PrimaryButton - styles={startCallButtonStyles(theme)} - onClick={() => { - if (displayName && consentToData && adapter && widgetAdapterArgs.teamsAppIdentifier) { - setWidgetState('inCall'); - adapter.startCall([widgetAdapterArgs.teamsAppIdentifier]); - } - }} - > - StartCall - </PrimaryButton> - </Stack> - ); - } -- if (widgetState === 'inCall' && adapter) { - return ( - <Stack styles={callingWidgetInCallContainerStyles(theme)}> - <CallComposite - adapter={adapter} - options={{ - callControls: { - cameraButton: useLocalVideo, - screenShareButton: useLocalVideo, - moreButton: false, - peopleButton: false, - displayType: 'compact' - }, - localVideoTile: !useLocalVideo ? 
false : { position: 'floating' } - }}/> - </Stack> - ) + }); }+ }, [adapter]); + /** widget template for when widget is open, put any fields here for user information desired */ + if (widgetState === "setup") { return (- <Stack - horizontalAlign="center" - verticalAlign="center" - styles={callingWidgetContainerStyles(theme)} + <Stack + styles={callingWidgetSetupContainerStyles(theme)} + tokens={{ childrenGap: "1rem" }} + > + <IconButton + styles={collapseButtonStyles} + iconProps={{ iconName: "Dismiss" }} onClick={() => {- setWidgetState('setup'); + setDisplayName(undefined); + setConsentToData(false); + setUseLocalVideo(false); + setWidgetState("new"); }}- > - <Stack - horizontalAlign="center" - verticalAlign="center" - style={{ height: '4rem', width: '4rem', borderRadius: '50%', background: theme.palette.themePrimary }} - > - <Icon iconName="callAdd" styles={callIconStyles(theme)} /> - </Stack> + /> + <Stack tokens={{ childrenGap: "1rem" }} styles={logoContainerStyles}> + <Stack style={{ transform: "scale(1.8)" }}> + {onRenderLogo && onRenderLogo()} + </Stack> </Stack>+ <TextField + label={"Name"} + required={true} + placeholder={"Enter your name"} + onChange={(_, newValue) => { + setDisplayName(newValue); + }} + /> + <Checkbox + styles={checkboxStyles(theme)} + label={ + "Use video - Checking this box will enable camera controls and screen sharing" + } + onChange={(_, checked?: boolean | undefined) => { + setUseLocalVideo(!!checked); + setUseLocalVideo(true); + }} + ></Checkbox> + <Checkbox + required={true} + styles={checkboxStyles(theme)} + disabled={displayName === undefined} + label={ + "By checking this box, you are consenting that we will collect data from the call for customer support reasons" + } + onChange={async (_, checked?: boolean | undefined) => { + setConsentToData(!!checked); + if (callAdapterArgs && callAdapterArgs.credential) { + setAdapter( + await createAzureCommunicationCallAdapter({ + displayName: displayName ?? "", + userId: callAdapterArgs.userId, + credential: callAdapterArgs.credential, + targetCallees: callAdapterArgs.targetCallees, + options: callAdapterArgs.options, + }) + ); + } + }} + ></Checkbox> + <PrimaryButton + styles={startCallButtonStyles(theme)} + onClick={() => { + if (displayName && consentToData && adapter) { + setWidgetState("inCall"); + adapter?.startCall(callAdapterArgs.targetCallees, { + audioOptions: { muted: false }, + }); + } + }} + > + {!consentToData && `Enter your name`} + {consentToData && !adapter && ( + <Spinner ariaLive="assertive" labelPosition="top" /> + )} + {consentToData && adapter && `StartCall`} + </PrimaryButton> + </Stack> );+ } ++ if (widgetState === "inCall" && adapter) { + return ( + <Stack styles={callingWidgetInCallContainerStyles(theme)}> + <CallComposite + adapter={adapter} + options={{ + callControls: { + cameraButton: useLocalVideo, + screenShareButton: useLocalVideo, + moreButton: false, + peopleButton: false, + displayType: "compact", + }, + localVideoTile: !useLocalVideo ? 
false : { position: "floating" }, + }} + /> + </Stack> + ); + } ++ return ( + <Stack + horizontalAlign="center" + verticalAlign="center" + styles={callingWidgetContainerStyles(theme)} + onClick={() => { + setWidgetState("setup"); + }} + > + <Stack + horizontalAlign="center" + verticalAlign="center" + style={{ + height: "4rem", + width: "4rem", + borderRadius: "50%", + background: theme.palette.themePrimary, + }} + > + <Icon iconName="callAdd" styles={callIconStyles(theme)} /> + </Stack> + </Stack> + ); }; ``` #### Style the widget -We need to write some styles to make sure the widget looks appropriate and can hold our call composite. These styles should already be used in the widget if copying the snippet above. +We need to write some styles to make sure the widget looks appropriate and can hold our call composite. These styles should already be used in the widget if copying the snippet we added to the file `CallingWidgetComponent.tsx`. -lets make a new folder called `src/styles` in this folder create a file called `CallingWidgetComponent.styles.ts`. The file should look like the following snippet: +Lets make a new folder called `src/styles` in this folder, create a file called `CallingWidgetComponent.styles.ts`. The file should look like the following snippet: ```ts-import { IButtonStyles, ICheckboxStyles, IIconStyles, IStackStyles, Theme } from '@fluentui/react'; +import { + IButtonStyles, + ICheckboxStyles, + IIconStyles, + IStackStyles, + Theme, +} from "@fluentui/react"; export const checkboxStyles = (theme: Theme): ICheckboxStyles => { return { export const callingWidgetContainerStyles = (theme: Theme): IStackStyles => { }; }; -export const callingWidgetSetupContainerStyles = (theme: Theme): IStackStyles => { +export const callingWidgetSetupContainerStyles = ( + theme: Theme +): IStackStyles => { return { root: { width: "18rem", export const callingWidgetSetupContainerStyles = (theme: Theme): IStackStyles => position: "absolute", overflow: "hidden", cursor: "pointer",- background: theme.palette.white + background: theme.palette.white, }, }; }; export const collapseButtonStyles: IButtonStyles = { }, }; -export const callingWidgetInCallContainerStyles = (theme: Theme): IStackStyles => { +export const callingWidgetInCallContainerStyles = ( + theme: Theme +): IStackStyles => { return { root: {- width: '35rem', - height: '25rem', - padding: '0.5rem', + width: "35rem", + height: "25rem", + padding: "0.5rem", boxShadow: theme.effects.elevation16, borderRadius: theme.effects.roundedCorner6, bottom: 0,- right: '1rem', - position: 'absolute', - overflow: 'hidden', - cursor: 'pointer', - background: theme.semanticColors.bodyBackground - } - } -} + right: "1rem", + position: "absolute", + overflow: "hidden", + cursor: "pointer", + background: theme.semanticColors.bodyBackground, + }, + }; +}; ``` ### Run the app Then when you action the widget button, you should see a little menu: ![Screenshot of calling widget sample app home page widget open.](../media/calling-widget/sample-app-splash-widget-open.png) -after you fill out your name click start call and the call should begin. The widget should look like so after starting a call: +After you fill out your name click start call and the call should begin. 
The widget should look like so after starting a call: ![Screenshot of click to call sample app home page with calling experience embedded in widget.](../media/calling-widget/calling-widget-embedded-start.png) ## Next steps -If you haven't had the chance, check out our documentation on Teams auto attendants and Teams call queues. +For more information about Teams voice applications, check out our documentation on Teams auto attendants and Teams call queues. > [!div class="nextstepaction"] |
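For the Run the app step in the tutorial above, the sample relies on the standard `create-react-app` scripts. A minimal sketch of installing the updated dependencies and starting the local dev server, assuming the `ui-library-calling-widget-app` folder created during setup:

```bash
# Install the dependencies declared in package.json, then start the dev server.
cd ui-library-calling-widget-app
npm install
npm run start
```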
confidential-computing | Virtual Machine Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions.md | Confidential VMs run on specialized hardware, so you can only [resize confidenti It's not possible to resize a non-confidential VM to a confidential VM. -### Disk encryption +### Guest Operating System Support OS images for confidential VMs have to meet certain security and compatibility requirements. Qualified images support the secure mounting, attestation, optional [confidential OS disk encryption](confidential-vm-overview.md#confidential-os-disk-encryption), and isolation from underlying cloud infrastructure. These images include: - Ubuntu 20.04 LTS (AMD SEV-SNP supported only) - Ubuntu 22.04 LTS+- Red Hat Enterprise Linux 9.3 (AMD SEV-SNP supported only) - Windows Server 2019 Datacenter - x64 Gen 2 (AMD SEV-SNP supported only) - Windows Server 2019 Datacenter Server Core - x64 Gen 2 (AMD SEV-SNP supported only) - Windows Server 2022 Datacenter - x64 Gen 2 OS images for confidential VMs have to meet certain security and compatibility r - Windows 11 Enterprise, version 22H2 -x64 Gen 2 - Windows 11 Enterprise multi-session, version 22H2 -x64 Gen 2 +As we work to onboard more OS images with confidential OS disk encryption, there are various images available in early preview that can be tested. You can sign up below: ++- [Red Hat Enterprise Linux 9.3 (Support for Intel TDX)](https://aka.ms/tdx-rhel-93-preview) +- [SUSE Enterprise Linux 15 SP5 (Support for Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview) +- [SUSE Enterprise Linux 15 SAP SP5 (Support for Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview) + For more information about supported and unsupported VM scenarios, see [support for generation 2 VMs on Azure](../virtual-machines/generation-2.md). ### High availability and disaster recovery Azure Resource Manager is the deployment and management service for Azure. You c Make sure to specify the following properties for your VM in the parameters section (`parameters`): - VM size (`vmSize`). Choose from the different [confidential VM families and sizes](#sizes).-- OS image name (`osImageName`). Choose from the [qualified OS images](#disk-encryption).+- OS image name (`osImageName`). Choose from the qualified OS images. - Disk encryption type (`securityType`). Choose from VMGS-only encryption (`VMGuestStateOnly`) or full OS disk pre-encryption (`DiskWithVMGuestState`), which might result in longer provisioning times. For Intel TDX instances only we also support another security type (`NonPersistedTPM`) which has no VMGS or OS disk encryption. ## Next steps |
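The template properties called out above (`vmSize`, `osImageName`, `securityType`) can also be expressed with the Azure CLI when experimenting. A minimal sketch, not from the article; the size and image URN are assumptions, so substitute any supported confidential VM size and qualified OS image.

```bash
# Minimal sketch: create a confidential VM with full OS disk pre-encryption.
# The size and image URN are assumptions; replace them with a qualified size and image.
az vm create \
  --resource-group myResourceGroup \
  --name myConfidentialVm \
  --size Standard_DC4as_v5 \
  --image "Canonical:0001-com-ubuntu-confidential-vm-jammy:22_04-lts-cvm:latest" \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type DiskWithVMGuestState \
  --enable-secure-boot true \
  --enable-vtpm true \
  --admin-username azureuser \
  --generate-ssh-keys
```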
connectors | Built In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md | ms.suite: integration Previously updated : 02/12/2024 Last updated : 02/15/2024 # Built-in connectors in Azure Logic Apps Built-in connectors provide ways for you to control your workflow's schedule and For a smaller number of services, systems, and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multitenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app workflow type and not the other. -For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob Storage, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server. A Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard workflow doesn't have these built-in connectors. +For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob Storage, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, FTP, IBM DB2, IBM MQ, SFTP, and SQL Server. A Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, and Azure App Services, while a Standard workflow doesn't have these built-in connectors. Also, in Standard workflows, some [built-in connectors with specific attributes are informally known as *service providers*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Microsoft Entra ID, or a managed identity. All built-in connectors run in the same process as the Azure Logic Apps runtime. For more information, review [Single-tenant versus multitenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md). 
The following table lists the current and expanding galleries of built-in connec | Consumption | Standard | |-|-|-| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | AS2 (v2) <br>Azure Automation* <br>Azure Blob Storage* <br>Azure Cosmos DB* <br>Azure File Storage* <br>Azure Functions <br>Azure Queue Storage* <br>Azure Table Storage* <br>Control <br>Data Operations <br>Date Time <br>DB2* <br>Event Grid Publisher* <br>Event Hubs* <br>File System* <br>Flat File <br>FTP* <br>HTTP <br>IBM Host File* <br>Inline Code <br>JDBC* <br>Key Vault* <br>Liquid operations <br>MQ* <br>Request <br>SAP* <br>Schedule <br>Service Bus* <br>SFTP* <br>SMTP* <br>SQL Server* <br>Variables <br>Workflow operations <br>XML operations | +| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | AS2 (v2) <br>Azure AI Search* <br>Azure Automation* <br>Azure Blob Storage* <br>Azure Cosmos DB* <br>Azure Event Grid Publisher* <br>Azure Event Hubs* <br>Azure File Storage* <br>Azure Functions <br>Azure Key Vault* <br>Azure OpenAI* <br>Azure Queue Storage* <br>Azure Service Bus* <br>Azure Table Storage* <br>Batch Operations <br>Control <br>Data Mapper Operations <br>Data Operations <br>Date Time <br>EDIFACT <br>File System* <br>Flat File <br>FTP* <br>HTTP <br>IBM 3270* <br>IBM CICS* <br>IBM DB2* <br>IBM Host File* <br>IBM IMS* <br>IBM MQ* <br>Inline Code <br>Integration Account <br>JDBC* <br>Liquid Operations <br>Request <br>RosettaNet <br>SAP* <br>Schedule <br>SFTP* <br>SMTP* <br>SQL Server* <br>SWIFT <br>Variables <br>Workflow Operations <br>X12 <br>XML Operations | <a name="service-provider-interface-implementation"></a> You can use the following built-in connectors to perform general tasks, for exam [![Batch icon][batch-icon]][batch-doc] \ \- [**Batch**][batch-doc]<br>(*Consumption workflow only*) + [**Batch**][batch-doc] \ \ [**Batch messages**][batch-doc]: Trigger a workflow that processes messages in batches. You can use the following built-in connectors to perform general tasks, for exam You can use the following built-in connectors to access specific services and systems. In Standard workflows, some of these built-in connectors are also informally known as *service providers*, which can differ from their managed connector counterparts in some ways. :::row:::+ :::column::: + [![Azure AI Search icon][azure-ai-search-icon]][azure-ai-search-doc] + \ + \ + [**Azure API Search**][azure-ai-search-doc]<br>(*Standard workflow only*) + \ + \ + Connect to AI Search so that you can perform document indexing and search operations in your workflow. + :::column-end::: :::column::: [![Azure API Management icon][azure-api-management-icon]][azure-api-management-doc] \ You can use the following built-in connectors to access specific services and sy \ Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents. 
:::column-end:::+ :::column::: + [![Azure Event Grid Publisher icon][azure-event-grid-publisher-icon]][azure-event-grid-publisher-doc] + \ + \ + [**Azure Event Grid Publisher**][azure-event-grid-publisher-doc]<br>(*Standard workflow only*) + \ + \ + Connect to Azure Event Grid for event-based programming using pub-sub semantics. + :::column-end::: :::column::: [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc] \ You can use the following built-in connectors to access specific services and sy [![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc] \ \- [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption workflow*) <br><br>-or-<br><br>**Workflow operations**<br>(*Standard workflow*) + [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption workflow*) <br><br>-or-<br><br>**Workflow Operations**<br>(*Standard workflow*) \ \ Call other workflows that start with the Request trigger named **When a HTTP request is received**. :::column-end:::+ :::column::: + [![Azure OpenAI icon][azure-openai-icon]][azure-openai-doc] + \ + \ + [**Azure OpenAI**][azure-openai-doc]<br>(*Standard workflow only*) + \ + \ + Connect to Azure Open AI to perform operations on large language models. + :::column-end::: :::column::: [![Azure Service Bus icon][azure-service-bus-icon]][azure-service-bus-doc] \ You can use the following built-in connectors to access specific services and sy \ Connect to your Azure Storage account so that you can create, update, and manage queues. :::column-end:::+ :::column::: + [![IBM 3270 icon][ibm-3270-icon]][ibm-3270-doc] + \ + \ + [**IBM 3270**][ibm-3270-doc]<br>(*Standard workflow only*) + \ + \ + Call 3270 screen-driven apps on IBM mainframes from your workflow. + :::column-end::: + :::column::: + [![IBM CICS icon][ibm-cics-icon]][ibm-cics-doc] + \ + \ + [**IBM CICS**][ibm-cics-doc]<br>(*Standard workflow only*) + \ + \ + Call CICS programs on IBM mainframes from your workflow. + :::column-end::: :::column::: [![IBM DB2 icon][ibm-db2-icon]][ibm-db2-doc] \ You can use the following built-in connectors to access specific services and sy \ Connect to IBM Host File and generate or parse contents. :::column-end:::+ :::column::: + [![IBM IMS icon][ibm-ims-icon]][ibm-ims-doc] + \ + \ + [**IBM IMS**][ibm-ims-doc]<br>(*Standard workflow only*) + \ + \ + Call IMS programs on IBM mainframes from your workflow. + :::column-end::: :::column::: [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc] \ You can use the following built-in connectors to access specific services and sy \ Connect to IBM MQ on-premises or in Azure to send and receive messages. :::column-end::: :::column::: [![JDBC icon][jdbc-icon]][jdbc-doc] \ You can use the following built-in connectors to access specific services and sy \ Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries. :::column-end:::- :::column::: - :::column-end::: :::row-end::: ## Run code from workflows Azure Logic Apps provides the following built-in actions for running your own co [**Inline Code**][inline-code-doc] \ \- [**Execute JavaScript Code**][inline-code-doc]: Add and run your own inline JavaScript *code snippets* within your workflow. + [Add and run inline JavaScript code snippets](../logic-apps/logic-apps-add-run-inline-code.md) from your workflow. 
:::column-end::: :::column:::+ [![Local Function Operations icon][local-function-icon]][local-function-doc] + \ + \ + [**Local Function Operations**][local-function-doc]<br>(Standard workflow only) + \ + \ + [Create and run .NET Framework code](../logic-apps/create-run-custom-code-functions.md) from your workflow. :::column-end::: :::column::: :::column-end::: Azure Logic Apps provides the following built-in actions for structuring and con [![Scope action icon][scope-icon]][scope-doc] \ \- [**Name**][scope-doc] + [**Scope**][scope-doc] \ \ Group actions into *scopes*, which get their own status after the actions in the scope finish running. Azure Logic Apps provides the following built-in actions for working with data o :::column-end::: :::row-end::: -<a name="integration-account-built-in"></a> +<a name="b2b-built-in-operations"></a> -## Integration account built-in connectors +## Business-to-business (B2B) built-in operations -Integration account operations support business-to-business (B2B) communication scenarios in Azure Logic Apps. After you create an integration account and define your B2B artifacts, such as trading partners, agreements, and others, you can use integration account built-in actions to encode and decode messages, transform content, and more. +Azure Logic Apps supports business-to-business (B2B) communication scenarios through various B2B built-in operations. Based on whether you have a Consumption or Standard workflow and the B2B operations that you want to use, [you might have to create and link an integration account to your logic app resource](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md). You then use this integration account to define your B2B artifacts, such as trading partners, agreements, maps, schemas, certificates, and so on. * Consumption workflows - Before you use any integration account operations in a workflow, [link your logic app resource to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md). + Before you can use any B2B operations in a workflow, [you must create and link an integration account to your logic app resource](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md). After you create your integration account, you must then define your B2B artifacts, such as trading partners, agreements, maps, schemas, certificates, and so on. You can then use the B2B operations to encode and decode messages, transform content, and more. * Standard workflows - While most integration account operations don't require that you link your logic app resource to your integration account, linking lets you share artifacts across multiple Standard workflows and their child workflows. Based on the integration account operation that you want to use, complete one of the following steps before you use the operation: + Some B2B operations require that you [create and link an integration account to your logic app resource](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md). Linking lets you share artifacts across multiple Standard workflows and their child workflows. 
Based on the B2B operation that you want to use, complete one of the following steps before you use the operation: * For operations that require maps or schemas, you can either: For more information, review the following documentation: :::row::: :::column:::- [![AS2 Decode v2 icon][as2-v2-icon]][as2-doc] + [![AS2 v2 icon][as2-v2-icon]][as2-doc] + \ + \ + [**AS2 (v2)**][as2-doc]<br>(*Standard workflow only*) + \ + \ + Encode and decode messages that use the AS2 protocol. + :::column-end::: + :::column::: + [![EDIFACT icon][edifact-icon]][edifact-doc] \ \- [**AS2 Decode (v2)**][as2-doc]<br>(*Standard workflow only*) + [**EDIFACT**][edifact-doc] \ \- Decode messages received using the AS2 protocol. + Encode and decode messages that use the EDIFACT protocol. :::column-end::: :::column:::- [![AS2 Encode (v2) icon][as2-v2-icon]][as2-doc] + [![Flat File icon][flat-file-icon]][flat-file-doc] \ \- [**AS2 Encode (v2)**][as2-doc]<br>(*Standard workflow only*) + [**Flat File**][flat-file-doc] \ \- Encode messages sent using the AS2 protocol. + Encode and decode XML messages between trading partners. :::column-end::: :::column:::- [![Flat file decoding icon][flat-file-decode-icon]][flat-file-decode-doc] + [![Integration account icon][integration-account-icon]][integration-account-doc] \ \- [**Flat file decoding**][flat-file-decode-doc] + [**Integration Account Artifact Lookup**][integration-account-doc] \ \- Encode XML before sending the content to a trading partner. + Get custom metadata for artifacts, such as trading partners, agreements, schemas, and so on, in your integration account. :::column-end::: :::column:::- [![Flat file encoding icon][flat-file-encode-icon]][flat-file-encode-doc] + [![Liquid Operations icon][liquid-icon]][liquid-transform-doc] \ \- [**Flat file encoding**][flat-file-encode-doc] + [**Liquid Operations**][liquid-transform-doc] \ \- Decode XML after receiving the content from a trading partner. + Convert the following formats by using Liquid templates: <br><br>- JSON to JSON <br>- JSON to TEXT <br>- XML to JSON <br>- XML to TEXT :::column-end::: :::row-end::: :::row::: :::column:::- [![Integration account icon][integration-account-icon]][integration-account-doc] + [![RosettaNet icon][rosettanet-icon]][rosettanet-doc] \ \- [**Integration Account Artifact Lookup**][integration-account-doc]<br>(*Consumption workflow only*) + [**RosettaNet**][rosettanet-doc] \ \- Get custom metadata for artifacts, such as trading partners, agreements, schemas, and so on, in your integration account. + Encode and decode messages that use the RosettaNet protocol. :::column-end::: :::column:::- [![Liquid operations icon][liquid-icon]][json-liquid-transform-doc] + [![SWIFT icon][swift-icon]][swift-doc] \ \- [**Liquid operations**][json-liquid-transform-doc] + [**SWIFT**][swift-doc]<br>(*Standard workflow only*) \ \- Convert the following formats by using Liquid templates: <br><br>- JSON to JSON <br>- JSON to TEXT <br>- XML to JSON <br>- XML to TEXT + Encode and decode Society for Worldwide Interbank Financial Telecommuncation (SIWFT) transactions in flat-file XML message format. :::column-end::: :::column::: [![Transform XML icon][xml-transform-icon]][xml-transform-doc] For more information, review the following documentation: \ Convert the source XML format to another XML format. :::column-end:::+ :::column::: + [![X12 icon][x12-icon]][x12-doc] + \ + \ + [**X12**][x12-doc] + \ + \ + Encode and decode messages that use the X12 protocol. 
+ :::column-end::: :::column::: [![XML validation icon][xml-validate-icon]][xml-validate-doc] \ \- [**XML validation**][xml-validate-doc] + [**XML Validation**][xml-validate-doc] \ \ Validate XML documents against the specified schema. For more information, review the following documentation: > [Create custom APIs that you can call from Azure Logic Apps](../logic-apps/logic-apps-create-api-app.md) <!-- Built-in icons -->+[azure-ai-search-icon]: ./media/apis-list/azure-ai-search.png [azure-api-management-icon]: ./media/apis-list/azure-api-management.png [azure-app-services-icon]: ./media/apis-list/azure-app-services.png [azure-automation-icon]: ./media/apis-list/azure-automation.png [azure-blob-storage-icon]: ./media/apis-list/azure-blob-storage.png [azure-cosmos-db-icon]: ./media/apis-list/azure-cosmos-db.png+[azure-event-grid-publisher-icon]: ./media/apis-list/azure-event-grid-publisher.png [azure-event-hubs-icon]: ./media/apis-list/azure-event-hubs.png [azure-file-storage-icon]: ./media/apis-list/azure-file-storage.png [azure-functions-icon]: ./media/apis-list/azure-functions.png [azure-key-vault-icon]: ./media/apis-list/azure-key-vault.png [azure-logic-apps-icon]: ./media/apis-list/azure-logic-apps.png-[azure-queue-storage-icon]: ./media/apis-list/azure-queues.png +[azure-openai-icon]: ./media/apis-list/azure-openai.png +[azure-queue-storage-icon]: ./media/apis-list/azure-queue-storage.png [azure-service-bus-icon]: ./media/apis-list/azure-service-bus.png [azure-table-storage-icon]: ./media/apis-list/azure-table-storage.png [batch-icon]: ./media/apis-list/batch.png For more information, review the following documentation: [http-response-icon]: ./media/apis-list/response.png [http-swagger-icon]: ./media/apis-list/http-swagger.png [http-webhook-icon]: ./media/apis-list/http-webhook.png+[ibm-3270-icon]: ./media/apis-list/ibm-3270.png +[ibm-cics-icon]: ./media/apis-list/ibm-cics.png [ibm-db2-icon]: ./media/apis-list/ibm-db2.png [ibm-host-file-icon]: ./media/apis-list/ibm-host-file.png+[ibm-ims-icon]: ./media/apis-list/ibm-ims.png [ibm-mq-icon]: ./media/apis-list/ibm-mq.png [inline-code-icon]: ./media/apis-list/inline-code.png [jdbc-icon]: ./media/apis-list/jdbc.png+[local-function-icon]: ./media/apis-list/local-function.png [sap-icon]: ./media/apis-list/sap.png [schedule-icon]: ./media/apis-list/recurrence.png [scope-icon]: ./media/apis-list/scope.png [sftp-ssh-icon]: ./media/apis-list/sftp.png [smtp-icon]: ./media/apis-list/smtp.png [sql-server-icon]: ./media/apis-list/sql.png+[swift-icon]: ./media/apis-list/swift.png [switch-icon]: ./media/apis-list/switch.png [terminate-icon]: ./media/apis-list/terminate.png [until-icon]: ./media/apis-list/until.png [variables-icon]: ./media/apis-list/variables.png -<!--Built-in integration account connector icons --> +<!--B2B built-in operation icons --> [as2-v2-icon]: ./media/apis-list/as2-v2.png-[flat-file-encode-icon]: ./media/apis-list/flat-file-encoding.png -[flat-file-decode-icon]: ./media/apis-list/flat-file-decoding.png +[edifact-icon]: ./media/apis-list/edifact.png +[flat-file-icon]: ./media/apis-list/flat-file-decoding.png [integration-account-icon]: ./media/apis-list/integration-account.png [liquid-icon]: ./media/apis-list/liquid-transform.png+[rosettanet-icon]: ./media/apis-list/rosettanet.png +[x12-icon]: ./media/apis-list/x12.png [xml-transform-icon]: ./media/apis-list/xml-transform.png [xml-validate-icon]: ./media/apis-list/xml-validation.png <!--Built-in doc links-->+[azure-ai-search-doc]: 
https://techcommunity.microsoft.com/t5/azure-integration-services-blog/public-preview-of-azure-openai-and-ai-search-in-app-connectors/ba-p/4049584 "Connect to AI Search so that you can perform document indexing and search operations in your workflow" [azure-api-management-doc]: ../api-management/get-started-create-service-instance.md "Create an Azure API Management service instance for managing and publishing your APIs" [azure-app-services-doc]: ../logic-apps/logic-apps-custom-api-host-deploy-call.md "Integrate logic app workflows with App Service API Apps" [azure-automation-doc]: /azure/logic-apps/connectors/built-in/reference/azureautomation/ "Connect to your Azure Automation accounts so you can create and manage Azure Automation jobs" [azure-blob-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azureblob/ "Manage files in your blob container with Azure Blob storage" [azure-cosmos-db-doc]: /azure/logic-apps/connectors/built-in/reference/azurecosmosdb/ "Connect to Azure Cosmos DB so you can access and manage Azure Cosmos DB documents"+[azure-event-grid-publisher-doc]: /azure/logic-apps/connectors/built-in/reference/eventgridpublisher/ "Connect to Azure Event Grid for event-based programming using pub-sub semantics" [azure-event-hubs-doc]: /azure/logic-apps/connectors/built-in/reference/eventhub/ "Connect to Azure Event Hubs so that you can receive and send events between logic app workflows and Event Hubs" [azure-file-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azurefile/ "Connect to Azure File Storage so you can create and manage files in your Azure storage account" [azure-functions-doc]: ../logic-apps/logic-apps-azure-functions.md "Integrate logic app workflows with Azure Functions" [azure-key-vault-doc]: /azure/logic-apps/connectors/built-in/reference/keyvault/ "Connect to Azure Key Vault to securely store, access, and manage secrets"+[azure-openai-doc]: https://techcommunity.microsoft.com/t5/azure-integration-services-blog/public-preview-of-azure-openai-and-ai-search-in-app-connectors/ba-p/4049584 "Connect to Azure Open AI to perform operations on large language models" [azure-queue-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azurequeues/ "Connect to Azure Storage so you can create and manage queue entries and queues" [azure-service-bus-doc]: /azure/logic-apps/connectors/built-in/reference/servicebus/ "Manage messages from Service Bus queues, topics, and topic subscriptions" [azure-table-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azuretables/ "Connect to Azure Storage so you can create, update, and query tables and more" [batch-doc]: ../logic-apps/logic-apps-batch-process-send-receive-messages.md "Process messages in groups, or as batches" [condition-doc]: ../logic-apps/logic-apps-control-flow-conditional-statement.md "Evaluate a condition and run different actions based on whether the condition is true or false" [data-operations-doc]: ../logic-apps/logic-apps-perform-data-operations.md "Perform data operations such as filtering arrays or creating CSV and HTML tables"-[event-grid-publisher-doc]: /azure/logic-apps/connectors/built-in/reference/eventgridpublisher/ "Connect to Azure Event Grid for event-based programming using pub-sub semantics" [file-system-doc]: /azure/logic-apps/connectors/built-in/reference/filesystem/ "Connect to a file system on your network machine to create and manage files" [for-each-doc]: ../logic-apps/logic-apps-control-flow-loops.md#foreach-loop "Perform the same actions on every item in an 
array" [ftp-doc]: /azure/logic-apps/connectors/built-in/reference/ftp/ "Connect to an FTP or FTPS server for FTP tasks, like uploading, getting, deleting files, and more" For more information, review the following documentation: [http-response-doc]: ./connectors-native-reqres.md "Respond to HTTP requests from your logic app workflows" [http-swagger-doc]: ./connectors-native-http-swagger.md "Call REST endpoints from your logic app workflows" [http-webhook-doc]: ./connectors-native-webhook.md "Wait for specific events from HTTP or HTTPS endpoints"+[ibm-3270-doc]: /azure/connectors/integrate-3270-apps-ibm-mainframe?tabs=standard "Integrate IBM 3270 screen-driven apps with Azure" +[ibm-cics-doc]: /azure/connectors/integrate-cics-apps-ibm-mainframe "Integrate CICS programs on IBM mainframes with Azure" [ibm-db2-doc]: /azure/logic-apps/connectors/built-in/reference/db2/ "Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more" [ibm-host-file-doc]: /azure/logic-apps/connectors/built-in/reference/hostfile/ "Connect to your IBM host to work with offline files"+[ibm-ims-doc]: /azure/connectors/integrate-ims-apps-ibm-mainframe "Integrate IMS programs on IBM mainframes with Azure" [ibm-mq-doc]: /azure/logic-apps/connectors/built-in/reference/mq/ "Connect to IBM MQ on-premises or in Azure to send and receive messages" [inline-code-doc]: ../logic-apps/logic-apps-add-run-inline-code.md "Add and run JavaScript code snippets from your logic app workflows" [jdbc-doc]: /azure/logic-apps/connectors/built-in/reference/jdbc/ "Connect to a relational database using JDBC drivers"+[local-function-doc]: ../logic-apps/create-run-custom-code-functions.md "Create and run .NET Framework code from your workflow" [nested-logic-app-doc]: ../logic-apps/logic-apps-http-endpoint.md "Integrate logic app workflows with nested workflows" [query-doc]: ../logic-apps/logic-apps-perform-data-operations.md#filter-array-action "Select and filter arrays with the Query action" [sap-doc]: /azure/logic-apps/connectors/built-in/reference/sap/ "Connect to SAP so you can send or receive messages and invoke actions" For more information, review the following documentation: [smtp-doc]: /azure/logic-apps/connectors/built-in/reference/smtp/ "Connect to your SMTP server so you can send email" [sql-server-doc]: /azure/logic-apps/connectors/built-in/reference/sql/ "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in an SQL database table" [switch-doc]: ../logic-apps/logic-apps-control-flow-switch-statement.md "Organize actions into cases, which are assigned unique values. Run only the case whose value matches the result from an expression, object, or token. 
If no matches exist, run the default case"+[swift-doc]: https://techcommunity.microsoft.com/t5/azure-integration-services-blog/announcement-public-preview-of-swift-message-processing-using/ba-p/3670014 "Encode and decode SWIFT transactions in flat-file XML format" [terminate-doc]: ../logic-apps/logic-apps-workflow-actions-triggers.md#terminate-action "Stop or cancel an actively running workflow for your logic app workflow" [until-doc]: ../logic-apps/logic-apps-control-flow-loops.md#until-loop "Repeat actions until the specified condition is true or some state has changed" [variables-doc]: ../logic-apps/logic-apps-create-variables-store-values.md "Perform operations with variables, such as initialize, set, increment, decrement, and append to string or array variable" -<!--Built-in integration account doc links--> +<!--B2B built-in operation doc links--> [as2-doc]: ../logic-apps/logic-apps-enterprise-integration-as2.md "Encode and decode messages that use the AS2 protocol"-[flat-file-decode-doc]:../logic-apps/logic-apps-enterprise-integration-flatfile.md "Decode XML content with a flat file schema" -[flat-file-encode-doc]:../logic-apps/logic-apps-enterprise-integration-flatfile.md "Encode XML content with a flat file schema" +[edifact-doc]: ../logic-apps/logic-apps-enterprise-integration-edifact.md "Encode and decode messages that use the EDIFACT protocol" +[flat-file-doc]:../logic-apps/logic-apps-enterprise-integration-flatfile.md "Encode and decode XML content with a flat file schema" [integration-account-doc]: ../logic-apps/logic-apps-enterprise-integration-metadata.md "Manage metadata for integration account artifacts"-[json-liquid-transform-doc]: ../logic-apps/logic-apps-enterprise-integration-liquid-transform.md "Transform JSON or XML content with Liquid templates" +[liquid-transform-doc]: ../logic-apps/logic-apps-enterprise-integration-liquid-transform.md "Transform JSON or XML content with Liquid templates" +[rosettanet-doc]: ../logic-apps/logic-apps-enterprise-integration-rosettanet.md "Exchange RosettaNet messages in your workflow" +[x12-doc]: ../logic-apps/logic-apps-enterprise-integration-x12.md "Encode and decode messages that use the X12 protocol" [xml-transform-doc]: ../logic-apps/logic-apps-enterprise-integration-transform.md "Transform XML content" [xml-validate-doc]: ../logic-apps/logic-apps-enterprise-integration-xml-validation.md "Validate XML content" |
connectors | Connect Common Data Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connect-common-data-service.md | -tags: connectors # Connect to Microsoft Dataverse (previously Common Data Service) from workflows in Azure Logic Apps |
connectors | Connectors Azure Application Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-application-insights.md | ms.suite: integration Last updated 01/10/2024-tags: connectors # Customer intent: As a developer, I want to get telemetry from an Application Insights resource to use with my workflow in Azure Logic Apps. |
connectors | Connectors Azure Monitor Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-monitor-logs.md | ms.suite: integration Last updated 02/08/2024-tags: connectors # Customer intent: As a developer, I want to get log data from my Log Analytics workspace or telemetry from my Application Insights resource to use with my workflow in Azure Logic Apps. |
connectors | Connectors Create Api Azure Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azure-event-hubs.md | ms.suite: integration Last updated 01/04/2024-tags: connectors # Connect to an event hub from workflows in Azure Logic Apps |
connectors | Connectors Create Api Azureblobstorage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md | ms.suite: integration Last updated 01/18/2024-tags: connectors # Connect to Azure Blob Storage from workflows in Azure Logic Apps |
connectors | Connectors Create Api Container Instances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-container-instances.md | -tags: connectors Last updated 01/04/2024 |
connectors | Connectors Create Api Cosmos Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-cosmos-db.md | -tags: connectors # Process and create Azure Cosmos DB documents using Azure Logic Apps |
connectors | Connectors Create Api Crmonline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-crmonline.md | ms.suite: integration Last updated 01/04/2024-tags: connectors # Connect to Dynamics 365 from workflows in Azure Logic Apps (Deprecated) |
connectors | Connectors Create Api Db2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-db2.md | ms.suite: integration Last updated 01/04/2024-tags: connectors # Access and manage IBM DB2 resources by using Azure Logic Apps |
connectors | Connectors Create Api Ftp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-ftp.md | ms.suite: integration Last updated 01/04/2024-tags: connectors # Connect to an FTP server from workflows in Azure Logic Apps |
connectors | Connectors Create Api Informix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-informix.md | -tags: connectors # Manage IBM Informix database resources by using Azure Logic Apps |
connectors | Connectors Create Api Mq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md | -tags: connectors # Connect to an IBM MQ server from a workflow in Azure Logic Apps |
connectors | Connectors Create Api Office365 Outlook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-office365-outlook.md | ms.suite: integration Last updated 01/10/2024-tags: connectors # Connect to Office 365 Outlook from Azure Logic Apps |
connectors | Connectors Create Api Oracledatabase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-oracledatabase.md | ms.suite: integration Last updated 01/04/2024-tags: connectors # Connect to Oracle Database from Azure Logic Apps |
connectors | Connectors Create Api Servicebus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md | -tags: connectors # Connect to Azure Service Bus from workflows in Azure Logic Apps |
connectors | Connectors Create Api Smtp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-smtp.md | ms.suite: integration Last updated 01/04/2024-tags: connectors # Connect to your SMTP account from Azure Logic Apps |
connectors | Connectors Create Api Sqlazure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md | ms.suite: integration Last updated 01/10/2024-tags: connectors ## As a developer, I want to access my SQL database from my logic app workflow. |
connectors | Connectors Integrate Security Operations Create Api Microsoft Graph Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-integrate-security-operations-create-api-microsoft-graph-security.md | -tags: connectors # Improve threat protection by integrating security operations with Microsoft Graph Security & Azure Logic Apps |
connectors | Connectors Native Delay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-delay.md | ms.suite: integration Last updated 01/04/2024-tags: connectors # Delay running the next action in Azure Logic Apps |
connectors | Connectors Native Http Swagger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http-swagger.md | ms.suite: integration Last updated 12/13/2023-tags: connectors # Connect or call REST API endpoints from workflows in Azure Logic Apps |
connectors | Connectors Native Http | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http.md | ms.suite: integration Last updated 01/22/2024-tags: connectors # Call external HTTP or HTTPS endpoints from workflows in Azure Logic Apps |
connectors | Connectors Native Reqres | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md | ms.suite: integration ms.reviewers: estfan, azla Last updated 01/10/2024-tags: connectors # Receive and respond to inbound HTTPS calls to workflows in Azure Logic Apps |
connectors | Connectors Native Webhook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-webhook.md | ms.suite: integration Last updated 02/09/2024-tags: connectors # Subscribe and wait for events to run workflows using HTTP webhooks in Azure Logic Apps |
connectors | Integrate 3270 Apps Ibm Mainframe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/integrate-3270-apps-ibm-mainframe.md | -tags: connectors # Integrate 3270 screen-driven apps on IBM mainframes with Azure using Azure Logic Apps and IBM 3270 connector |
connectors | Managed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md | Last updated 01/04/2024 Managed connectors provide ways for you to access other services and systems where built-in connectors aren't available. You can use these triggers and actions to create workflows that integrate data, apps, cloud-based services, and on-premises systems. Different from built-in connectors, managed connectors are usually tied to a specific service or system such as Office 365, SharePoint, Azure Key Vault, Salesforce, Azure Automation, and so on. Managed by Microsoft and hosted in Azure, managed connectors usually require that you first create a connection from your workflow and authenticate your identity. -For a smaller number of services, systems and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multi-tenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app workflow type, and not the other. +For a smaller number of services, systems and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multitenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app workflow type, and not the other. -For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server, while a Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard workflow doesn't have these built-in connectors. For more information, review [Built-in connectors in Azure Logic Apps](built-in.md) and [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md). +For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server, while a Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard workflow doesn't have these built-in connectors. For more information, review [Built-in connectors in Azure Logic Apps](built-in.md) and [Single-tenant versus multitenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md). This article provides a general overview about managed connectors and the way they're organized in the Consumption workflow designer versus the Standard workflow designer with examples. 
For technical reference information about each managed connector in Azure Logic Apps, review [Connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors). For more information, review the following documentation: ## ISE connectors -In an integration service environment (ISE), these managed connectors also have [ISE versions](introduction.md#ise-and-connectors), which have different capabilities than their multi-tenant versions: +In an integration service environment (ISE), these managed connectors also have [ISE versions](introduction.md#ise-and-connectors), which have different capabilities than their multitenant versions: > [!NOTE] > For more information, see these topics: [azure-key-vault-icon]: ./media/apis-list/azure-key-vault.png [azure-ml-icon]: ./media/apis-list/azure-ml.png [azure-monitor-logs-icon]: ./media/apis-list/azure-monitor-logs.png-[azure-queues-icon]: ./media/apis-list/azure-queues.png +[azure-queues-icon]: ./media/apis-list/azure-queue-storage.png [azure-resource-manager-icon]: ./media/apis-list/azure-resource-manager.png [azure-service-bus-icon]: ./media/apis-list/azure-service-bus.png [azure-sql-data-warehouse-icon]: ./media/apis-list/azure-sql-data-warehouse.png |
container-apps | Scale App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md | If you define more than one scale rule, the container app begins to scale once t ## HTTP -With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales. [Container Apps jobs](jobs.md) don't support HTTP scaling rules. +With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales. Every 15 seconds, the number of concurrent requests is calculated as the number of requests in the past 15 seconds divided by 15. [Container Apps jobs](jobs.md) don't support HTTP scaling rules. In the following example, the revision scales out up to five replicas and can scale in to zero. The scaling property is set to 100 concurrent requests per second. az containerapp create \ ## TCP -With a TCP scaling rule, you have control over the threshold of concurrent TCP connections that determines how your app scales. [Container Apps jobs](jobs.md) don't support TCP scaling rules. +With a TCP scaling rule, you have control over the threshold of concurrent TCP connections that determines how your app scales. Every 15 seconds, the number of concurrent connections is calculated as the number of connections in the past 15 seconds divided by 15. [Container Apps jobs](jobs.md) don't support TCP scaling rules. In the following example, the container app revision scales out up to five replicas and can scale in to zero. The scaling threshold is set to 100 concurrent connections per second. |
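The added sentences above describe a simple ratio: every 15 seconds, concurrency is estimated as the number of requests (or connections) in the past 15 seconds divided by 15. The following is a minimal Python sketch of that arithmetic, using a hypothetical request count and the five-replica, 100-concurrency values from the example; Azure Container Apps performs this calculation for you, so the scaling decision shown here is only a simplified model.

```python
import math

def concurrent_requests(requests_in_last_15s: int) -> float:
    # Concurrency is estimated every 15 seconds as requests / 15.
    return requests_in_last_15s / 15

def desired_replicas(requests_in_last_15s: int,
                     concurrency_threshold: int = 100,
                     max_replicas: int = 5) -> int:
    # Simplified model: scale out so no replica exceeds the configured
    # concurrency threshold, capped at the rule's maximum replica count.
    # With no traffic, the revision can scale in to zero.
    concurrency = concurrent_requests(requests_in_last_15s)
    return min(max_replicas, math.ceil(concurrency / concurrency_threshold))

print(desired_replicas(4500))  # 4,500 requests in 15 s -> concurrency 300 -> 3 replicas
print(desired_replicas(0))     # no traffic -> 0 replicas
```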
container-apps | Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/services.md | For more information on the service commands and arguments, see the - Add-ons are in public preview. - Any container app created before May 23, 2023 isn't eligible to use add-ons. - Add-ons come with minimal guarantees. For instance, they're automatically restarted if they crash, however there's no formal quality of service or high-availability guarantees associated with them. For production workloads, use Azure-managed services.+- If you use your own VNET, you must use a workload profiles environment. The Add-ons feature is not supported in Consumption-only environments that use custom VNETs. ## Next steps |
cosmos-db | Get Started Change Data Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-started-change-data-capture.md | First, create a straightforward [Azure Blob Storage](../storage/blobs/index.yml) :::image type="content" source="media/get-started-change-data-capture/sink-container-name.png" alt-text="Screenshot of the blob container named output set as the sink target."::: -1. Locate the **Update method** section and change the selections to only allow **delete** and **update** operations. Also, specify the **Key columns** as a **List of columns** using the field `_{rid}` as the unique identifier. +1. Locate the **Update method** section and change the selections to only allow **delete** and **update** operations. Also, specify the **Key columns** as a **List of columns** using the field `{_rid}` as the unique identifier. :::image type="content" source="media/get-started-change-data-capture/sink-methods-columns.png" alt-text="Screenshot of update methods and key columns being specified for the sink."::: |
cosmos-db | How To Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-private-link.md | After you create the private endpoint, you can integrate it with a private DNS z ```azurecli-interactive #Zone name differs based on the API type and group ID you are using. -zoneName="privatelink.mongocluster.azure.com" +zoneName="privatelink.mongocluster.cosmos.azure.com" az network private-dns zone create \ --resource-group $ResourceGroupName \ |
cost-management-billing | Migrate Ea Reporting Arm Apis Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reporting-arm-apis-overview.md | The following information describes the differences between the older Azure Ente | Use | Azure Enterprise Reporting APIs | Microsoft Cost Management APIs | | | | |-| Authentication | API key provisioned in the Enterprise Agreement (EA) portal | Microsoft Entra authentication using user tokens or service principals. Service principals take the place of API keys. | +| Authentication | API key provisioned in the [Azure portal](../manage/enterprise-rest-apis.md#api-key-generation) | Microsoft Entra authentication using user tokens or service principals. Service principals take the place of API keys. | | Scopes and permissions | All requests are at the enrollment scope. API Key permission assignments will determine whether data for the entire enrollment, a department, or a specific account is returned. No user authentication. | Users or service principals are assigned access to the enrollment, department, or account scope. | | URI Endpoint | `https://consumption.azure.com` | `https://management.azure.com` | | Development status | In maintenance mode. On the path to deprecation. | In active development | |
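Because the newer APIs replace EA API keys with Microsoft Entra authentication, callers typically acquire a token with a service principal and send it as a bearer token to `https://management.azure.com`. The following is an illustrative sketch only; the billing-account scope string, API version, and query body are assumptions chosen for the example, not values taken from the article.

```python
import requests
from azure.identity import ClientSecretCredential

# Service principal credentials (placeholders) take the place of an EA API key.
credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-id>",
    client_secret="<client-secret>",
)
token = credential.get_token("https://management.azure.com/.default").token

# Assumed enrollment (billing account) scope and API version for illustration.
scope = "providers/Microsoft.Billing/billingAccounts/<billing-account-id>"
url = (f"https://management.azure.com/{scope}"
       "/providers/Microsoft.CostManagement/query?api-version=2021-10-01")

body = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "Daily",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
    },
}

response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json())
```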
cost-management-billing | Analyze Cost Data Azure Cost Management Power Bi Template App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md | The following reports are available in the app. - Azure Marketplace charges - Overages and total charges -The Billing account overview page might show costs that differ from costs shown in the EA portal. >[!NOTE] >The **Select date range** selector doesn't affect or change overview tiles. Instead, the overview tiles show the costs for the current billing month. This behavior is intentional. Here's how values in the overview tiles are calculated. - The value shown in the **New purchase amount** tile is calculated as the sum of `newPurchases`. - The value shown in the **Total charges** tile is calculated as the sum of (`adjustments` + `ServiceOverage` + `chargesBilledseparately` + `azureMarketplaceServiceCharges`). -The EA portal doesn't show the Total charges column. The Power BI template app includes Adjustments, Service Overage, Charges billed separately, and Azure Marketplace service charges as Total charges. - -The Prepayment Usage shown in the EA portal isn't available in the Template app as part of the total charges. +The Power BI template app includes Adjustments, Service Overage, Charges billed separately, and Azure Marketplace service charges as Total charges. **Usage by Subscriptions and Resource Groups** - Provides a cost over time view and charts showing cost by subscription and resource group. |
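A small sketch of the tile arithmetic from that note, using hypothetical values for the named fields to show how the **New purchase amount** and **Total charges** tiles are derived.

```python
# Hypothetical monthly summary values; only the field names come from the note above.
summary = {
    "newPurchases": 1200.00,
    "adjustments": -50.00,
    "ServiceOverage": 300.00,
    "chargesBilledseparately": 75.00,
    "azureMarketplaceServiceCharges": 25.00,
}

new_purchase_amount = summary["newPurchases"]
total_charges = (
    summary["adjustments"]
    + summary["ServiceOverage"]
    + summary["chargesBilledseparately"]
    + summary["azureMarketplaceServiceCharges"]
)

print(new_purchase_amount)  # 1200.0
print(total_charges)        # 350.0
```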
cost-management-billing | Assign Access Acm Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/assign-access-acm-data.md | Title: Assign access to Cost Management data -description: This article walks you though assigning permission to Cost Management data for various access scopes. +description: This article walks you through assigning permission to Cost Management data for various access scopes. Previously updated : 05/12/2023 Last updated : 02/13/2024 -For users with Azure Enterprise agreements, a combination of permissions granted in the Azure portal and the Enterprise (EA) portal define a user's level of access to Cost Management data. For users with other Azure account types, defining a user's level |