Updates from: 02/17/2024 02:14:39
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
Title: Enable Azure AD B2C custom domains
+ Title: Enable custom domains in Azure Active Directory B2C
-description: Learn how to enable custom domains in your redirect URLs for Azure Active Directory B2C.
+description: Learn how to enable custom domains in your redirect URLs for Azure Active Directory B2C, so that your users have a seamless experience.
Previously updated : 01/26/2024 Last updated : 02/14/2024
zone_pivot_groups: b2c-policy-type
#Customer intent: As a developer, I want to use my own domain name for the sign-in and sign-up experience, so that my users have a seamless experience.
-# Enable custom domains for Azure Active Directory B2C
+# Enable custom domains in Azure Active Directory B2C
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
When using custom domains, consider the following:
- You can set up multiple custom domains. For the maximum number of supported custom domains, see [Microsoft Entra service limits and restrictions](/entra/identity/users/directory-service-limits-restrictions) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-front-door-classic-limits) for Azure Front Door.
- Azure Front Door is a separate Azure service, so extra charges will be incurred. For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor).
-- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com* (unless you're using a custom policy and you [block access](#optional-block-access-to-the-default-domain-name)).
-- If you have multiple applications, migrate them all to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.
+- If you have multiple applications, migrate all of them to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used.
+- After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com*. You need to block access to the default domain so that attackers can't use it to access your apps or run distributed denial-of-service (DDoS) attacks. [Submit a support ticket](find-help-open-support-ticket.md) to request that access to the default domain be blocked.
+
+> [!WARNING]
+> Don't request blocking of the default domain until your custom domain works properly.
+ ## Prerequisites
ai-services Build Enrollment App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/build-enrollment-app.md
- ignite-2023 Previously updated : 11/17/2020 Last updated : 02/14/2024 # Build a React Native app to add users to a Face service
-This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and acquire high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identification, attendance tracking, or personalization kiosk, based on their face data.
+This guide shows you how to get started with a sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users to a face recognition service and acquiring high-quality face data. An integrated system could use an app like this to provide touchless access control, identification, attendance tracking, or a personalization kiosk, based on users' face data.
-When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera.
+When users launch the app, it shows a detailed consent screen. If the user gives consent, the app prompts them for a username and password and then captures a high-quality face image using the device's camera.
-The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android and iOS devices; more deployment options are coming in the future.
+The sample app is written using JavaScript and the React Native framework. It can be deployed on Android and iOS devices.
## Prerequisites
The sample app is written using JavaScript and the React Native framework. It ca
* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
* You'll need the key and endpoint from the resource you created to connect your application to Face API.
-### Important Security Considerations
-* For local development and initial limited testing, it is acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely - which likely involves using an intermediate service to validate a user token generated during login.
-* Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones.
-* As a best practice, consider having separate API keys for development and production.
+
+> [!IMPORTANT]
+> **Security considerations**
+>
+> * For local development and initial limited testing, it is acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely - which likely involves using an intermediate service to validate a user token generated during login.
+> * Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones.
+> * As a best practice, consider having separate API keys for development and production.
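
As a minimal sketch of the environment-variable pattern for local development (shown in C# for consistency with the other samples in this set; the variable names `FACE_APIKEY` and `FACE_ENDPOINT` are hypothetical, and production deployments should use a secure secret store instead):

```csharp
using System;
using Microsoft.Azure.CognitiveServices.Vision.Face;

// Read the key and endpoint from environment variables instead of hard-coding them.
// FACE_APIKEY and FACE_ENDPOINT are hypothetical names; use whatever your deployment defines.
string key = Environment.GetEnvironmentVariable("FACE_APIKEY");
string endpoint = Environment.GetEnvironmentVariable("FACE_ENDPOINT");

IFaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials(key))
{
    Endpoint = endpoint
};
```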
## Set up the development environment
The sample app is written using JavaScript and the React Native framework. It ca
## Customize the app for your business
-Now that you have set up the sample app, you can tailor it to your own needs.
+Now that you've set up the sample app, you can tailor it to your own needs.
For example, you may want to add situation-specific information on your consent page:
For example, you may want to add situation-specific information on your consent
 * Face size (faces that are distant from the camera)
 * Face orientation (faces turned or tilted away from camera)
 * Poor lighting conditions (either low light or backlighting) where the image may be poorly exposed or have too much noise
- * Occlusion (partially hidden or obstructed faces) including accessories like hats or thick-rimmed glasses)
+ * Occlusion (partially hidden or obstructed faces), including accessories like hats or thick-rimmed glasses
 * Blur (such as by rapid face movement when the photograph was taken)

The service provides image quality checks to help you decide whether an image is of sufficient quality, based on the above factors, to add the customer or attempt face recognition. This app demonstrates how to access frames from the device's camera, detect quality, show user interface messages to help the user capture a higher-quality image, select the highest-quality frames, and add the detected face to the Face API service.
ai-services Storage Lab Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/storage-lab-tutorial.md
Previously updated : 12/29/2022 Last updated : 02/14/2024 ms.devlang: csharp
Next, you'll add the code that actually uses the Azure AI Vision service to crea
1. Open the *HomeController.cs* file in the project's **Controllers** folder and add the following `using` statements at the top of the file: ```csharp
- using Azure.AI.Vision.Common;
+ using Azure;
using Azure.AI.Vision.ImageAnalysis;
+ using System;
``` 1. Then, go to the **Upload** method; this method converts and uploads images to blob storage. Add the following code immediately after the block that begins with `// Generate a thumbnail` (or at the end of your image-blob-creation process). This code takes the blob containing the image (`photo`), and uses Azure AI Vision to generate a description for that image. The Azure AI Vision API also generates a list of keywords that apply to the image. The generated description and keywords are stored in the blob's metadata so that they can be retrieved later on. ```csharp
- // Submit the image to the Azure AI Vision API
- var serviceOptions = new VisionServiceOptions(
- Environment.GetEnvironmentVariable(ConfigurationManager.AppSettings["VisionEndpoint"]),
+ // create a new ImageAnalysisClient
+ ImageAnalysisClient client = new ImageAnalysisClient(
+ new Uri(Environment.GetEnvironmentVariable(ConfigurationManager.AppSettings["VisionEndpoint"])),
new AzureKeyCredential(ConfigurationManager.AppSettings["SubscriptionKey"]));
- var analysisOptions = new ImageAnalysisOptions()
- {
- Features = ImageAnalysisFeature.Caption | ImageAnalysisFeature.Tags,
- Language = "en",
- GenderNeutralCaption = true
- };
+ VisualFeatures visualFeatures = VisualFeatures.Caption | VisualFeatures.Tags;
- using var imageSource = VisionSource.FromUrl(
- new Uri(photo.Uri.ToString()));
+ ImageAnalysisOptions analysisOptions = new ImageAnalysisOptions()
+ {
+ GenderNeutralCaption = true,
+ Language = "en",
+ };
- using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions);
- var result = analyzer.Analyze();
+ Uri imageURL = new Uri(photo.Uri.ToString());
+
+ ImageAnalysisResult result = client.Analyze(imageURL, visualFeatures, analysisOptions);
// Record the image description and tags in blob metadata
- photo.Metadata.Add("Caption", result.Caption.ContentCaption.Content);
+ photo.Metadata.Add("Caption", result.Caption.Text);
- for (int i = 0; i < result.Tags.ContentTags.Count; i++)
+ for (int i = 0; i < result.Tags.Values.Count; i++)
{ string key = String.Format("Tag{0}", i);
- photo.Metadata.Add(key, result.Tags.ContentTags[i]);
+ photo.Metadata.Add(key, result.Tags.Values[i].Name);
} await photo.SetMetadataAsync();
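
Later, when a gallery or search page needs the description and keywords, they can be read back from the blob's metadata. A minimal sketch, assuming `photo` is the same `CloudBlockBlob` used in the upload code above:

```csharp
// Re-fetch the blob's attributes so the Metadata collection is populated.
await photo.FetchAttributesAsync();

string caption;
if (photo.Metadata.TryGetValue("Caption", out caption))
{
    Console.WriteLine($"Caption: {caption}");
}

// Tags were stored as Tag0, Tag1, ... during upload; stop at the first missing key.
for (int i = 0; ; i++)
{
    string tagKey = "Tag" + i;
    string tagValue;
    if (!photo.Metadata.TryGetValue(tagKey, out tagValue))
    {
        break;
    }
    Console.WriteLine($"{tagKey}: {tagValue}");
}
```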
In this section, you will add a search box to the home page, enabling users to d
} ```
- Observe that the **Index** method now accepts a parameter _id_ that contains the value the user typed into the search box. An empty or missing _id_ parameter indicates that all the photos should be displayed.
+ Observe that the **Index** method now accepts a parameter `id` that contains the value the user typed into the search box. An empty or missing `id` parameter indicates that all the photos should be displayed.
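
As a rough illustration of the filtering idea (a hypothetical helper, not the tutorial's exact code), the search term can be matched against the `Tag0`, `Tag1`, ... metadata written during upload, assuming the `CloudBlockBlob` type from the storage library used earlier:

```csharp
// Hypothetical helper: returns true if any TagN metadata value on the blob
// contains the search term. An empty or missing term matches everything.
private static bool BlobMatchesSearchTerm(CloudBlockBlob blob, string searchTerm)
{
    if (String.IsNullOrEmpty(searchTerm))
    {
        return true;
    }

    foreach (var item in blob.Metadata)
    {
        if (item.Key.StartsWith("Tag", StringComparison.OrdinalIgnoreCase) &&
            item.Value.IndexOf(searchTerm, StringComparison.OrdinalIgnoreCase) >= 0)
        {
            return true;
        }
    }

    return false;
}
```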
1. Add the following helper method to the **HomeController** class:
ai-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-face-detection.md
Attributes are a set of features that can optionally be detected by the [Face -
>[!NOTE]
> The availability of each attribute depends on the detection model specified. The QualityForRecognition attribute also depends on the recognition model, as it is currently only available when using a combination of detection model detection_01 or detection_03, and recognition model recognition_03 or recognition_04.
-## Input data
+## Input requirements
Use the following tips to make sure that your input images give the most accurate detection results:
ai-services Concept Face Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-face-recognition.md
- ignite-2023 Previously updated : 12/27/2022 Last updated : 02/14/2024 # Face recognition
-This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the act of verifying or identifying individuals by their faces. Face recognition is important in implementing the identification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.
+This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the process of verifying or identifying individuals by their faces. Face recognition is important in implementing the identification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.
You can try out the capabilities of face recognition quickly and easily using Vision Studio.

> [!div class="nextstepaction"]
The recognition operations use mainly the following data structures. These objec
See the [Face recognition data structures](./concept-face-recognition-data-structures.md) guide.
-## Input data
+## Input requirements
Use the following tips to ensure that your input images give the most accurate recognition results:
ai-services Concept Shelf Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-shelf-analysis.md
Previously updated : 05/03/2023 Last updated : 02/14/2024
Try out the capabilities of Product Recognition quickly and easily in your brows
## Product Recognition features
-### Shelf Image Composition
+### Shelf image composition
The [stitching and rectification APIs](./how-to/shelf-modify-images.md) let you modify images to improve the accuracy of the Product Understanding results. You can use these APIs to:

* Stitch together multiple images of a shelf to create a single image.
* Rectify an image to remove perspective distortion.
-### Shelf Product Recognition (pretrained model)
+### Shelf product recognition (pretrained model)
The [Product Understanding API](./how-to/shelf-analyze.md) lets you analyze a shelf image using the out-of-box pretrained model. This operation detects products and gaps in the shelf image and returns the bounding box coordinates of each product and gap, along with a confidence score for each.
The following JSON response illustrates what the Product Understanding API retur
} ```
-### Shelf Product Recognition - Custom (customized model)
+### Shelf product recognition (customized model)
The Product Understanding API can also be used with a [custom trained model](./how-to/shelf-model-customization.md) to detect your specific products. This operation returns the bounding box coordinates of each product and gap, along with the label of each product.
The following JSON response illustrates what the Product Understanding API retur
} ```
-### Shelf Planogram Compliance (preview)
+### Shelf planogram compliance
The [Planogram matching API](./how-to/shelf-planogram.md) lets you compare the results of the Product Understanding API to a planogram document. This operation matches each detected product and gap to its corresponding position in the planogram document.
It returns a JSON response that accounts for each position in the planogram docu
Get started with Product Recognition by trying out the stitching and rectification APIs. Then do basic analysis with the Product Understanding API.

* [Prepare images for Product Recognition](./how-to/shelf-modify-images.md)
* [Analyze a shelf image](./how-to/shelf-analyze.md)
+* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0b)
ai-services Enrollment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/enrollment-overview.md
Previously updated : 09/27/2021 Last updated : 02/14/2024 # Best practices for adding users to a Face service
-In order to use the Azure AI Face API for face verification or identification, you need to enroll faces into a **LargePersonGroup** or similar data structure. This deep-dive demonstrates best practices for gathering meaningful consent from users and example logic to create high-quality enrollments that will optimize recognition accuracy.
+In order to use the Azure AI Face API for face verification or identification, you need to enroll faces into a **LargePersonGroup** or similar [data structure](/azure/ai-services/computer-vision/concept-face-recognition-data-structures). This deep-dive demonstrates best practices for gathering meaningful consent from users and example logic to create high-quality enrollments that will optimize recognition accuracy.
## Meaningful consent
Based on Microsoft user research, Microsoft's Responsible AI principles, and [ex
This section offers guidance for developing an enrollment application for facial recognition. This guidance has been developed based on Microsoft user research in the context of enrolling individuals in facial recognition for building entry. Therefore, these recommendations might not apply to all facial recognition solutions. Responsible use for Face API depends strongly on the specific context in which it's integrated, so the prioritization and application of these recommendations should be adapted to your scenario.
-> [!NOTE]
+> [!IMPORTANT]
> It is your responsibility to align your enrollment application with applicable legal requirements in your jurisdiction and accurately reflect all of your data collection and processing practices. ## Application development
Before you design an enrollment flow, think about how the application you're bui
|Category | Recommendations |
|---|---|
|Hardware | Consider the camera quality of the enrollment device. |
-|Recommended enrollment features | Include a log-on step with multi-factor authentication. </br></br>Link user information like an alias or identification number with their face template ID from the Face API (known as person ID). This mapping is necessary to retrieve and manage a user's enrollment. Note: person ID should be treated as a secret in the application.</br></br>Set up an automated process to delete all enrollment data, including the face templates and enrollment photos of people who are no longer users of facial recognition technology, such as former employees. </br></br>Avoid auto-enrollment, as it does not give the user the awareness, understanding, freedom of choice, or control that is recommended for obtaining consent. </br></br>Ask users for permission to save the images used for enrollment. This is useful when there is a model update since new enrollment photos will be required to re-enroll in the new model about every 10 months. If the original images aren't saved, users will need to go through the enrollment process from the beginning.</br></br>Allow users to opt out of storing photos in the system. To make the choice clearer, you can add a second consent request screen for saving the enrollment photos. </br></br>If photos are saved, create an automated process to re-enroll all users when there is a model update. Users who saved their enrollment photos will not have to enroll themselves again. </br></br>Create an app feature that allows designated administrators to override certain quality filters if a user has trouble enrolling. |
+|Recommended enrollment features | Include a log-on step with multifactor authentication. </br></br>Link user information like an alias or identification number with their face template ID from the Face API (known as person ID). This mapping is necessary to retrieve and manage a user's enrollment. Note: person ID should be treated as a secret in the application.</br></br>Set up an automated process to delete all enrollment data, including the face templates and enrollment photos of people who are no longer users of facial recognition technology, such as former employees. </br></br>Avoid auto-enrollment, as it does not give the user the awareness, understanding, freedom of choice, or control that is recommended for obtaining consent. </br></br>Ask users for permission to save the images used for enrollment. This is useful when there is a model update since new enrollment photos will be required to re-enroll in the new model about every 10 months. If the original images aren't saved, users will need to go through the enrollment process from the beginning.</br></br>Allow users to opt out of storing photos in the system. To make the choice clearer, you can add a second consent request screen for saving the enrollment photos. </br></br>If photos are saved, create an automated process to re-enroll all users when there is a model update. Users who saved their enrollment photos will not have to enroll themselves again. </br></br>Create an app feature that allows designated administrators to override certain quality filters if a user has trouble enrolling. |
|Security | Azure AI services follow [best practices](../cognitive-services-virtual-networks.md?tabs=portal) for encrypting user data at rest and in transit. The following are other practices that can help uphold the security promises you make to users during the enrollment experience. </br></br>Take security measures to ensure that no one has access to the person ID at any point during enrollment. Note: PersonID should be treated as a secret in the enrollment system. </br></br>Use [role-based access control](../../role-based-access-control/overview.md) with Azure AI services. </br></br>Use token-based authentication and/or shared access signatures (SAS) over keys and secrets to access resources like databases. By using request or SAS tokens, you can grant limited access to data without compromising your account keys, and you can specify an expiry time on the token. </br></br>Never store any secrets, keys, or passwords in your app. |
|User privacy |Provide a range of enrollment options to address different levels of privacy concerns. Do not mandate that people use their personal devices to enroll into a facial recognition system. </br></br>Allow users to re-enroll, revoke consent, and delete data from the enrollment application at any time and for any reason. |
|Accessibility |Follow accessibility standards (for example, [ADA](https://www.ada.gov/regs2010/2010ADAStandards/2010ADAstandards.htm) or [W3C](https://www.w3.org/TR/WCAG21/)) to ensure the application is usable by people with mobility or visual impairments. |
ai-services Add Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/add-faces.md
Title: "Example: Add faces to a PersonGroup - Face"
description: This guide demonstrates how to add a large number of persons and faces to a PersonGroup object with the Azure AI Face service. #-+ Previously updated : 04/10/2019- Last updated : 02/14/2024+ ms.devlang: csharp
[!INCLUDE [Gate notice](../includes/identity-gate-notice.md)]
-This guide demonstrates how to add a large number of persons and faces to a PersonGroup object. The same strategy also applies to LargePersonGroup, FaceList, and LargeFaceList objects. This sample is written in C# by using the Azure AI Face .NET client library.
+This guide demonstrates how to add a large number of persons and faces to a **PersonGroup** object. The same strategy also applies to **LargePersonGroup**, **FaceList**, and **LargeFaceList** objects. This sample is written in C# and uses the Azure AI Face .NET client library.
-## Step 1: Initialization
+## Initialization
-The following code declares several variables and implements a helper function to schedule the face add requests:
+The following code declares several variables and implements a helper function to schedule the **face add** requests:
- `PersonCount` is the total number of persons.
- `CallLimitPerSecond` is the maximum calls per second according to the subscription tier.
static async Task WaitCallLimitPerSecondAsync()
} ```
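
The body of that helper is elided in this excerpt. A minimal sketch of such a scheduler (an illustration, not necessarily the article's exact code) could look like the following, assuming `_timeStampQueue` is a `ConcurrentQueue<DateTime>` that callers enqueue into just before each request, as shown later in this guide:

```csharp
// Requires: using System; using System.Collections.Concurrent; using System.Threading.Tasks;
const int CallLimitPerSecond = 10; // adjust to your subscription tier
static readonly ConcurrentQueue<DateTime> _timeStampQueue = new ConcurrentQueue<DateTime>();

static async Task WaitCallLimitPerSecondAsync()
{
    while (true)
    {
        // Discard timestamps that are older than one second.
        DateTime oldest;
        while (_timeStampQueue.TryPeek(out oldest) && (DateTime.UtcNow - oldest) > TimeSpan.FromSeconds(1))
        {
            _timeStampQueue.TryDequeue(out oldest);
        }

        // Proceed once one more call stays within the per-second limit.
        if (_timeStampQueue.Count < CallLimitPerSecond)
        {
            return;
        }

        await Task.Delay(50);
    }
}
```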
-## Step 2: Authorize the API call
+## Authorize the API call
When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object.
-## Step 3: Create the PersonGroup
+## Create the PersonGroup
-A PersonGroup named "MyPersonGroup" is created to save the persons.
-The request time is enqueued to `_timeStampQueue` to ensure the overall validation.
+This code creates a **PersonGroup** named `"MyPersonGroup"` to save the persons. The request time is enqueued to `_timeStampQueue` to ensure the overall validation.
```csharp const string personGroupId = "mypersongroupid";
_timeStampQueue.Enqueue(DateTime.UtcNow);
await faceClient.LargePersonGroup.CreateAsync(personGroupId, personGroupName); ```
-## Step 4: Create the persons for the PersonGroup
+## Create the persons for the PersonGroup
-Persons are created concurrently, and `await WaitCallLimitPerSecondAsync()` is also applied to avoid exceeding the call limit.
+This code creates **Persons** concurrently, and uses `await WaitCallLimitPerSecondAsync()` to avoid exceeding the call rate limit.
```csharp Person[] persons = new Person[PersonCount];
Parallel.For(0, PersonCount, async i =>
}); ```
-## Step 5: Add faces to the persons
+## Add faces to the persons
-Faces added to different persons are processed concurrently. Faces added for one specific person are processed sequentially.
-Again, `await WaitCallLimitPerSecondAsync()` is invoked to ensure that the request frequency is within the scope of limitation.
+Faces added to different persons are processed concurrently. Faces added for one specific person are processed sequentially. Again, `await WaitCallLimitPerSecondAsync()` is invoked to ensure that the request frequency is within the scope of limitation.
```csharp Parallel.For(0, PersonCount, async i =>
Parallel.For(0, PersonCount, async i =>
In this guide, you learned the process of creating a PersonGroup with a massive number of persons and faces. Several reminders:

-- This strategy also applies to FaceLists and LargePersonGroups.
-- Adding or deleting faces to different FaceLists or persons in LargePersonGroups are processed concurrently.
-- Adding or deleting faces to one specific FaceList or person in a LargePersonGroup are done sequentially.
-- For simplicity, how to handle a potential exception is omitted in this guide. If you want to enhance more robustness, apply the proper retry policy.
+- This strategy also applies to **FaceLists** and **LargePersonGroups**.
+- Adding or deleting faces to different **FaceLists** or persons in **LargePersonGroups** are processed concurrently.
+- Adding or deleting faces to one specific **FaceList** or person in a **LargePersonGroup** is done sequentially.
-The following features were explained and demonstrated:
-
-- Create PersonGroups by using the [PersonGroup - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244) API.
-- Create persons by using the [PersonGroup Person - Create](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c) API.
-- Add faces to persons by using the [PersonGroup Person - Add Face](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523b) API.

## Next steps
-In this guide, you learned how to add face data to a **PersonGroup**. Next, learn how to use the enhanced data structure **PersonDirectory** to do more with your face data.
+Next, learn how to use the enhanced data structure **PersonDirectory** to do more with your face data.
- [Use the PersonDirectory structure (preview)](use-persondirectory.md)
ai-services Find Similar Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/find-similar-faces.md
Previously updated : 11/07/2022 Last updated : 02/14/2024
ai-services Identity Detect Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-detect-faces.md
Previously updated : 12/27/2022 Last updated : 02/14/2024 ms.devlang: csharp
[!INCLUDE [Sensitive attributes notice](../includes/identity-sensitive-attributes.md)]
-This guide demonstrates how to use the face detection API to extract attributes like age, emotion, or head pose from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs.
+This guide demonstrates how to use the face detection API to extract attributes from a given image. You'll learn the different ways to configure the behavior of this API to meet your needs.
The code snippets in this guide are written in C# by using the Azure AI Face client library. The same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
ai-services Mitigate Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/mitigate-latency.md
If the image files you use are large, it affects the response time of the Face s
The quality of the input images affects both the accuracy and the latency of the Face service. Images with lower quality may result in erroneous results. Images of higher quality may enable more precise interpretations. However, images of higher quality also increase the network latency due to their larger file sizes. The service requires more time to receive the entire file from the client and to process it, in proportion to the file size. Above a certain level, further quality enhancements won't significantly improve the accuracy. To achieve the optimal balance between accuracy and speed, follow these tips to optimize your input data.

-- For face detection and recognition operations, see [input data for face detection](../concept-face-detection.md#input-data) and [input data for face recognition](../concept-face-recognition.md#input-data).
+- For face detection and recognition operations, see [input data for face detection](../concept-face-detection.md#input-requirements) and [input data for face recognition](../concept-face-recognition.md#input-requirements).
- For liveness detection, see the [tutorial](../Tutorials/liveness.md#select-a-good-reference-image).

#### Other file size tips
ai-services Shelf Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-analyze.md
Previously updated : 04/26/2023 Last updated : 02/14/2024
In this guide, you learned how to make a basic analysis call using the pretraine
> [Train a custom model for Product Recognition](../how-to/shelf-model-customization.md)

* [Image Analysis overview](../overview-image-analysis.md)
+* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0b)
ai-services Shelf Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-model-customization.md
Previously updated : 05/02/2023 Last updated : 02/14/2024
-# Shelf Product Recognition - Custom Model (preview)
+# Shelf product recognition - custom model (preview)
You can train a custom model to recognize specific retail products for use in a Product Recognition scenario. The out-of-box [Analyze](shelf-analyze.md) operation doesn't differentiate between products, but you can build this capability into your app through custom labeling and training.
When you go through the labeling workflow, create labels for each of the product
## Analyze shelves with a custom model
-When your custom model is trained and ready (you've completed the steps in the [Model customization guide](./model-customization.md)), you can use it through the Shelf Analyze operation. Set the _PRODUCT_CLASSIFIER_MODEL_ URL parameter to the name of your custom model (the _ModelName_ value you used in the creation step).
+When your custom model is trained and ready (you've completed the steps in the [Model customization guide](./model-customization.md)), you can use it through the Shelf Analyze operation.
The API call will look like this: ```bash
-curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/ms-pretrained-product-detection/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{
+curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/<your_model_name>/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{
'url':'<your_url_string>' }" ```
curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: app
1. Make the following changes in the command where needed:
 1. Replace the `<subscriptionKey>` with your Vision resource key.
 1. Replace the `<endpoint>` with your Vision resource endpoint. For example: `https://YourResourceName.cognitiveservices.azure.com`.
- 2. Replace the `<your_run_name>` with your unique test run name for the task queue. It is an async API task queue name for you to be able retrieve the API response later. For example, `.../runs/test1?api-version...`
+ 1. Replace `<your_model_name>` with the name of your custom model (the _ModelName_ value you used in the creation step).
+ 1. Replace `<your_run_name>` with your unique test run name for the task queue. It's an async API task queue name that you can use to retrieve the API response later. For example, `.../runs/test1?api-version...`
 1. Replace the `<your_url_string>` contents with the blob URL of the image.
1. Open a command prompt window.
1. Paste your edited `curl` command from the text editor into the command prompt window, and then run the command.
In this guide, you learned how to use a custom Product Recognition model to bett
> [Planogram matching](shelf-planogram.md)

* [Image Analysis overview](../overview-image-analysis.md)
+* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0b)
ai-services Shelf Planogram https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-planogram.md
Previously updated : 05/02/2023 Last updated : 02/12/2024
-# Shelf Planogram Compliance (preview)
+# Shelf planogram compliance (preview)
A planogram is a diagram that indicates the correct placement of retail products on shelves. The Planogram Compliance API lets you compare analysis results from a photo to the store's planogram input. It returns an account of all the positions in the planogram, and whether a product was found in each position.
The X and Y coordinates are relative to a top-left origin, and the width and hei
Quantities in the planogram schema are in nonspecific units. They can correspond to inches, centimeters, or any other unit of measurement. The matching algorithm calculates the relationship between the photo analysis units (pixels) and the planogram units.
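
As a simplified illustration of that unit conversion (the numbers here are hypothetical, and this isn't the service's actual matching algorithm):

```csharp
// Hypothetical example: the analyzed photo is 4000 px wide and the planogram
// describes a 100-unit-wide shelf, so one planogram unit is about 40 px.
double planogramWidthUnits = 100.0;
double imageWidthPixels = 4000.0;
double pixelsPerUnit = imageWidthPixels / planogramWidthUnits; // 40 px per unit

// Convert a detected bounding box's left edge from pixels to planogram units.
double detectedBoxLeftPixels = 950.0;
double leftInPlanogramUnits = detectedBoxLeftPixels / pixelsPerUnit; // 23.75 units
```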
-### Planogram API Model
+### Planogram API model
Describes the planogram for planogram matching operations.
Describes the planogram for planogram matching operations.
| `fixtures` | [FixtureApiModel](#fixture-api-model) | List of fixtures in the planogram. | Yes |
| `positions` | [PositionApiModel](#position-api-model) | List of positions in the planogram. | Yes |
-### Product API Model
+### Product API model
Describes a product in the planogram.
Describes a product in the planogram.
| `w` | double | Width of the product. | Yes |
| `h` | double | Height of the product. | Yes |
-### Fixture API Model
+### Fixture API model
Describes a fixture (shelf or similar hardware) in a planogram.
Describes a fixture (shelf or similar hardware) in a planogram.
| `x` | double | Left offset from the origin, in units of inches or centimeters. | Yes |
| `y` | double | Top offset from the origin, in units of inches or centimeters. | Yes |
-### Position API Model
+### Position API model
Describes a product's position in a planogram.
A successful response is returned in JSON, showing the products (or gaps) detect
} ```
-### Planogram Matching Position API Model
+### Planogram matching position API model
Paired planogram position ID and corresponding detected object from product understanding result.
Paired planogram position ID and corresponding detected object from product unde
## Next steps * [Image Analysis overview](../overview-image-analysis.md)
+* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0a)
ai-services Specify Detection Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/specify-detection-model.md
Title: How to specify a detection model - Face
description: This article will show you how to choose which face detection model to use with your Azure AI Face application. #-+ Previously updated : 03/05/2021- Last updated : 02/14/2024+ ms.devlang: csharp
You should be familiar with the concept of AI face detection. If you aren't, see
The different face detection models are optimized for different tasks. See the following table for an overview of the differences.
-|**detection_01** |**detection_02** |**detection_03**
-||||
-|Default choice for all face detection operations. | Released in May 2019 and available optionally in all face detection operations. | Released in February 2021 and available optionally in all face detection operations.
-|Not optimized for small, side-view, or blurry faces. | Improved accuracy on small, side-view, and blurry faces. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations.
-|Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Does not return face attributes. | Returns mask and head pose attributes if they're specified in the detect call.
-|Returns face landmarks if they're specified in the detect call. | Does not return face landmarks. | Returns face landmarks if they're specified in the detect call.
+
+| Model | Description | Performance notes | Attributes | Landmarks |
+|---|---|---|---|---|
+|**detection_01** | Default choice for all face detection operations. | Not optimized for small, side-view, or blurry faces. | Returns main face attributes (head pose, age, emotion, and so on) if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
+|**detection_02** | Released in May 2019 and available optionally in all face detection operations. | Improved accuracy on small, side-view, and blurry faces. | Does not return face attributes. | Does not return face landmarks. |
+|**detection_03** | Released in February 2021 and available optionally in all face detection operations. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. | Returns mask and head pose attributes if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
+ The best way to compare the performances of the detection models is to use them on a sample dataset. We recommend calling the [Face - Detect] API on a variety of images, especially images of many faces or of faces that are difficult to see, using each detection model. Pay attention to the number of faces that each model returns.
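
As a rough sketch of such a comparison (assuming an authenticated `IFaceClient` named `faceClient` and an accessible test image URL; this isn't part of the article's own sample):

```csharp
// Requires: using System; using System.Collections.Generic; using System.Threading.Tasks;
// using Microsoft.Azure.CognitiveServices.Vision.Face;
// using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

// Run the same image through each detection model and compare the face counts.
string[] detectionModels = { DetectionModel.Detection01, DetectionModel.Detection02, DetectionModel.Detection03 };

foreach (string model in detectionModels)
{
    IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(
        url: imageUrl,
        returnFaceId: false,
        detectionModel: model);

    Console.WriteLine($"{model}: {faces.Count} face(s) detected");
}
```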
ai-services Use Headpose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-headpose.md
Previously updated : 02/23/2021 Last updated : 02/14/2024 ms.devlang: csharp
From here, you can use the returned **Face** objects in your display. The follow
## Next steps
-See the [Azure AI Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app on GitHub for a working example of rotated face rectangles. Or, see the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples) app, which tracks the HeadPose attribute in real time to detect head movements.
+* See the [Azure AI Face WPF](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples/Cognitive-Services-Face-WPF) app on GitHub for a working example of rotated face rectangles.
+* Or, see the [Face HeadPose Sample](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/tree/master/app-samples) app, which tracks the HeadPose attribute in real time to detect head movements.
ai-services Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-large-scale.md
Previously updated : 05/01/2019 Last updated : 02/14/2024 ms.devlang: csharp
[!INCLUDE [Gate notice](../includes/identity-gate-notice.md)]
-This guide shows you how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects, respectively. PersonGroups can hold up to 1000 persons in the free tier and 10,000 in the paid tier, while LargePersonGroups can hold up to one million persons in the paid tier.
+This guide shows you how to scale up from existing **PersonGroup** and **FaceList** objects to **LargePersonGroup** and **LargeFaceList** objects, respectively. **PersonGroups** can hold up to 1000 persons in the free tier and 10,000 in the paid tier, while **LargePersonGroups** can hold up to one million persons in the paid tier.
> [!IMPORTANT] > The newer data structure **PersonDirectory** is recommended for new development. It can hold up to 75 million identities and does not require manual training. For more information, see the [PersonDirectory guide](./use-persondirectory.md).
-This guide demonstrates the migration process. It assumes a basic familiarity with PersonGroup and FaceList objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concept-face-recognition.md) conceptual guide.
+This guide demonstrates the migration process. It assumes a basic familiarity with **PersonGroup** and **FaceList** objects, the [Train](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/599ae2d16ac60f11b48b5aa4) operation, and the face recognition functions. To learn more about these subjects, see the [face recognition](../concept-face-recognition.md) conceptual guide.
-LargePersonGroup and LargeFaceList are collectively referred to as large-scale operations. LargePersonGroup can contain up to 1 million persons, each with a maximum of 248 faces. LargeFaceList can contain up to 1 million faces. The large-scale operations are similar to the conventional PersonGroup and FaceList but have some differences because of the new architecture.
+**LargePersonGroup** and **LargeFaceList** are collectively referred to as large-scale operations. **LargePersonGroup** can contain up to 1 million persons, each with a maximum of 248 faces. **LargeFaceList** can contain up to 1 million faces. The large-scale operations are similar to the conventional **PersonGroup** and **FaceList** but have some differences because of the new architecture.
The samples are written in C# by using the Azure AI Face client library. > [!NOTE]
-> To enable Face search performance for Identification and FindSimilar in large scale, introduce a Train operation to preprocess the LargeFaceList and LargePersonGroup. The training time varies from seconds to about half an hour based on the actual capacity. During the training period, it's possible to perform Identification and FindSimilar if a successful training operating was done before. The drawback is that the new added persons and faces don't appear in the result until a new post migration to large-scale training is completed.
+> To enable Face search performance for **Identification** and **FindSimilar** at large scale, introduce a **Train** operation to preprocess the **LargeFaceList** and **LargePersonGroup**. The training time varies from seconds to about half an hour based on the actual capacity. During the training period, it's possible to perform **Identification** and **FindSimilar** if a successful training operation was done before. The drawback is that newly added persons and faces don't appear in the results until a new post-migration large-scale training is completed.
## Step 1: Initialize the client object
When you use the Face client library, the key and subscription endpoint are pass
## Step 2: Code migration
-This section focuses on how to migrate PersonGroup or FaceList implementation to LargePersonGroup or LargeFaceList. Although LargePersonGroup or LargeFaceList differs from PersonGroup or FaceList in design and internal implementation, the API interfaces are similar for backward compatibility.
+This section focuses on how to migrate **PersonGroup** or **FaceList** implementation to **LargePersonGroup** or **LargeFaceList**. Although **LargePersonGroup** or **LargeFaceList** differs from **PersonGroup** or **FaceList** in design and internal implementation, the API interfaces are similar for backward compatibility.
-Data migration isn't supported. You re-create the LargePersonGroup or LargeFaceList instead.
+Data migration isn't supported. You re-create the **LargePersonGroup** or **LargeFaceList** instead.
### Migrate a PersonGroup to a LargePersonGroup
-Migration from a PersonGroup to a LargePersonGroup is simple. They share exactly the same group-level operations.
+Migration from a **PersonGroup** to a **LargePersonGroup** is simple. They share exactly the same group-level operations.
-For PersonGroup- or person-related implementation, it's necessary to change only the API paths or SDK class/module to LargePersonGroup and LargePersonGroup Person.
+For **PersonGroup** or person-related implementation, it's necessary to change only the API paths or SDK class/module to **LargePersonGroup** and **LargePersonGroup Person**.
-Add all of the faces and persons from the PersonGroup to the new LargePersonGroup. For more information, see [Add faces](add-faces.md).
+Add all of the faces and persons from the **PersonGroup** to the new **LargePersonGroup**. For more information, see [Add faces](add-faces.md).
### Migrate a FaceList to a LargeFaceList
-| FaceList APIs | LargeFaceList APIs |
+| **FaceList** APIs | **LargeFaceList** APIs |
|::|::|
| Create | Create |
| Delete | Delete |
Add all of the faces and persons from the PersonGroup to the new LargePersonGrou
| - | Train |
| - | Get Training Status |
-The preceding table is a comparison of list-level operations between FaceList and LargeFaceList. As is shown, LargeFaceList comes with new operations, Train and Get Training Status, when compared with FaceList. Training the LargeFaceList is a precondition of the
-[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) operation. Training isn't required for FaceList. The following snippet is a helper function to wait for the training of a LargeFaceList:
+The preceding table is a comparison of list-level operations between **FaceList** and **LargeFaceList**. As is shown, **LargeFaceList** comes with new operations, **Train** and **Get Training Status**, when compared with **FaceList**. Training the **LargeFaceList** is a precondition of the
+[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237) operation. Training isn't required for **FaceList**. The following snippet is a helper function to wait for the training of a **LargeFaceList**:
```csharp /// <summary>
private static async Task TrainLargeFaceList(
} ```
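
The helper's body is elided above. A sketch of such a wait-for-training helper, using the .NET Face client library (an illustration under the assumption that the client is passed in; not necessarily the article's exact code):

```csharp
// Requires: using System; using System.Threading.Tasks;
// using Microsoft.Azure.CognitiveServices.Vision.Face;
// using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
private static async Task TrainLargeFaceList(
    IFaceClient faceClient, string largeFaceListId, int timeIntervalInMilliseconds = 1000)
{
    // Kick off training, then poll until it finishes.
    await faceClient.LargeFaceList.TrainAsync(largeFaceListId);

    while (true)
    {
        await Task.Delay(timeIntervalInMilliseconds);
        TrainingStatus status = await faceClient.LargeFaceList.GetTrainingStatusAsync(largeFaceListId);

        if (status.Status == TrainingStatusType.Running)
        {
            continue;
        }
        if (status.Status == TrainingStatusType.Succeeded)
        {
            break;
        }

        throw new Exception($"Training of LargeFaceList '{largeFaceListId}' failed.");
    }
}
```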
-Previously, a typical use of FaceList with added faces and FindSimilar looked like the following:
+Previously, a typical use of **FaceList** with added faces and **FindSimilar** looked like the following:
```csharp // Create a FaceList.
using (Stream stream = File.OpenRead(QueryImagePath))
} ```
-When migrating it to LargeFaceList, it becomes the following:
+When migrating it to **LargeFaceList**, it becomes the following:
```csharp // Create a LargeFaceList.
using (Stream stream = File.OpenRead(QueryImagePath))
} ```
-As previously shown, the data management and the FindSimilar part are almost the same. The only exception is that a fresh preprocessing Train operation must complete in the LargeFaceList before FindSimilar works.
+As previously shown, the data management and the **FindSimilar** part are almost the same. The only exception is that a fresh preprocessing **Train** operation must complete in the **LargeFaceList** before **FindSimilar** works.
## Step 3: Train suggestions
-Although the Train operation speeds up [FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237)
-and [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239), the training time suffers, especially when coming to large scale. The estimated training time in different scales is listed in the following table.
+Although the **Train** operation speeds up **[FindSimilar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237)**
+and **[Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239)**, the training time suffers, especially at large scale. The estimated training time at different scales is listed in the following table.
| Scale for faces or persons | Estimated training time |
|::|::|
To better utilize the large-scale feature, we recommend the following strategies
### Step 3a: Customize time interval
-As is shown in `TrainLargeFaceList()`, there's a time interval in milliseconds to delay the infinite training status checking process. For LargeFaceList with more faces, using a larger interval reduces the call counts and cost. Customize the time interval according to the expected capacity of the LargeFaceList.
+As is shown in `TrainLargeFaceList()`, there's a time interval in milliseconds that delays each iteration of the training status polling loop. For a **LargeFaceList** with more faces, using a larger interval reduces the call count and cost. Customize the time interval according to the expected capacity of the **LargeFaceList**.
-The same strategy also applies to LargePersonGroup. For example, when you train a LargePersonGroup with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval.
+The same strategy also applies to **LargePersonGroup**. For example, when you train a **LargePersonGroup** with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval.
### Step 3b: Small-scale buffer
-Persons or faces in a LargePersonGroup or a LargeFaceList are searchable only after being trained. In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired.
+Persons or faces in a **LargePersonGroup** or a **LargeFaceList** are searchable only after being trained. In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired.
-To mitigate this problem, use an extra small-scale LargePersonGroup or LargeFaceList as a buffer only for the newly added entries. This buffer takes a shorter time to train because of the smaller size. The immediate search capability on this temporary buffer should work. Use this buffer in combination with training on the master LargePersonGroup or LargeFaceList by running the master training on a sparser interval. Examples are in the middle of the night and daily.
+To mitigate this problem, use an extra small-scale **LargePersonGroup** or **LargeFaceList** as a buffer only for the newly added entries. This buffer takes a shorter time to train because of the smaller size. The immediate search capability on this temporary buffer should work. Use this buffer in combination with training on the master **LargePersonGroup** or **LargeFaceList** by running the master training on a sparser interval, for example in the middle of the night or once a day.
An example workflow:
-1. Create a master LargePersonGroup or LargeFaceList, which is the master collection. Create a buffer LargePersonGroup or LargeFaceList, which is the buffer collection. The buffer collection is only for newly added persons or faces.
+1. Create a master **LargePersonGroup** or **LargeFaceList**, which is the master collection. Create a buffer **LargePersonGroup** or **LargeFaceList**, which is the buffer collection. The buffer collection is only for newly added persons or faces.
1. Add new persons or faces to both the master collection and the buffer collection.
1. Only train the buffer collection with a short time interval to ensure that the newly added entries take effect.
-1. Call Identification or FindSimilar against both the master collection and the buffer collection. Merge the results.
-1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the Train operation on the master collection.
-1. Delete the old buffer collection after the Train operation finishes on the master collection.
+1. Call **Identification** or **FindSimilar** against both the master collection and the buffer collection, and merge the results, as shown in the sketch after this list.
+1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the **Train** operation on the master collection.
+1. Delete the old buffer collection after the **Train** operation finishes on the master collection.
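
As a rough sketch of the identify-and-merge step in that workflow (the group IDs are hypothetical, `faceIds` would come from a prior detect call, and this isn't the article's own code):

```csharp
// Requires: using System; using System.Collections.Generic; using System.Linq;
// using System.Threading.Tasks; using Microsoft.Azure.CognitiveServices.Vision.Face;
// using Microsoft.Azure.CognitiveServices.Vision.Face.Models;
private static async Task<IDictionary<Guid, IdentifyCandidate>> IdentifyAgainstMasterAndBufferAsync(
    IFaceClient faceClient, IList<Guid> faceIds)
{
    // Query both collections; "master-group" and "buffer-group" are hypothetical IDs.
    IList<IdentifyResult> masterResults =
        await faceClient.Face.IdentifyAsync(faceIds, largePersonGroupId: "master-group");
    IList<IdentifyResult> bufferResults =
        await faceClient.Face.IdentifyAsync(faceIds, largePersonGroupId: "buffer-group");

    // Merge: for each face, keep the highest-confidence candidate across both collections.
    var bestCandidates = new Dictionary<Guid, IdentifyCandidate>();
    foreach (IdentifyResult result in masterResults.Concat(bufferResults))
    {
        IdentifyCandidate top = result.Candidates.OrderByDescending(c => c.Confidence).FirstOrDefault();
        if (top == null)
        {
            continue;
        }

        IdentifyCandidate current;
        if (!bestCandidates.TryGetValue(result.FaceId, out current) || top.Confidence > current.Confidence)
        {
            bestCandidates[result.FaceId] = top;
        }
    }

    return bestCandidates;
}
```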
### Step 3c: Standalone training
-If a relatively long latency is acceptable, it isn't necessary to trigger the Train operation right after you add new data. Instead, the Train operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the Train frequency.
+If a relatively long latency is acceptable, it isn't necessary to trigger the **Train** operation right after you add new data. Instead, the **Train** operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the **Train** frequency.
-Suppose there's a `TrainLargePersonGroup` function similar to `TrainLargeFaceList`. A typical implementation of the standalone training on a LargePersonGroup by invoking the [`Timer`](/dotnet/api/system.timers.timer) class in `System.Timers` is:
+Suppose there's a `TrainLargePersonGroup` function similar to `TrainLargeFaceList`. A typical implementation of the standalone training on a **LargePersonGroup** by invoking the [`Timer`](/dotnet/api/system.timers.timer) class in `System.Timers` is:
```csharp private static void Main()
For more information about data management and identification-related implementa
## Summary
-In this guide, you learned how to migrate the existing PersonGroup or FaceList code, not data, to the LargePersonGroup or LargeFaceList:
+In this guide, you learned how to migrate the existing **PersonGroup** or **FaceList** code, not data, to the **LargePersonGroup** or **LargeFaceList**:
-- LargePersonGroup and LargeFaceList work similar to PersonGroup or FaceList, except that the Train operation is required by LargeFaceList.
-- Take the proper Train strategy to dynamic data update for large-scale data sets.
+- **LargePersonGroup** and **LargeFaceList** work similarly to **PersonGroup** or **FaceList**, except that the **Train** operation is required by **LargeFaceList**.
+- Apply the proper **Train** strategy for dynamic data updates to large-scale data sets.
## Next steps
-Follow a how-to guide to learn how to add faces to a PersonGroup or write a script to do the Identify operation on a PersonGroup.
+Follow a how-to guide to learn how to add faces to a **PersonGroup** or write a script to do the **Identify** operation on a **PersonGroup**.
- [Add faces](add-faces.md) - [Face client library quickstart](../quickstarts-sdk/identity-client-library.md)
ai-services Studio Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/studio-quickstart.md
Previously updated : 04/27/2023 Last updated : 02/14/2024
ai-services Create Account Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/create-account-terraform.md
Title: 'Quickstart: Create an Azure AI services resource using Terraform' description: 'In this article, you create an Azure AI services resource using Terraform' keywords: Azure AI services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence
-#
Last updated 4/14/2023
ai-services Getting Started Improving Your Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/getting-started-improving-your-classifier.md
Title: Improving your model - Custom Vision Service
+ Title: Improving your model - Custom Vision service
description: In this article you'll learn how the amount, quality and variety of data can improve the quality of your model in the Custom Vision service. #
Previously updated : 07/05/2022 Last updated : 02/14/2024 # How to improve your Custom Vision model
-In this guide, you'll learn how to improve the quality of your Custom Vision Service model. The quality of your [classifier](./getting-started-build-a-classifier.md) or [object detector](./get-started-build-detector.md) depends on the amount, quality, and variety of the labeled data you provide it and how balanced the overall dataset is. A good model has a balanced training dataset that is representative of what will be submitted to it. The process of building such a model is iterative; it's common to take a few rounds of training to reach expected results.
+In this guide, you'll learn how to improve the quality of your Custom Vision model. The quality of your [classifier](./getting-started-build-a-classifier.md) or [object detector](./get-started-build-detector.md) depends on the amount, quality, and variety of labeled data you provide and how balanced the overall dataset is. A good model has a balanced training dataset that is representative of what will be submitted to it. The process of building such a model is iterative; it's common to take a few rounds of training to reach expected results.
The following is a general pattern to help you train a more accurate model:
If you're using an image classifier, you may need to add _negative samples_ to h
Object detectors handle negative samples automatically, because any image areas outside of the drawn bounding boxes are considered negative. > [!NOTE]
-> The Custom Vision Service supports some automatic negative image handling. For example, if you are building a grape vs. banana classifier and submit an image of a shoe for prediction, the classifier should score that image as close to 0% for both grape and banana.
+> The Custom Vision service supports some automatic negative image handling. For example, if you are building a grape vs. banana classifier and submit an image of a shoe for prediction, the classifier should score that image as close to 0% for both grape and banana.
> > On the other hand, in cases where the negative images are just a variation of the images used in training, it is likely that the model will classify the negative images as a labeled class due to the great similarities. For example, if you have an orange vs. grapefruit classifier, and you feed in an image of a clementine, it may score the clementine as an orange because many features of the clementine resemble those of oranges. If your negative images are of this nature, we recommend you create one or more additional tags (such as **Other**) and label the negative images with this tag during training to allow the model to better differentiate between these classes.
When you use or test the model by submitting images to the prediction endpoint,
![screenshot of the predictions tab, with images in view](./media/getting-started-improving-your-classifier/predictions.png)
-2. Hover over an image to see the tags that were predicted by the model. Images are sorted so that the ones that can bring the most improvements to the model are listed the top. To use a different sorting method, make a selection in the __Sort__ section.
+1. Hover over an image to see the tags that were predicted by the model. Images are sorted so that the ones that can bring the most improvements to the model are listed at the top. To use a different sorting method, make a selection in the __Sort__ section.
To add an image to your existing training data, select the image, set the correct tag(s), and select __Save and close__. The image will be removed from __Predictions__ and added to the set of training images. You can view it by selecting the __Training Images__ tab. ![Screenshot of the tagging page.](./media/getting-started-improving-your-classifier/tag.png)
-3. Then use the __Train__ button to retrain the model.
+1. Then use the __Train__ button to retrain the model.
## Visually inspect predictions
ai-services Use Prediction Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/use-prediction-api.md
Previously updated : 12/27/2022 Last updated : 02/14/2024 ms.devlang: csharp
# Call the prediction API
-After you've trained your model, you can test images programmatically by submitting them to the prediction API endpoint. In this guide, you'll learn how to call the prediction API to score an image. You'll learn the different ways you can configure the behavior of this API to meet your needs.
+After you've trained your model, you can test it programmatically by submitting images to the prediction API endpoint. In this guide, you'll learn how to call the prediction API to score an image. You'll learn the different ways you can configure the behavior of this API to meet your needs.
> [!NOTE] > This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Prediction_3.0/operations/5c82db60bf6a2b11a8247c15).
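If you want to see the underlying request before adopting the client library, the following C# sketch calls the prediction endpoint directly over REST instead of through the SDK. It's a hedged alternative, not the article's sample; the endpoint, project ID, published iteration name, prediction key, and image path are placeholders you take from the Custom Vision portal.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class PredictionSketch
{
    public static async Task<string> ClassifyImageAsync(string imagePath)
    {
        // Placeholder values; copy your own from the Custom Vision portal.
        string endpoint = "https://<your-resource>.cognitiveservices.azure.com";
        string projectId = "<your-project-id>";
        string publishedName = "<your-published-iteration-name>";
        string predictionKey = "<your-prediction-key>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Prediction-Key", predictionKey);

        // Classify an image file by posting its bytes to the prediction endpoint.
        string url = $"{endpoint}/customvision/v3.0/Prediction/{projectId}/classify/iterations/{publishedName}/image";
        using var content = new ByteArrayContent(await File.ReadAllBytesAsync(imagePath));
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        HttpResponseMessage response = await client.PostAsync(url, content);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync(); // JSON with per-tag probabilities
    }
}
```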
ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md
> [!NOTE] >
-> For extracting text from external images like labels, street signs, and posters, use the [Azure AI Vision v4.0 preview Read](../../ai-services/Computer-vision/concept-ocr.md) feature optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios.
+> For extracting text from external images like labels, street signs, and posters, use the [Azure AI Image Analysis v4.0 Read](../../ai-services/Computer-vision/concept-ocr.md) feature optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios.
> Document Intelligence Read Optical Character Recognition (OCR) model runs at a higher resolution than Azure AI Vision Read and extracts print and handwritten text from PDF documents and scanned images. It also includes support for extracting text from Microsoft Word, Excel, PowerPoint, and HTML documents. It detects paragraphs, text lines, words, locations, and languages. The Read model is the underlying OCR engine for other Document Intelligence prebuilt models like Layout, General Document, Invoice, Receipt, Identity (ID) document, Health insurance card, W2 in addition to custom models.
ai-services How To Create Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-create-immersive-reader.md
Title: "Create an Immersive Reader Resource"
+ Title: Create an Immersive Reader resource
-description: This article shows you how to create a new Immersive Reader resource with a custom subdomain and then configure Microsoft Entra ID in your Azure tenant.
+description: Learn how to create a new Immersive Reader resource with a custom subdomain and then configure Microsoft Entra ID in your Azure tenant.
# Previously updated : 03/31/2023 Last updated : 02/12/2024 # Create an Immersive Reader resource and configure Microsoft Entra authentication
-In this article, we provide a script that creates an Immersive Reader resource and configure Microsoft Entra authentication. Each time an Immersive Reader resource is created, whether with this script or in the portal, it must also be configured with Microsoft Entra permissions.
+This article explains how to create an Immersive Reader resource by using the provided script. This script also configures Microsoft Entra authentication. Each time an Immersive Reader resource is created, whether with this script or in the portal, it must be configured with Microsoft Entra permissions.
-The script is designed to create and configure all the necessary Immersive Reader and Microsoft Entra resources for you all in one step. However, you can also just configure Microsoft Entra authentication for an existing Immersive Reader resource, if for instance, you happen to have already created one in the Azure portal.
+The script creates and configures all the necessary Immersive Reader and Microsoft Entra resources for you. However, you can also configure Microsoft Entra authentication for an existing Immersive Reader resource, if you already created one in the Azure portal. The script first looks for existing Immersive Reader and Microsoft Entra resources in your subscription, and creates them only if they don't already exist.
-For some customers, it may be necessary to create multiple Immersive Reader resources, for development vs. production, or perhaps for multiple different regions your service is deployed in. For those cases, you can come back and use the script multiple times to create different Immersive Reader resources and get them configured with the Microsoft Entra permissions.
-
-The script is designed to be flexible. It first looks for existing Immersive Reader and Microsoft Entra resources in your subscription, and creates them only as necessary if they don't already exist. If it's your first time creating an Immersive Reader resource, the script does everything you need. If you want to use it just to configure Microsoft Entra ID for an existing Immersive Reader resource that was created in the portal, it does that too.
-It can also be used to create and configure multiple Immersive Reader resources.
+For some customers, it might be necessary to create multiple Immersive Reader resources, for development versus production, or perhaps for different regions where your service is deployed. For those cases, you can come back and use the script multiple times to create different Immersive Reader resources and get them configured with Microsoft Entra permissions.
## Permissions
If you aren't an owner, the following scope-specific permissions are required:
* **Contributor**. You need to have at least a Contributor role associated with the Azure subscription:
- :::image type="content" source="media/contributor-role.png" alt-text="Screenshot of contributor built-in role description.":::
+ :::image type="content" source="media/contributor-role.png" alt-text="Screenshot of contributor built-in role description.":::
* **Application Developer**. You need to have at least an Application Developer role associated in Microsoft Entra ID:
- :::image type="content" source="media/application-developer-role.png" alt-text="{alt-text}":::
+ :::image type="content" source="media/application-developer-role.png" alt-text="Screenshot of the developer built-in role description.":::
-For more information, _see_ [Microsoft Entra built-in roles](../../active-directory/roles/permissions-reference.md#application-developer)
+For more information, see [Microsoft Entra built-in roles](../../active-directory/roles/permissions-reference.md#application-developer).
-## Set up PowerShell environment
+## Set up PowerShell resources
-1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to PowerShell in the upper-left hand dropdown or by typing `pwsh`.
+1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to **PowerShell** in the upper-left hand dropdown or by typing `pwsh`.
1. Copy and paste the following code snippet into the shell.
For more information, _see_ [Microsoft Entra built-in roles](../../active-direct
Write-Host "Immersive Reader resource created successfully" }
- # Create an Azure Active Directory app if it doesn't already exist
+    # Create a Microsoft Entra app if it doesn't already exist
$clientId = az ad app show --id $AADAppIdentifierUri --query "appId" -o tsv if (-not $clientId) {
- Write-Host "Creating new Azure Active Directory app"
+ Write-Host "Creating new Microsoft Entra app"
$clientId = az ad app create --display-name $AADAppDisplayName --identifier-uris $AADAppIdentifierUri --query "appId" -o tsv if (-not $clientId) {
- throw "Error: Failed to create Azure Active Directory application"
+ throw "Error: Failed to create Microsoft Entra application"
}
- Write-Host "Azure Active Directory application created successfully."
+ Write-Host "Microsoft Entra application created successfully."
$clientSecret = az ad app credential reset --id $clientId --end-date "$AADAppClientSecretExpiration" --query "password" | % { $_.Trim('"') } if (-not $clientSecret) {
- throw "Error: Failed to create Azure Active Directory application client secret"
+ throw "Error: Failed to create Microsoft Entra application client secret"
}
- Write-Host "Azure Active Directory application client secret created successfully."
+ Write-Host "Microsoft Entra application client secret created successfully."
- Write-Host "NOTE: To manage your Active Directory application client secrets after this Immersive Reader Resource has been created please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section" -ForegroundColor Yellow
+ Write-Host "NOTE: To manage your Microsoft Entra application client secrets after this Immersive Reader Resource has been created please visit https://portal.azure.com and go to Home -> Microsoft Entra ID -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section" -ForegroundColor Yellow
} # Create a service principal if it doesn't already exist
For more information, _see_ [Microsoft Entra built-in roles](../../active-direct
} Write-Host "Service principal access granted successfully"
- # Grab the tenant ID, which is needed when obtaining an Azure AD token
+ # Grab the tenant ID, which is needed when obtaining a Microsoft Entra token
$tenantId = az account show --query "tenantId" -o tsv
- # Collect the information needed to obtain an Azure AD token into one object
+ # Collect the information needed to obtain a Microsoft Entra token into one object
$result = @{} $result.TenantId = $tenantId $result.ClientId = $clientId
For more information, _see_ [Microsoft Entra built-in roles](../../active-direct
Write-Host "*****" if($clientSecret -ne $null) {
- Write-Host "This function has created a client secret (password) for you. This secret is used when calling Azure Active Directory to fetch access tokens."
- Write-Host "This is the only time you will ever see the client secret for your Azure Active Directory application, so save it now." -ForegroundColor Yellow
+ Write-Host "This function has created a client secret (password) for you. This secret is used when calling Microsoft Entra to fetch access tokens."
+ Write-Host "This is the only time you will ever see the client secret for your Microsoft Entra application, so save it now." -ForegroundColor Yellow
} else{
- Write-Host "You will need to retrieve the ClientSecret from your original run of this function that created it. If you don't have it, you will need to go create a new client secret for your Azure Active Directory application. Please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section." -ForegroundColor Yellow
+ Write-Host "You will need to retrieve the ClientSecret from your original run of this function that created it. If you don't have it, you will need to go create a new client secret for your Microsoft Entra application. Please visit https://portal.azure.com and go to Home -> Microsoft Entra ID -> App Registrations -> (your app) '$AADAppDisplayName' -> Certificates and Secrets blade -> Client Secrets section." -ForegroundColor Yellow
} Write-Host "*****`n" Write-Output (ConvertTo-Json $result)
For more information, _see_ [Microsoft Entra built-in roles](../../active-direct
1. Run the function `Create-ImmersiveReaderResource`, supplying the '<PARAMETER_VALUES>' placeholders with your own values as appropriate. ```azurepowershell-interactive
- Create-ImmersiveReaderResource -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceName '<RESOURCE_NAME>' -ResourceSubdomain '<RESOURCE_SUBDOMAIN>' -ResourceSKU '<RESOURCE_SKU>' -ResourceLocation '<RESOURCE_LOCATION>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceGroupLocation '<RESOURCE_GROUP_LOCATION>' -AADAppDisplayName '<AAD_APP_DISPLAY_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>' -AADAppClientSecretExpiration '<AAD_APP_CLIENT_SECRET_EXPIRATION>'
+ Create-ImmersiveReaderResource -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceName '<RESOURCE_NAME>' -ResourceSubdomain '<RESOURCE_SUBDOMAIN>' -ResourceSKU '<RESOURCE_SKU>' -ResourceLocation '<RESOURCE_LOCATION>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceGroupLocation '<RESOURCE_GROUP_LOCATION>' -AADAppDisplayName '<MICROSOFT_ENTRA_DISPLAY_NAME>' -AADAppIdentifierUri '<MICROSOFT_ENTRA_IDENTIFIER_URI>' -AADAppClientSecretExpiration '<MICROSOFT_ENTRA_CLIENT_SECRET_EXPIRATION>'
```
- The full command looks something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. __Do not copy or use this command as-is.__ Copy and use the command with your own values. This example has dummy values for the '<PARAMETER_VALUES>'. Yours may be different, as you come up with your own names for these values.
+ The full command looks something like the following. Here we put each parameter on its own line for clarity, so you can see the whole command. __Do not copy or use this command as-is.__ Copy and use the command with your own values. This example has dummy values for the `<PARAMETER_VALUES>`. Yours might be different, as you come up with your own names for these values.
``` Create-ImmersiveReaderResource
For more information, _see_ [Microsoft Entra built-in roles](../../active-direct
| Parameter | Comments | | | | | SubscriptionName |Name of the Azure subscription to use for your Immersive Reader resource. You must have a subscription in order to create a resource. |
- | ResourceName | Must be alphanumeric, and may contain '-', as long as the '-' isn't the first or last character. Length may not exceed 63 characters.|
- | ResourceSubdomain |A custom subdomain is needed for your Immersive Reader resource. The subdomain is used by the SDK when calling the Immersive Reader service to launch the Reader. The subdomain must be globally unique. The subdomain must be alphanumeric, and may contain '-', as long as the '-' isn't the first or last character. Length may not exceed 63 characters. This parameter is optional if the resource already exists. |
- | ResourceSKU |Options: `S0` (Standard tier) or `S1` (Education/Nonprofit organizations). Visit our [Azure AI services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/immersive-reader/) to learn more about each available SKU. This parameter is optional if the resource already exists. |
+ | ResourceName | Must be alphanumeric, and can contain `-`, as long as the `-` isn't the first or last character. Length can't exceed 63 characters.|
+ | ResourceSubdomain |A custom subdomain is needed for your Immersive Reader resource. The subdomain is used by the SDK when calling the Immersive Reader service to launch the Reader. The subdomain must be globally unique. The subdomain must be alphanumeric, and can contain `-`, as long as the `-` isn't the first or last character. Length can't exceed 63 characters. This parameter is optional if the resource already exists. |
+ | ResourceSKU |Options: `S0` (Standard tier) or `S1` (Education/Nonprofit organizations). To learn more about each available SKU, visit our [Azure AI services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/immersive-reader/). This parameter is optional if the resource already exists. |
| ResourceLocation |Options: `australiaeast`, `brazilsouth`, `canadacentral`, `centralindia`, `centralus`, `eastasia`, `eastus`, `eastus2`, `francecentral`, `germanywestcentral`, `japaneast`, `japanwest`, `jioindiawest`, `koreacentral`, `northcentralus`, `northeurope`, `norwayeast`, `southafricanorth`, `southcentralus`, `southeastasia`, `swedencentral`, `switzerlandnorth`, `switzerlandwest`, `uaenorth`, `uksouth`, `westcentralus`, `westeurope`, `westus`, `westus2`, `westus3`. This parameter is optional if the resource already exists. | | ResourceGroupName |Resources are created in resource groups within subscriptions. Supply the name of an existing resource group. If the resource group doesn't already exist, a new one with this name is created. | | ResourceGroupLocation |If your resource group doesn't exist, you need to supply a location in which to create the group. To find a list of locations, run `az account list-locations`. Use the *name* property (without spaces) of the returned result. This parameter is optional if your resource group already exists. | | AADAppDisplayName |The Microsoft Entra application display name. If an existing Microsoft Entra application isn't found, a new one with this name is created. This parameter is optional if the Microsoft Entra application already exists. | | AADAppIdentifierUri |The URI for the Microsoft Entra application. If an existing Microsoft Entra application isn't found, a new one with this URI is created. For example, `api://MyOrganizationImmersiveReaderAADApp`. Here we're using the default Microsoft Entra URI scheme prefix of `api://` for compatibility with the [Microsoft Entra policy of using verified domains](../../active-directory/develop/reference-breaking-changes.md#appid-uri-in-single-tenant-applications-will-require-use-of-default-scheme-or-verified-domains). |
- | AADAppClientSecretExpiration |The date or datetime after which your Microsoft Entra Application Client Secret (password) will expire (for example, '2020-12-31T11:59:59+00:00' or '2020-12-31'). This function creates a client secret for you. To manage Microsoft Entra application client secrets after you've created this resource, visit https://portal.azure.com and go to Home -> Microsoft Entra ID -> App Registrations -> (your app) `[AADAppDisplayName]` -> Certificates and Secrets section -> Client Secrets section (as shown in the "Manage your Microsoft Entra application secrets" screenshot).|
+ | AADAppClientSecretExpiration |The date or datetime after which your Microsoft Entra Application Client Secret (password) expires (for example, '2020-12-31T11:59:59+00:00' or '2020-12-31'). This function creates a client secret for you. |
- Manage your Microsoft Entra application secrets
+ To manage your Microsoft Entra application client secrets after you create this resource, visit the [Azure portal](https://portal.azure.com) and go to **Home** -> **Microsoft Entra ID** -> **App Registrations** -> (your app) `[AADAppDisplayName]` -> **Certificates and Secrets** section -> **Client Secrets** section.
- ![Azure portal Certificates and Secrets blade](./media/client-secrets-blade.png)
+ :::image type="content" source="media/client-secrets-blade.png" alt-text="Screenshot of the Azure portal Certificates and Secrets pane." lightbox="media/client-secrets-blade.png":::
1. Copy the JSON output into a text file for later use. The output should look like the following.
For more information, _see_ [Microsoft Entra built-in roles](../../active-direct
} ```
-## Next steps
+## Next step
-* View the [Node.js quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
-* View the [Android tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
-* View the [iOS tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
-* View the [Python tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
-* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
+> [!div class="nextstepaction"]
+> [How to launch the Immersive Reader](how-to-launch-immersive-reader.md)
ai-services Use Native Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/native-document-support/use-native-documents.md
Azure AI Language is a cloud-based service that applies Natural Language Processing (NLP) features to text-based data. The native document support capability enables you to send API requests asynchronously, using an HTTP POST request body to send your data and HTTP GET request query string to retrieve the processed data.
-A native document refers to the file format used to create the original document such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing prior to using Azure AI Language resource capabilities. Currently, native document support is available for the following capabilities:
+A native document refers to the file format used to create the original document, such as Microsoft Word (docx) or a portable document file (pdf). Native document support eliminates the need for text preprocessing before using Azure AI Language resource capabilities. Currently, native document support is available for the following capabilities:
* [Personally Identifiable Information (PII)](../personally-identifiable-information/overview.md). The PII detection feature can identify, categorize, and redact sensitive information in unstructured text. The `PiiEntityRecognition` API supports native document processing.
A native document refers to the file format used to create the original document
## Supported document formats
- Applications use native file formats to create, save, or open native documents. Currently **PII** and **Document summarization** capabilities supports the following native document formats:
+ Applications use native file formats to create, save, or open native documents. Currently, the **PII** and **Document summarization** capabilities support the following native document formats:
|File type|File extension|Description| ||--|--|
A native document refers to the file format used to create the original document
> [!NOTE] > The cURL package is pre-installed on most Windows 10 and Windows 11 versions, and on most macOS and Linux distributions. You can check the package version with the following commands:
- > Windows: `curl.exe -V`.
+ > Windows: `curl.exe -V`
> macOS `curl -V` > Linux: `curl --version`
A native document refers to the file format used to create the original document
* [Windows](https://curl.haxx.se/windows/). * [Mac or Linux](https://learn2torials.com/thread/how-to-install-curl-on-mac-or-linux-(ubuntu)-or-windows).
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
Your Language resource needs granted access to your storage account before it ca
* [**Shared access signature (SAS) tokens**](shared-access-signatures.md). User delegation SAS tokens are secured with Microsoft Entra credentials. SAS tokens provide secure, delegated access to resources in your Azure storage account.
-* [**Managed identity role-based access control (RBAC)**](managed-identities.md). Managed identities for Azure resources are service principals that create a Microsoft Entra identity and specific permissions for Azure managed resources
+* [**Managed identity role-based access control (RBAC)**](managed-identities.md). Managed identities for Azure resources are service principals that create a Microsoft Entra identity and specific permissions for Azure managed resources.
For this project, we authenticate access to the `source location` and `target location` URLs with Shared Access Signature (SAS) tokens appended as query strings. Each token is assigned to a specific blob (file).
For this quickstart, you need a **source document** uploaded to your **source co
"language": "en-US", "id": "Output-excel-file", "source": {
- "location": "{your-source-container-with-SAS-URL}"
+ "location": "{your-source-blob-with-SAS-URL}"
}, "target": { "location": "{your-target-container-with-SAS-URL}"
For this quickstart, you need a **source document** uploaded to your **source co
{ "kind": "PiiEntityRecognition", "parameters":{
- "excludePiiCategoriesredac" : ["PersonType", "Category2", "Category3"],
- "redactionPolicy": "UseEntityTypeName"
+ "excludePiiCategories" : ["PersonType", "Category2", "Category3"],
+ "redactionPolicy": "UseRedactionCharacterWithRefId"
} } ]
For this project, you need a **source document** uploaded to your **source conta
"documents":[ { "source":{
- "location":"{your-source-container-SAS-URL}"
+ "location":"{your-source-blob-SAS-URL}"
}, "targets": {
ai-services Assistants Reference Threads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference-threads.md
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads?api-version=2024
## Retrieve thread ```http
-GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads{thread_id}?api-version=2024-02-15-preview
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview
``` Retrieves a thread.
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-
| `id` | string | The identifier, which can be referenced in API endpoints.| | `object` | string | The object type, which is always thread. | | `created_at` | integer | The Unix timestamp (in seconds) for when the thread was created. |
-| `metadata` | map | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |
+| `metadata` | map | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |
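For reference, here's a minimal C# sketch of calling the retrieve-thread endpoint shown above. The resource name, key, and thread ID are placeholders, and the `api-key` header is assumed as the authentication mechanism.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class RetrieveThreadSketch
{
    public static async Task<string> GetThreadAsync(string threadId)
    {
        // Placeholder values; supply your own resource name and key.
        string endpoint = "https://YOUR_RESOURCE_NAME.openai.azure.com";
        string apiKey = "<your-api-key>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("api-key", apiKey);

        // Retrieve the thread by ID using the endpoint and API version from this reference.
        string url = $"{endpoint}/openai/threads/{threadId}?api-version=2024-02-15-preview";
        HttpResponseMessage response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync(); // JSON with id, object, created_at, metadata
    }
}
```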
ai-services Advanced Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/advanced-prompt-engineering.md
Title: Prompt engineering techniques with Azure OpenAI
-description: Learn about the options for how to use prompt engineering with GPT-3, GPT-35-Turbo, and GPT-4 models
-
+description: Learn about the options for how to use prompt engineering with GPT-3, GPT-35-Turbo, and GPT-4 models.
+ Previously updated : 04/20/2023 Last updated : 02/16/2024 keywords: ChatGPT, GPT-4, prompt engineering, meta prompts, chain of thought zone_pivot_groups: openai-prompt
While the principles of prompt engineering can be generalized across many differ
Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the GPT-35-Turbo and GPT-4 models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries.
-The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules. Technically the GPT-35-Turbo models can be used with either APIs, but we strongly recommend using the Chat Completion API for these models. To learn more, please consult our [in-depth guide on using these APIs](../how-to/chatgpt.md).
+The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules.
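To make the format difference concrete, the following C# sketch (not part of the original article) posts a chat-style transcript to a Chat Completion deployment over REST; the endpoint, deployment name, API version, and key are placeholder assumptions.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class ChatTranscriptSketch
{
    public static async Task<string> SendAsync()
    {
        // Placeholder values; supply your own resource, deployment, and key.
        string endpoint = "https://<your-resource>.openai.azure.com";
        string deployment = "<your-gpt-35-turbo-or-gpt-4-deployment>";
        string apiKey = "<your-api-key>";

        // The chat transcript is an array of messages, each with a role and content.
        string body = @"{
          ""messages"": [
            { ""role"": ""system"", ""content"": ""You are a helpful assistant."" },
            { ""role"": ""user"", ""content"": ""Summarize prompt engineering in one sentence."" }
          ]
        }";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("api-key", apiKey);

        string url = $"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version=2023-05-15";
        HttpResponseMessage response = await client.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
        return await response.Content.ReadAsStringAsync();
    }
}
```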
The techniques in this guide will teach you strategies for increasing the accuracy and grounding of responses you generate with a Large Language Model (LLM). It is, however, important to remember that even when using prompt engineering effectively you still need to validate the responses the models generate. Just because a carefully crafted prompt worked well for a particular scenario doesn't necessarily mean it will generalize more broadly to certain use cases. Understanding the [limitations of LLMs](/legal/cognitive-services/openai/transparency-note?context=/azure/ai-services/openai/context/context#limitations), is just as important as understanding how to leverage their strengths.
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/gpt-with-vision.md
Enhancements let you incorporate other Azure AI services (such as Azure AI Visio
**Optical Character Recognition (OCR)**: Azure AI Vision complements GPT-4 Turbo with Vision by providing high-quality OCR results as supplementary information to the chat model. It allows the model to produce higher quality responses for images with dense text, transformed images, and numbers-heavy financial documents, and increases the variety of languages the model can recognize in text. > [!IMPORTANT]
-> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
+> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
:::image type="content" source="../media/concepts/gpt-v/receipts.png" alt-text="Photo of several receipts.":::
Enhancements let you incorporate other Azure AI services (such as Azure AI Visio
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RW1eHRf] > [!NOTE]
-> In order to use the video prompt enhancement, you need both an Azure AI Vision resource and an Azure Video Indexer resource, in the paid (S0) tier, in addition to your Azure OpenAI resource.
+> In order to use the video prompt enhancement, you need both an Azure AI Vision resource and an Azure Video Indexer resource, in the paid (S1) tier, in addition to your Azure OpenAI resource.
## Special pricing information
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
**<sup>1</sup>** This model accepts requests of more than 4,096 tokens. We don't recommend exceeding the 4,096 input token limit because the newer versions of the model are capped at 4,096 tokens. If you encounter issues when exceeding 4,096 input tokens with this model, the configuration isn't officially supported.
-#### Azure Government regions
-
-The following GPT-3 models are available with [Azure Government](/azure/azure-government/documentation-government-welcome):
-
-|Model ID | Model Availability |
-|--|--|
-|`gpt-35-turbo` (1106) |US Gov Virginia<br>US Gov Arizona |
- ### Embeddings models These models can only be used with Embedding API requests.
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md
The **Optical character recognition (OCR)** integration allows the model to prod
The **object grounding** integration brings a new layer to data analysis and user interaction, as the feature can visually distinguish and highlight important elements in the images it processes. > [!IMPORTANT]
-> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
+> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
> [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
GPT-4 Turbo with Vision provides exclusive access to Azure AI Services tailored
Follow these steps to set up a video retrieval system and integrate it with your AI chat model. > [!IMPORTANT]
-> To use Vision enhancement, you need an Azure AI Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
+> To use Vision enhancement, you need an Azure AI Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
> [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
ai-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md
Previously updated : 01/06/2023 Last updated : 02/16/2024
ai-services Work With Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/work-with-code.md
Title: 'How to use the Codex models to work with code'
-description: Learn how to use the Codex models on Azure OpenAI to handle a variety of coding tasks
+description: Learn how to use the Codex models on Azure OpenAI to handle a variety of coding tasks.
# Previously updated : 06/24/2022-- Last updated : 02/15/2024++ # Codex models and Azure OpenAI Service
+> [!NOTE]
+> This article was authored and tested against the [legacy code generation models](/azure/ai-services/openai/concepts/legacy-models). These models use the completions API and its prompt/completion style of interaction. If you want to test the techniques described in this article verbatim, we recommend using the `gpt-35-turbo-instruct` model, which allows access to the completions API. However, for code generation, the chat completions API and the latest GPT-4 models generally yield the best results, but the prompts need to be converted to the conversational style specific to interacting with those models.
+ The Codex model series is a descendant of our GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell. You can use Codex for a variety of tasks including:
Codex understands dozens of different programming languages. Many share similar
### Prompt Codex with what you want it to do
-If you want Codex to create a webpage, placing the first line of code in an HTML document (`<!DOCTYPE html>`) after your comment tells Codex what it should do next. The same method works for creating a function from a comment (following the comment with a new line starting with func or def).
+If you want Codex to create a webpage, placing the initial line of code in an HTML document (`<!DOCTYPE html>`) after your comment tells Codex what it should do next. The same method works for creating a function from a comment (following the comment with a new line starting with func or def).
```html <!-- Create a web page with the title 'Kat Katman attorney at paw' -->
animals = [ {"name": "Chomper", "species": "Hamster"}, {"name":
### Lower temperatures give more precise results
-Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2) tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models may give you really random or erratic responses.
+Setting the API temperature to 0, or close to zero (such as 0.1 or 0.2), tends to give better results in most cases. Unlike GPT-3 models, where a higher temperature can provide useful creative and random results, higher temperatures with Codex models might produce random or erratic responses.
In cases where you need Codex to provide different potential results, start at zero and then increment upwards by 0.1 until you find suitable variation.
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md
Previously updated : 10/16/2023 Last updated : 02/15/2024 recommendations: false
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** - `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)-- `2023-03-15-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)-- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-12-15-preview/inference.json)
+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** - `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)-- `2023-03-15-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) - `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** -- `2023-03-15-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` (required for Vision scenarios) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview) - `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com
| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. | **Supported versions**-- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) - `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
POST https://{your-resource-name}.openai.azure.com/openai/images/generations:sub
**Supported versions** -- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) - `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
GET https://{your-resource-name}.openai.azure.com/openai/operations/images/{oper
**Supported versions** -- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
- `2024-02-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
DELETE https://{your-resource-name}.openai.azure.com/openai/operations/images/{o
**Supported versions**
-- `2023-06-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
#### Example request
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**
-- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**
-- `2023-09-01-preview` (retiring February, 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
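The request and response shapes are easiest to see with a short call. The following is a minimal sketch in Python, assuming a DALL-E image deployment and one of the later preview API versions listed above; the resource name, deployment name, and key are placeholders read from environment variables.

```python
import os
import requests

# Placeholders: set these to your own resource, deployment, and key.
resource = os.environ["AZURE_OPENAI_RESOURCE"]      # for example, "my-openai-resource"
deployment = os.environ["AZURE_OPENAI_DEPLOYMENT"]  # for example, a DALL-E 3 deployment name
api_key = os.environ["AZURE_OPENAI_API_KEY"]

url = (
    f"https://{resource}.openai.azure.com/openai/deployments/{deployment}"
    "/images/generations?api-version=2023-12-01-preview"
)

response = requests.post(
    url,
    headers={"api-key": api_key, "Content-Type": "application/json"},
    json={"prompt": "A watercolor painting of a lighthouse at dawn", "n": 1, "size": "1024x1024"},
    timeout=60,
)
response.raise_for_status()

# The generated image is returned as a URL (or base64 content, depending on the request).
print(response.json()["data"][0]["url"])
```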
ai-services Captioning Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-concepts.md
Previously updated : 1/18/2024 Last updated : 2/16/2024 zone_pivot_groups: programming-languages-speech-sdk-cli
ai-services Captioning Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/captioning-quickstart.md
Previously updated : 04/23/2022 Last updated : 2/16/2024 ms.devlang: cpp
-# ms.devlang: cpp, csharp
zone_pivot_groups: programming-languages-speech-sdk-cli
ai-services Get Speech Recognition Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-speech-recognition-results.md
Previously updated : 06/13/2022 Last updated : 2/16/2024 ms.devlang: cpp
-# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
zone_pivot_groups: programming-languages-speech-sdk-cli keywords: speech to text, speech to text software
ai-services Get Started Intent Recognition Clu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-intent-recognition-clu.md
Previously updated : 02/22/2023 Last updated : 2/16/2024 zone_pivot_groups: programming-languages-set-thirteen keywords: intent recognition
ai-services Get Started Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-intent-recognition.md
Previously updated : 02/22/2023 Last updated : 2/16/2024 ms.devlang: cpp
-# ms.devlang: cpp, csharp, java, javascript, python
zone_pivot_groups: programming-languages-speech-services keywords: intent recognition
# Quickstart: Recognize intents with the Speech service and LUIS > [!IMPORTANT]
-> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
+> LUIS will be retired on October 1st 2025. As of April 1st 2023 you can't create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
> > Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU.
ai-services Get Started Speaker Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speaker-recognition.md
Previously updated : 01/08/2022 Last updated : 2/16/2024 ms.devlang: cpp
-# ms.devlang: cpp, csharp, javascript
zone_pivot_groups: programming-languages-speech-services keywords: speaker recognition, voice biometry
ai-services Get Started Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speech-translation.md
Previously updated : 09/16/2022 Last updated : 2/16/2024 zone_pivot_groups: programming-languages-speech-services keywords: speech translation
ai-services Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/intent-recognition.md
Both a Speech resource and Language resource are required to use CLU with the Sp
For information about how to use conversational language understanding without the Speech SDK and without speech recognition, see the [Language service documentation](../language-service/conversational-language-understanding/overview.md). > [!IMPORTANT]
-> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
+> LUIS will be retired on October 1st 2025. As of April 1st 2023 you can't create new LUIS resources. We recommend [migrating your LUIS applications](../language-service/conversational-language-understanding/how-to/migrate-from-luis.md) to [conversational language understanding](../language-service/conversational-language-understanding/overview.md) to benefit from continued product support and multilingual capabilities.
> > Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU.
ai-services Speech Container Cstt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-cstt.md
The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-
| Version | Path |
|--|--|
| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest` |
-| 4.5.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:4.5.0-amd64` |
+| 4.6.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:4.6.0-amd64` |
All tags, except for `latest`, are in the following format and are case sensitive:
ai-services Speech Container Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-overview.md
The following table lists the Speech containers available in the Microsoft Conta
| Container | Features | Supported versions and locales |
|--|--|--|
-| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.5.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).|
-| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [custom speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.5.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). |
+| [Speech to text](speech-container-stt.md) | Transcribes continuous real-time speech or batch audio recordings with intermediate results. | Latest: 4.6.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).|
+| [Custom speech to text](speech-container-cstt.md) | Using a custom model from the [custom speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | Latest: 4.6.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list). |
| [Speech language identification](speech-container-lid.md)<sup>1, 2</sup> | Detects the language spoken in audio files. | Latest: 1.12.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/language-detection/tags/list). |
| [Neural text to speech](speech-container-ntts.md) | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | Latest: 3.0.0<br/><br/>For all supported versions and locales, see the [Microsoft Container Registry (MCR)](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/tags) and [JSON tags](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/neural-text-to-speech/tags/list). |
ai-services Speech Container Stt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-stt.md
The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-
| Version | Path |
|--|--|
| Latest | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest`<br/><br/>The `latest` tag pulls the latest image for the `en-US` locale. |
-| 4.5.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:4.5.0-amd64-mr-in` |
+| 4.6.0 | `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:4.6.0-amd64-mr-in` |
All tags, except for `latest`, are in the following format and are case sensitive:
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/language-support.md
Previously updated : 11/06/2023 Last updated : 02/16/2024 # Translator language support
-**Translation - Cloud:** Cloud translation is available in all languages for the Translate operation of Text Translation and for Document Translation.
+**Translation - Cloud:** Cloud translation is available in all languages for the `Translate` operation of Text Translation and for Document Translation.
**Translation – Containers:** Language support for Containers.
**Auto Language Detection:** Automatically detect the language of the source text while using Text Translation or Document Translation.
-**Dictionary:** Use the [Dictionary Lookup](reference/v3-0-dictionary-lookup.md) or [Dictionary Examples](reference/v3-0-dictionary-examples.md) operations from the Text Translation feature to display alternative translations from or to English and examples of words in context.
+**Dictionary:** To display alternative translations from or to English and examples of words in context, use the [Dictionary Lookup](reference/v3-0-dictionary-lookup.md) or [Dictionary Examples](reference/v3-0-dictionary-examples.md) operations from the Text Translation feature.
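As a quick illustration of the `Translate` operation (and of auto language detection, since the `from` parameter is omitted), here's a minimal sketch in Python. The key, region, and target languages are placeholders chosen for the example.

```python
import os
import uuid
import requests

# Placeholders: supply your own Translator key and resource region.
key = os.environ["TRANSLATOR_KEY"]
region = os.environ["TRANSLATOR_REGION"]  # for example, "westus2"

response = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": ["fr", "zh-Hans"]},  # source language is auto-detected
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,
        "Content-Type": "application/json",
        "X-ClientTraceId": str(uuid.uuid4()),
    },
    json=[{"text": "Language support is updated regularly."}],
    timeout=30,
)
response.raise_for_status()

# Print each requested translation and its language code.
for translation in response.json()[0]["translations"]:
    print(translation["to"], translation["text"])
```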
## Translation
|Language|Language code|Cloud – Text Translation and Document Translation|Containers – Text Translation|Custom Translator|Auto Language Detection|Dictionary|
|:-|:-|:-|:-|:-|:-|:-|
-|Afrikaans|af|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Albanian|sq|Γ£ö|Γ£ö| |Γ£ö| |
-|Amharic|am|Γ£ö|Γ£ö| |Γ£ö| |
-|Arabic|ar|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Armenian|hy|Γ£ö|Γ£ö| |Γ£ö| |
-|Assamese|as|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Azerbaijani (Latin)|az|Γ£ö|Γ£ö| |Γ£ö| |
-|Bangla|bn|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Bashkir|ba|Γ£ö|Γ£ö| |Γ£ö| |
-|Basque|eu|Γ£ö|Γ£ö| |Γ£ö| |
-|Bhojpuri|bho|Γ£ö|Γ£ö | | | |
+|Afrikaans|`af`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Albanian|`sq`|Γ£ö|Γ£ö| |Γ£ö| |
+|Amharic|`am`|Γ£ö|Γ£ö| |Γ£ö| |
+|Arabic|`ar`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Armenian|`hy`|Γ£ö|Γ£ö| |Γ£ö| |
+|Assamese|`as`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Azerbaijani (Latin)|`az`|Γ£ö|Γ£ö| |Γ£ö| |
+|Bangla|`bn`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Bashkir|`ba`|Γ£ö|Γ£ö| |Γ£ö| |
+|Basque|`eu`|Γ£ö|Γ£ö| |Γ£ö| |
+|Bhojpuri|`bho`|Γ£ö|Γ£ö | | | |
|Bodo|`brx`|Γ£ö|Γ£ö | | | |
-|Bosnian (Latin)|bs|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Bulgarian|bg|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Cantonese (Traditional)|yue|Γ£ö|Γ£ö| |Γ£ö| |
-|Catalan|ca|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Chinese (Literary)|lzh|Γ£ö|Γ£ö| | | |
-|Chinese Simplified|zh-Hans|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Chinese Traditional|zh-Hant|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|chiShona|sn|Γ£ö|Γ£ö| | | |
-|Croatian|hr|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Czech|cs|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Danish|da|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Dari|prs|Γ£ö|Γ£ö| |Γ£ö| |
-|Divehi|dv|Γ£ö|Γ£ö| |Γ£ö| |
-|Dogri|doi|Γ£ö| | | | |
-|Dutch|nl|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|English|en|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Estonian|et|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Faroese|fo|Γ£ö|Γ£ö| |Γ£ö| |
-|Fijian|fj|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Filipino|fil|Γ£ö|Γ£ö|Γ£ö| | |
-|Finnish|fi|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|French|fr|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|French (Canada)|fr-ca|Γ£ö|Γ£ö| | | |
-|Galician|gl|Γ£ö|Γ£ö| |Γ£ö| |
-|Georgian|ka|Γ£ö|Γ£ö| |Γ£ö| |
-|German|de|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Greek|el|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Gujarati|gu|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Haitian Creole|ht|Γ£ö|Γ£ö| |Γ£ö|Γ£ö|
-|Hausa|ha|Γ£ö|Γ£ö| |Γ£ö| |
-|Hebrew|he|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Hindi|hi|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Hmong Daw (Latin)|mww|Γ£ö|Γ£ö| |Γ£ö|Γ£ö|
-|Hungarian|hu|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Icelandic|is|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Igbo|ig|Γ£ö|Γ£ö| |Γ£ö| |
-|Indonesian|id|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Inuinnaqtun|ikt|Γ£ö|Γ£ö| | | |
-|Inuktitut|iu|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Inuktitut (Latin)|iu-Latn|Γ£ö|Γ£ö| |Γ£ö| |
-|Irish|ga|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Italian|it|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Japanese|ja|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Kannada|kn|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Kashmiri|ks|Γ£ö|Γ£ö | | | |
-|Kazakh|kk|Γ£ö|Γ£ö| |Γ£ö| |
-|Khmer|km|Γ£ö|Γ£ö| |Γ£ö| |
-|Kinyarwanda|rw|Γ£ö|Γ£ö| |Γ£ö| |
-|Klingon|tlh-Latn|Γ£ö| | |Γ£ö|Γ£ö|
-|Klingon (plqaD)|tlh-Piqd|Γ£ö| | |Γ£ö| |
-|Konkani|gom|Γ£ö|Γ£ö| | | |
-|Korean|ko|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Kurdish (Central)|ku|Γ£ö|Γ£ö| |Γ£ö| |
-|Kurdish (Northern)|kmr|Γ£ö|Γ£ö| | | |
-|Kyrgyz (Cyrillic)|ky|Γ£ö|Γ£ö| |Γ£ö| |
-|Lao|lo|Γ£ö|Γ£ö| |Γ£ö| |
-|Latvian|lv|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Lithuanian|lt|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Lingala|ln|Γ£ö|Γ£ö| | | |
-|Lower Sorbian|dsb|Γ£ö| | | | |
-|Luganda|lug|Γ£ö|Γ£ö| | | |
-|Macedonian|mk|Γ£ö|Γ£ö| |Γ£ö| |
-|Maithili|mai|Γ£ö|Γ£ö| | | |
-|Malagasy|mg|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Malay (Latin)|ms|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Malayalam|ml|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Maltese|mt|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Maori|mi|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Marathi|mr|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Mongolian (Cyrillic)|mn-Cyrl|Γ£ö|Γ£ö| |Γ£ö| |
-|Mongolian (Traditional)|mn-Mong|Γ£ö|Γ£ö| | | |
-|Myanmar|my|Γ£ö|Γ£ö| |Γ£ö| |
-|Nepali|ne|Γ£ö|Γ£ö| |Γ£ö| |
-|Norwegian|nb|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Nyanja|nya|Γ£ö|Γ£ö| | | |
-|Odia|or|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Pashto|ps|Γ£ö|Γ£ö| |Γ£ö| |
-|Persian|fa|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Polish|pl|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Portuguese (Brazil)|pt|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Portuguese (Portugal)|pt-pt|Γ£ö|Γ£ö| | | |
-|Punjabi|pa|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Queretaro Otomi|otq|Γ£ö|Γ£ö| |Γ£ö| |
-|Romanian|ro|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Rundi|run|Γ£ö|Γ£ö| | | |
-|Russian|ru|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Samoan (Latin)|sm|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Serbian (Cyrillic)|sr-Cyrl|Γ£ö|Γ£ö| |Γ£ö| |
-|Serbian (Latin)|sr-Latn|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Sesotho|st|Γ£ö|Γ£ö| | | |
-|Sesotho sa Leboa|nso|Γ£ö|Γ£ö| | | |
-|Setswana|tn|Γ£ö|Γ£ö| | | |
-|Sindhi|sd|Γ£ö|Γ£ö| |Γ£ö| |
-|Sinhala|si|Γ£ö|Γ£ö| |Γ£ö| |
-|Slovak|sk|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Slovenian|sl|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Somali (Arabic)|so|Γ£ö|Γ£ö| |Γ£ö| |
-|Spanish|es|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Swahili (Latin)|sw|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Swedish|sv|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Tahitian|ty|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Tamil|ta|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Tatar (Latin)|tt|Γ£ö|Γ£ö| |Γ£ö| |
-|Telugu|te|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Thai|th|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Tibetan|bo|Γ£ö|Γ£ö| |Γ£ö| |
-|Tigrinya|ti|Γ£ö|Γ£ö| |Γ£ö| |
-|Tongan|to|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
-|Turkish|tr|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Turkmen (Latin)|tk|Γ£ö|Γ£ö| |Γ£ö| |
-|Ukrainian|uk|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Upper Sorbian|hsb|Γ£ö|Γ£ö| |Γ£ö| |
-|Urdu|ur|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Uyghur (Arabic)|ug|Γ£ö|Γ£ö| |Γ£ö| |
-|Uzbek (Latin)|uz|Γ£ö|Γ£ö| |Γ£ö| |
-|Vietnamese|vi|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Welsh|cy|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
-|Xhosa|xh|Γ£ö|Γ£ö| |Γ£ö| |
-|Yoruba|yo|Γ£ö|Γ£ö| |Γ£ö| |
-|Yucatec Maya|yua|Γ£ö|Γ£ö| |Γ£ö| |
-|Zulu|zu|Γ£ö|Γ£ö| |Γ£ö| |
+|Bosnian (Latin)|`bs`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Bulgarian|`bg`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Cantonese (Traditional)|`yue`|Γ£ö|Γ£ö| |Γ£ö| |
+|Catalan|`ca`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Chinese (Literary)|`lzh`|Γ£ö|Γ£ö| | | |
+|Chinese Simplified|`zh-Hans`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Chinese Traditional|`zh-Hant`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|chiShona|`sn`|Γ£ö|Γ£ö| | | |
+|Croatian|`hr`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Czech|`cs`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Danish|`da`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Dari|`prs`|Γ£ö|Γ£ö| |Γ£ö| |
+|Divehi|`dv`|Γ£ö|Γ£ö| |Γ£ö| |
+|Dogri|`doi`|Γ£ö| | | | |
+|Dutch|`nl`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|English|`en`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Estonian|`et`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Faroese|`fo`|Γ£ö|Γ£ö| |Γ£ö| |
+|Fijian|`fj`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Filipino|`fil`|Γ£ö|Γ£ö|Γ£ö| | |
+|Finnish|`fi`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|French|`fr`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|French (Canada)|`fr-ca`|Γ£ö|Γ£ö|Γ£ö| | |
+|Galician|`gl`|Γ£ö|Γ£ö| |Γ£ö| |
+|Georgian|`ka`|Γ£ö|Γ£ö| |Γ£ö| |
+|German|`de`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Greek|`el`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Gujarati|`gu`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Haitian Creole|`ht`|Γ£ö|Γ£ö| |Γ£ö|Γ£ö|
+|Hausa|`ha`|Γ£ö|Γ£ö| |Γ£ö| |
+|Hebrew|`he`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Hindi|`hi`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Hmong Daw (Latin)|`mww`|Γ£ö|Γ£ö| |Γ£ö|Γ£ö|
+|Hungarian|`hu`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Icelandic|`is`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Igbo|`ig`|Γ£ö|Γ£ö| |Γ£ö| |
+|Indonesian|`id`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Inuinnaqtun|`ikt`|Γ£ö|Γ£ö| | | |
+|Inuktitut|`iu`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Inuktitut (Latin)|`iu-Latn`|Γ£ö|Γ£ö| |Γ£ö| |
+|Irish|`ga`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Italian|`it`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Japanese|`ja`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Kannada|`kn`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Kashmiri|`ks`|Γ£ö|Γ£ö | | | |
+|Kazakh|`kk`|Γ£ö|Γ£ö| |Γ£ö| |
+|Khmer|`km`|Γ£ö|Γ£ö| |Γ£ö| |
+|Kinyarwanda|`rw`|Γ£ö|Γ£ö| |Γ£ö| |
+|Klingon|`tlh-Latn`|Γ£ö| | |Γ£ö|Γ£ö|
+|Klingon (plqaD)|`tlh-Piqd`|Γ£ö| | |Γ£ö| |
+|Konkani|`gom`|Γ£ö|Γ£ö| | | |
+|Korean|`ko`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Kurdish (Central)|`ku`|Γ£ö|Γ£ö| |Γ£ö| |
+|Kurdish (Northern)|`kmr`|Γ£ö|Γ£ö| | | |
+|Kyrgyz (Cyrillic)|`ky`|Γ£ö|Γ£ö| |Γ£ö| |
+|Lao|`lo`|Γ£ö|Γ£ö| |Γ£ö| |
+|Latvian|`lv`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Lithuanian|`lt`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Lingala|`ln`|Γ£ö|Γ£ö| | | |
+|Lower Sorbian|`dsb`|Γ£ö| | | | |
+|Luganda|`lug`|Γ£ö|Γ£ö| | | |
+|Macedonian|`mk`|Γ£ö|Γ£ö| |Γ£ö| |
+|Maithili|`mai`|Γ£ö|Γ£ö| | | |
+|Malagasy|`mg`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Malay (Latin)|`ms`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Malayalam|`ml`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Maltese|`mt`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Maori|`mi`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Marathi|`mr`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Mongolian (Cyrillic)|`mn-Cyrl`|Γ£ö|Γ£ö| |Γ£ö| |
+|Mongolian (Traditional)|`mn-Mong`|Γ£ö|Γ£ö| | | |
+|Myanmar|`my`|Γ£ö|Γ£ö| |Γ£ö| |
+|Nepali|`ne`|Γ£ö|Γ£ö| |Γ£ö| |
+|Norwegian|`nb`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Nyanja|`nya`|Γ£ö|Γ£ö| | | |
+|Odia|`or`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Pashto|`ps`|Γ£ö|Γ£ö| |Γ£ö| |
+|Persian|`fa`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Polish|`pl`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Portuguese (Brazil)|`pt`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Portuguese (Portugal)|`pt-pt`|Γ£ö|Γ£ö|Γ£ö| | |
+|Punjabi|`pa`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Queretaro Otomi|`otq`|Γ£ö|Γ£ö| |Γ£ö| |
+|Romanian|`ro`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Rundi|`run`|Γ£ö|Γ£ö| | | |
+|Russian|`ru`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Samoan (Latin)|`sm`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Serbian (Cyrillic)|`sr-Cyrl`|Γ£ö|Γ£ö| |Γ£ö| |
+|Serbian (Latin)|`sr-Latn`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Sesotho|`st`|Γ£ö|Γ£ö| | | |
+|Sesotho sa Leboa|`nso`|Γ£ö|Γ£ö| | | |
+|Setswana|`tn`|Γ£ö|Γ£ö| | | |
+|Sindhi|`sd`|Γ£ö|Γ£ö| |Γ£ö| |
+|Sinhala|`si`|Γ£ö|Γ£ö| |Γ£ö| |
+|Slovak|`sk`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Slovenian|`sl`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Somali (Arabic)|`so`|Γ£ö|Γ£ö| |Γ£ö| |
+|Spanish|`es`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Swahili (Latin)|`sw`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Swedish|`sv`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Tahitian|`ty`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Tamil|`ta`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Tatar (Latin)|`tt`|Γ£ö|Γ£ö| |Γ£ö| |
+|Telugu|`te`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Thai|`th`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Tibetan|`bo`|Γ£ö|Γ£ö| |Γ£ö| |
+|Tigrinya|`ti`|Γ£ö|Γ£ö| |Γ£ö| |
+|Tongan|`to`|Γ£ö|Γ£ö|Γ£ö|Γ£ö| |
+|Turkish|`tr`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Turkmen (Latin)|`tk`|Γ£ö|Γ£ö| |Γ£ö| |
+|Ukrainian|`uk`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Upper Sorbian|`hsb`|Γ£ö|Γ£ö| |Γ£ö| |
+|Urdu|`ur`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Uyghur (Arabic)|`ug`|Γ£ö|Γ£ö| |Γ£ö| |
+|Uzbek (Latin)|`uz`|Γ£ö|Γ£ö| |Γ£ö| |
+|Vietnamese|`vi`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Welsh|`cy`|Γ£ö|Γ£ö|Γ£ö|Γ£ö|Γ£ö|
+|Xhosa|`xh`|Γ£ö|Γ£ö| |Γ£ö| |
+|Yoruba|`yo`|Γ£ö|Γ£ö| |Γ£ö| |
+|Yucatec Maya|`yua`|Γ£ö|Γ£ö| |Γ£ö| |
+|Zulu|`zu`|Γ£ö|Γ£ö| |Γ£ö| |
## Document Translation: scanned PDF support
|:-|:-:|:-:|:-:|
|Afrikaans|`af`|Yes|Yes|
|Albanian|`sq`|Yes|Yes|
-|Amharic|`am`|No|No|
+|Amharic|`am`|Yes|No|
|Arabic|`ar`|Yes|Yes|
-|Armenian|`hy`|No|No|
-|Assamese|`as`|No|No|
+|Armenian|`hy`|Yes|No|
+|Assamese|`as`|Yes|No|
|Azerbaijani (Latin)|`az`|Yes|Yes|
-|Bangla|`bn`|No|No|
-|Bashkir|`ba`|No|Yes|
+|Bangla|`bn`|Yes|No|
+|Bashkir|`ba`|Yes|Yes|
|Basque|`eu`|Yes|Yes|
|Bosnian (Latin)|`bs`|Yes|Yes|
|Bulgarian|`bg`|Yes|Yes|
-|Cantonese (Traditional)|`yue`|No|Yes|
+|Cantonese (Traditional)|`yue`|Yes|Yes|
|Catalan|`ca`|Yes|Yes|
-|Chinese (Literary)|`lzh`|No|Yes|
+|Chinese (Literary)|`lzh`|Yes|Yes|
|Chinese Simplified|`zh-Hans`|Yes|Yes|
|Chinese Traditional|`zh-Hant`|Yes|Yes|
|Croatian|`hr`|Yes|Yes|
|Czech|`cs`|Yes|Yes|
|Danish|`da`|Yes|Yes|
-|Dari|`prs`|No|No|
-|Divehi|`dv`|No|No|
+|Dari|`prs`|Yes|No|
+|Divehi|`dv`|Yes|No|
|Dutch|`nl`|Yes|Yes|
|English|`en`|Yes|Yes|
|Estonian|`et`|Yes|Yes|
|French|`fr`|Yes|Yes|
|French (Canada)|`fr-ca`|Yes|Yes|
|Galician|`gl`|Yes|Yes|
-|Georgian|`ka`|No|No|
+|Georgian|`ka`|Yes|No|
|German|`de`|Yes|Yes|
-|Greek|`el`|No|No|
-|Gujarati|`gu`|No|No|
+|Greek|`el`|Yes|No|
+|Gujarati|`gu`|Yes|No|
|Haitian Creole|`ht`|Yes|Yes|
-|Hebrew|`he`|No|No|
+|Hebrew|`he`|Yes|No|
|Hindi|`hi`|Yes|Yes|
|Hmong Daw (Latin)|`mww`|Yes|Yes|
|Hungarian|`hu`|Yes|Yes|
|Icelandic|`is`|Yes|Yes|
|Indonesian|`id`|Yes|Yes|
|Interlingua|`ia`|Yes|Yes|
-|Inuinnaqtun|`ikt`|No|Yes|
-|Inuktitut|`iu`|No|No|
+|Inuinnaqtun|`ikt`|Yes|Yes|
+|Inuktitut|`iu`|Yes|No|
|Inuktitut (Latin)|`iu-Latn`|Yes|Yes|
|Irish|`ga`|Yes|Yes|
|Italian|`it`|Yes|Yes|
|Japanese|`ja`|Yes|Yes|
-|Kannada|`kn`|No|Yes|
+|Kannada|`kn`|Yes|Yes|
|Kazakh (Cyrillic)|`kk`, `kk-cyrl`|Yes|Yes|
|Kazakh (Latin)|`kk-latn`|Yes|Yes|
-|Khmer|`km`|No|No|
-|Klingon|`tlh-Latn`|No|No|
-|Klingon (plqaD)|`tlh-Piqd`|No|No|
+|Khmer|`km`|Yes|No|
+|Klingon|`tlh-Latn`|Yes|No|
+|Klingon (plqaD)|`tlh-Piqd`|Yes|No|
|Korean|`ko`|Yes|Yes|
-|Kurdish (Arabic) (Central)|`ku-arab`,`ku`|No|No|
+|Kurdish (Arabic) (Central)|`ku-arab`,`ku`|Yes|No|
|Kurdish (Latin) (Northern)|`ku-latn`, `kmr`|Yes|Yes|
|Kyrgyz (Cyrillic)|`ky`|Yes|Yes|
-|Lao|`lo`|No|No|
-|Latvian|`lv`|No|Yes|
+|Lao|`lo`|Yes|No|
+|Latvian|`lv`|Yes|Yes|
|Lithuanian|`lt`|Yes|Yes|
-|Macedonian|`mk`|No|Yes|
-|Malagasy|`mg`|No|Yes|
+|Macedonian|`mk`|Yes|Yes|
+|Malagasy|`mg`|Yes|Yes|
|Malay (Latin)|`ms`|Yes|Yes|
-|Malayalam|`ml`|No|Yes|
+|Malayalam|`ml`|Yes|Yes|
|Maltese|`mt`|Yes|Yes|
|Maori|`mi`|Yes|Yes|
|Marathi|`mr`|Yes|Yes|
|Mongolian (Cyrillic)|`mn-Cyrl`|Yes|Yes|
-|Mongolian (Traditional)|`mn-Mong`|No|No|
-|Myanmar (Burmese)|`my`|No|No|
+|Mongolian (Traditional)|`mn-Mong`|Yes|No|
+|Myanmar (Burmese)|`my`|Yes|No|
|Nepali|`ne`|Yes|Yes|
|Norwegian|`nb`|Yes|Yes|
-|Odia|`or`|No|No|
-|Pashto|`ps`|No|No|
-|Persian|`fa`|No|No|
+|Odia|`or`|Yes|No|
+|Pashto|`ps`|Yes|No|
+|Persian|`fa`|Yes|No|
|Polish|`pl`|Yes|Yes|
|Portuguese (Brazil)|`pt`, `pt-br`|Yes|Yes|
|Portuguese (Portugal)|`pt-pt`|Yes|Yes|
-|Punjabi|`pa`|No|Yes|
-|Queretaro Otomi|`otq`|No|Yes|
+|Punjabi|`pa`|Yes|Yes|
+|Queretaro Otomi|`otq`|Yes|Yes|
|Romanian|`ro`|Yes|Yes|
|Russian|`ru`|Yes|Yes|
|Samoan (Latin)|`sm`|Yes|Yes|
-|Serbian (Cyrillic)|`sr-Cyrl`|No|Yes|
+|Serbian (Cyrillic)|`sr-Cyrl`|Yes|Yes|
|Serbian (Latin)|`sr`, `sr-latn`|Yes|Yes|
|Slovak|`sk`|Yes|Yes|
|Slovenian|`sl`|Yes|Yes|
-|Somali|`so`|No|Yes|
+|Somali|`so`|Yes|Yes|
|Spanish|`es`|Yes|Yes|
|Swahili (Latin)|`sw`|Yes|Yes|
|Swedish|`sv`|Yes|Yes|
-|Tahitian|`ty`|No|Yes|
-|Tamil|`ta`|No|Yes|
+|Tahitian|`ty`|Yes|Yes|
+|Tamil|`ta`|Yes|Yes|
|Tatar (Latin)|`tt`|Yes|Yes|
-|Telugu|`te`|No|Yes|
-|Thai|`th`|No|No|
-|Tibetan|`bo`|No|No|
-|Tigrinya|`ti`|No|No|
+|Telugu|`te`|Yes|Yes|
+|Thai|`th`|Yes|No|
+|Tibetan|`bo`|Yes|No|
+|Tigrinya|`ti`|Yes|No|
|Tongan|`to`|Yes|Yes|
|Turkish|`tr`|Yes|Yes|
|Turkmen (Latin)|`tk`|Yes|Yes|
-|Ukrainian|`uk`|No|Yes|
+|Ukrainian|`uk`|Yes|Yes|
|Upper Sorbian|`hsb`|Yes|Yes|
-|Urdu|`ur`|No|No|
-|Uyghur (Arabic)|`ug`|No|No|
+|Urdu|`ur`|Yes|No|
+|Uyghur (Arabic)|`ug`|Yes|No|
|Uzbek (Latin)|`uz`|Yes|Yes|
-|Vietnamese|`vi`|No|Yes|
+|Vietnamese|`vi`|Yes|Yes|
|Welsh|`cy`|Yes|Yes|
|Yucatec Maya|`yua`|Yes|Yes|
|Zulu|`zu`|Yes|Yes|
ai-services Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/use-key-vault.md
Previously updated : 09/13/2022 Last updated : 02/14/2024 zone_pivot_groups: programming-languages-set-twenty-eight # Develop Azure AI services applications with Key Vault
-Use this article to learn how to develop Azure AI services applications securely by using [Azure Key Vault](../key-vault/general/overview.md).
+Learn how to develop Azure AI services applications securely by using [Azure Key Vault](../key-vault/general/overview.md).
-Key Vault reduces the chances that secrets may be accidentally leaked, because you won't store security information in your application.
+Key Vault reduces the risk that secrets may be accidentally leaked, because you avoid storing security information in your application.
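For example, instead of hard-coding an Azure AI services key, an application can fetch it from Key Vault at runtime. The following is a minimal sketch in Python using the `azure-identity` and `azure-keyvault-secrets` packages; the vault URL and secret name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholders: your Key Vault URL and the name of the secret that stores the key.
vault_url = "https://<your-key-vault-name>.vault.azure.net"
secret_name = "AIServicesKey"

# DefaultAzureCredential works with a signed-in developer identity or a managed identity.
credential = DefaultAzureCredential()
client = SecretClient(vault_url=vault_url, credential=credential)

# The key never appears in source code or configuration files.
ai_services_key = client.get_secret(secret_name).value
```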
## Prerequisites
ai-studio Rbac Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/rbac-ai-studio.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 02/14/2024
In this article, you learn how to manage access (authorization) to an Azure AI h
> Applying some roles might limit UI functionality in Azure AI Studio for other users. For example, if a user's role does not have the ability to create a compute instance, the option to create a compute instance will not be available in studio. This behavior is expected, and prevents the user from attempting operations that would return an access denied error. ## Azure AI hub resource vs Azure AI project+ In the Azure AI Studio, there are two levels of access: the Azure AI hub resource and the Azure AI project. The resource is home to the infrastructure (including virtual network setup, customer-managed keys, managed identities, and policies) as well as where you configure your Azure AI services. Azure AI hub resource access can allow you to modify the infrastructure, create new Azure AI hub resources, and create projects. Azure AI projects are a subset of the Azure AI hub resource that act as workspaces that allow you to build and deploy AI systems. Within a project you can develop flows, deploy models, and manage project assets. Project access lets you develop AI end-to-end while taking advantage of the infrastructure setup on the Azure AI hub resource. +
+One of the key benefits of the AI hub and AI project relationship is that developers can create their own projects that inherit the AI hub security settings. You might also have developers who are contributors to a project but can't create new projects.
+ ## Default roles for the Azure AI hub resource The Azure AI Studio has built-in roles that are available by default. In addition to the Reader, Contributor, and Owner roles, the Azure AI Studio has a new role called Azure AI Developer. This role can be assigned to enable users to create connections, compute, and projects, but not let them create new Azure AI hub resources or change permissions of the existing Azure AI hub resource.
Here's a table of the built-in roles and their permissions for the Azure AI hub
The key difference between Contributor and Azure AI Developer is the ability to make new Azure AI hub resources. If you don't want users to make new Azure AI hub resources (due to quota, cost, or just managing how many Azure AI hub resources you have), assign the AI Developer role.
-Only the Owner and Contributor roles allow you to make an Azure AI hub resource. At this time, custom roles won't grant you permission to make Azure AI hub resources.
+Only the Owner and Contributor roles allow you to make an Azure AI hub resource. At this time, custom roles can't grant you permission to make Azure AI hub resources.
The full set of permissions for the new "Azure AI Developer" role is as follows:
Here's a table of the built-in roles and their permissions for the Azure AI proj
| Azure AI Developer | User can perform most actions, including create deployments, but can't assign permissions to project users. |
| Reader | Read only access to the Azure AI project. |
-When a user gets access to a project, two more roles are automatically assigned to the project user. The first role is Reader on the Azure AI hub resource. The second role is the Inference Deployment Operator role, which allows the user to create deployments on the resource group that the project is in. This role is composed of these two permissions: ```"Microsoft.Authorization/*/read"``` and ```"Microsoft.Resources/deployments/*"```.
+When a user is granted access to a project (for example, through the AI Studio permission management), two more roles are automatically assigned to the user. The first role is Reader on the Azure AI hub resource. The second role is the Inference Deployment Operator role, which allows the user to create deployments on the resource group that the project is in. This role is composed of these two permissions: ```"Microsoft.Authorization/*/read"``` and ```"Microsoft.Resources/deployments/*"```.
In order to complete end-to-end AI development and deployment, users only need these two autoassigned roles and either the Contributor or Azure AI Developer role on a *project*.
+To create an AI project, the minimum permission needed is a role that includes the allowed action `Microsoft.MachineLearningServices/workspaces/hubs/join` on the AI hub resource. The Azure AI Developer built-in role has this permission.
+
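To make the permission model concrete, the following is a minimal sketch of a custom role definition that combines the `hubs/join` action with the two auto-assigned permissions described above. The role name, description, and scope are placeholders, and the action list is only illustrative; confirm the exact actions you need before creating a role.

```python
import json

# Illustrative only: a custom role that can join an AI hub to create projects
# and create deployments in the project's resource group.
custom_role = {
    "Name": "AI Project Creator (example)",
    "Description": "Join an AI hub to create projects and create deployments.",
    "Actions": [
        "Microsoft.MachineLearningServices/workspaces/hubs/join",
        "Microsoft.Authorization/*/read",
        "Microsoft.Resources/deployments/*",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],
}

# Write the definition to a file that can be passed to `az role definition create`.
with open("ai-project-creator-role.json", "w") as f:
    json.dump(custom_role, f, indent=2)
```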
+## Dependency service RBAC permissions
+
+The Azure AI hub resource has dependencies on other Azure services. The following table lists the permissions required for these services when you create an Azure AI hub resource. These permissions are needed by the person who creates the AI hub, not by the person who creates an AI project from the AI hub.
+
+| Permission | Purpose |
+||-|
+| `Microsoft.Storage/storageAccounts/write` | Create a storage account with the specified parameters, update its properties or tags, or add a custom domain for the specified storage account. |
+| `Microsoft.KeyVault/vaults/write` | Create a new key vault or update the properties of an existing key vault. Certain properties might require more permissions. |
+| `Microsoft.CognitiveServices/accounts/write` | Write API Accounts. |
+| `Microsoft.Insights/Components/Write` | Write to an application insights component configuration. |
+| `Microsoft.OperationalInsights/workspaces/write` | Create a new workspace or link to an existing workspace by providing the customer ID from the existing workspace. |
++ ## Sample enterprise RBAC setup
-Below is an example of how to set up role-based access control for your Azure AI Studio for an enterprise.
+The following is an example of how to set up role-based access control for your Azure AI Studio for an enterprise.
| Persona | Role | Purpose |
| --- | --- | --- |
| IT admin | Owner of the Azure AI hub resource | The IT admin can ensure the Azure AI hub resource is set up to their enterprise standards. They can assign managers the Contributor role on the resource to enable managers to make new Azure AI hub resources, or assign managers the Azure AI Developer role on the resource to prevent new Azure AI hub resource creation. |
-| Managers | Contributor or Azure AI Developer on the Azure AI hub resource | Managers can create projects for their team and create shared resources (ex: compute and connections) for their group at the Azure AI hub resource level. |
-| Managers | Owner of the Azure AI Project | When managers create a project, they become the project owner. This allows them to add their team/developers to the project. Their team/developers can be added as Contributors or Azure AI Developers to allow them to develop in the project. |
+| Managers | Contributor or Azure AI Developer on the Azure AI hub resource | Managers can manage the AI hub, audit compute resources, audit connections, and create shared connections. |
+| Team lead/Lead developer | Azure AI Developer on the Azure AI hub resource | Lead developers can create projects for their team and create shared resources (ex: compute and connections) at the Azure AI hub resource level. After project creation, project owners can invite other members. |
| Team members/developers | Contributor or Azure AI Developer on the Azure AI Project | Developers can build and deploy AI models within a project and create assets that enable development such as computes and connections. |

## Access to resources created outside of the Azure AI hub resource
ai-studio Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md
description: Learn how to deploy Llama 2 family of large language models with Az
Previously updated : 12/11/2023 Last updated : 02/09/2024 +
+#This functionality is also available in Azure Machine Learning: /azure/machine-learning/how-to-deploy-models-llama.md
# How to deploy Llama 2 family of large language models with Azure AI Studio
+In this article, you learn about the Llama 2 family of large language models (LLMs). You also learn how to use Azure AI Studio to deploy models from this set either as a service with pay-as-you-go billing or with hosted infrastructure in real-time endpoints.
-The Llama 2 family of large language models (LLMs) is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with Reinforcement Learning from Human Feedback (RLHF), called Llama-2-chat.
+The Llama 2 family of LLMs is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with reinforcement learning from human feedback (RLHF), called Llama-2-chat.
-Llama 2 can be deployed as a service with pay-as-you-go billing or with hosted infrastructure in real-time endpoints.
## Deploy Llama 2 models with pay-as-you-go

Certain models in the model catalog can be deployed as a service with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
-Llama 2 models deployed as a service with pay-as-you-go are offered by Meta AI through the Azure Marketplace and they might add more terms of use and pricing.
+Llama 2 models deployed as a service with pay-as-you-go are offered by Meta AI through Microsoft Azure Marketplace, and they might add more terms of use and pricing.
-> [!NOTE]
-> Pay-as-you-go offering is only available in projects created in East US 2 and West US 3 regions.
+### Azure Marketplace model offerings
-### Offerings
-
-The following models are available for Llama 2 when deployed as a service with pay-as-you-go:
+The following models are available in Azure Marketplace for Llama 2 when deployed as a service with pay-as-you-go:
* Meta Llama-2-7B (preview)
* Meta Llama 2 7B-Chat (preview)
The following models are available for Llama 2 when deployed as a service with p
If you need to deploy a different model, [deploy it to real-time endpoints](#deploy-llama-2-models-to-real-time-endpoints) instead.
-### Create a new deployment
+### Prerequisites
-To create a deployment:
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md).
-1. Choose a model you want to deploy from the Azure AI Studio [model catalog](../how-to/model-catalog.md). Alternatively, you can initiate deployment by selecting **+ Create** from the **Deployments** options in the **Build** tab of your project.
+ > [!IMPORTANT]
+ > Pay-as-you-go model deployment offering is only available in AI hubs created in **East US 2** and **West US 3** regions.
-1. On the detail page, select **Deploy** and then **Pay-as-you-go**.
+- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
- :::image type="content" source="../media/deploy-monitor/llama/deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." lightbox="../media/deploy-monitor/llama/deploy-pay-as-you-go.png":::
+ - On the Azure subscription, to subscribe the Azure AI project to the Azure Marketplace offering, once for each project, per offering:
+ - `Microsoft.MarketplaceOrdering/agreements/offers/plans/read`
+ - `Microsoft.MarketplaceOrdering/agreements/offers/plans/sign/action`
+ - `Microsoft.MarketplaceOrdering/offerTypes/publishers/offers/plans/agreements/read`
+ - `Microsoft.Marketplace/offerTypes/publishers/offers/plans/agreements/read`
+ - `Microsoft.SaaS/register/action`
+
+ - On the resource group, to create and use the SaaS resource:
+ - `Microsoft.SaaS/resources/read`
+ - `Microsoft.SaaS/resources/write`
+
+ - On the Azure AI project, to deploy endpoints (the Azure AI Developer role contains these permissions already):
+ - `Microsoft.MachineLearningServices/workspaces/marketplaceModelSubscriptions/*`
+ - `Microsoft.MachineLearningServices/workspaces/serverlessEndpoints/*`
-1. Select the project where you want to create a deployment.
+ For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
-1. On the deployment wizard, you see the option to explore the more terms and conditions applied to the selected model and its pricing. Select **Azure Marketplace Terms** to learn about it.
- :::image type="content" source="../media/deploy-monitor/llama/deploy-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="../media/deploy-monitor/llama/deploy-marketplace-terms.png":::
+### Create a new deployment
-1. If this is the first time you deployed the model in the project, you have to sign up your project for the particular offering from the Azure Marketplace. Each project has its own connection to the marketplace's offering, which, allows you to control and monitor spending per project. Select **Subscribe and Deploy**.
+To create a deployment:
- > [!NOTE]
- > Subscribing a project to a particular offering from the Azure Marketplace requires **Contributor** or **Owner** access at the subscription level where the project is created.
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Choose the model you want to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
-1. Once you sign up the project for the offering, subsequent deployments don't require signing up (neither subscription-level permissions). If this is your case, select **Continue to deploy**.
+ Alternatively, you can initiate deployment by starting from your project in AI Studio. From the **Build** tab of your project, select the **Deployments** option, then select **+ Create**.
- :::image type="content" source="../media/deploy-monitor/llama/deploy-pay-as-you-go-project.png" alt-text="A screenshot showing a project that is already subscribed to the offering." lightbox="../media/deploy-monitor/llama/deploy-pay-as-you-go-project.png":::
+1. On the model's **Details** page, select **Deploy** and then **Pay-as-you-go**.
-1. Give the deployment a name. Such name is part of the deployment API URL, which requires to be unique on each Azure region.
+ :::image type="content" source="../media/deploy-monitor/llama/deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." lightbox="../media/deploy-monitor/llama/deploy-pay-as-you-go.png":::
- :::image type="content" source="../media/deploy-monitor/llama/deployment-name.png" alt-text="A screenshot showing how to indicate the name of the deployment you want to create." lightbox="../media/deploy-monitor/llama/deployment-name.png":::
+1. Select the project in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **West US 3** region.
+1. On the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
+1. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering (for example, Llama-2-70b) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**.
-1. Select **Deploy**.
+ > [!NOTE]
+ > Subscribing a project to a particular Azure Marketplace offering (in this case, Llama-2-70b) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
-1. Once the deployment is ready, you're redirected to the deployments page.
+ :::image type="content" source="../media/deploy-monitor/llama/deploy-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="../media/deploy-monitor/llama/deploy-marketplace-terms.png":::
-1. Once your deployment is ready, you can select **Open in playground** to start interacting with the model.
+1. Once you sign up the project for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent deployments. If this scenario applies to you, select **Continue to deploy**.
-1. You can also take note of the **Endpoint** URL and the **Secret** to call the deployment and generate completions.
+ :::image type="content" source="../media/deploy-monitor/llama/deploy-pay-as-you-go-project.png" alt-text="A screenshot showing a project that is already subscribed to the offering." lightbox="../media/deploy-monitor/llama/deploy-pay-as-you-go-project.png":::
-1. Additionally, you can find the deployment details, URL, and access keys navigating to the tab **Build** > **Components** > **Deployments**.
+1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region.
+
+ :::image type="content" source="../media/deploy-monitor/llama/deployment-name.png" alt-text="A screenshot showing how to indicate the name of the deployment you want to create." lightbox="../media/deploy-monitor/llama/deployment-name.png":::
+
+1. Select **Deploy**. Wait until the deployment is ready and you're redirected to the Deployments page.
+1. Select **Open in playground** to start interacting with the model.
+1. You can return to the Deployments page, select the deployment, and note the endpoint's **Target** URL and the Secret **Key**, which you can use to call the deployment and generate completions.
+1. You can always find the endpoint's details, URL, and access keys by navigating to the **Build** tab and selecting **Deployments** from the Components section.
To learn about billing for Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Llama 2 models deployed as a service](#cost-and-quota-considerations-for-llama-2-models-deployed-as-a-service).

### Consume Llama 2 models as a service
-Models deployed as a service can be consumed using either the chat or the completions API, depending on the type of model you have deployed.
+Models deployed as a service can be consumed using either the chat or the completions API, depending on the type of model you deployed.
1. On the **Build** page, select **Deployments**.
Models deployed as a service can be consumed using either the chat or the comple
1. Select **Open in playground**.
-1. Select **View code** and copy the **Endpoint** URL and the **Key** token values.
+1. Select **View code** and copy the **Endpoint** URL and the **Key** value.
+
+1. Make an API request based on the type of model you deployed.
+
+ - For completions models, such as `Llama-2-7b`, use the [`/v1/completions`](#completions-api) API.
+ - For chat models, such as `Llama-2-7b-chat`, use the [`/v1/chat/completions`](#chat-api) API.
-1. Make an API request depending on the type of model you deployed. For completions models such as `Llama-2-7b` use the [`/v1/completions`](#completions-api) API, for chat models such as `Llama-2-7b-chat` use the [`/v1/chat/completions`](#chat-api) API. See the [reference](#reference-for-llama-2-models-deployed-as-a-service) section for more details with examples.
+ For more information on using the APIs, see the [reference](#reference-for-llama-2-models-deployed-as-a-service) section.
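For example, a chat deployment can be called with a few lines of Python. This is a minimal sketch: the endpoint URL and key are the values copied from the deployment's details page, and the bearer-style authorization header is an assumption; use the header shown in **View code** for your deployment if it differs.

```python
import os
import requests

# Placeholders: the endpoint URL and key from the deployment's details page.
endpoint = os.environ["LLAMA_ENDPOINT"]  # for example, "https://<deployment-name>.<region>.inference.ai.azure.com"
key = os.environ["LLAMA_KEY"]

# Chat model (for example, Llama-2-7b-chat): use the /v1/chat/completions route.
response = requests.post(
    f"{endpoint}/v1/chat/completions",
    headers={"Authorization": f"Bearer {key}", "Content-Type": "application/json"},
    json={
        "messages": [{"role": "user", "content": "Summarize what a serverless deployment is."}],
        "max_tokens": 128,
        "temperature": 0.7,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```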
### Reference for Llama 2 models deployed as a service
Payload is a JSON formatted string containing the following parameters:
| Key | Type | Default | Description |
|--|--|--|--|
-| `prompt` | `string` | No default. This must be specified. | The prompt to send to the model. |
+| `prompt` | `string` | No default. This value must be specified. | The prompt to send to the model. |
| `stream` | `boolean` | `False` | Streaming allows the generated tokens to be sent as data-only server-sent events whenever they become available. |
| `max_tokens` | `integer` | `16` | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` can't exceed the model's context length. |
-| `top_p` | `float` | `1` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering it or `temperature` but not both. |
-| `temperature` | `float` | `1` | The sampling temperature to use, between 0 and 2. Higher values mean the model samples more broadly the distribution of tokens. Zero means greedy sampling. It's recommend altering this or `top_p` but not both. |
-| `n` | `integer` | `1` | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. |
+| `top_p` | `float` | `1` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering `top_p` or `temperature`, but not both. |
+| `temperature` | `float` | `1` | The sampling temperature to use, between 0 and 2. Higher values mean the model samples more broadly the distribution of tokens. Zero means greedy sampling. We recommend altering this or `top_p`, but not both. |
+| `n` | `integer` | `1` | How many completions to generate for each prompt. <br>Note: Because this parameter generates many completions, it can quickly consume your token quota. |
| `stop` | `array` | `null` | String or a list of strings containing the word where the API stops generating further tokens. The returned text won't contain the stop sequence. |
-| `best_of` | `integer` | `1` | Generates best_of completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return ΓÇô best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota.|
-| `logprobs` | `integer` | `null` | A number indicating to include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 10, the API returns a list of the 10 most likely tokens. the API always returns the logprob of the sampled token, so there might be up to logprobs+1 elements in the response. |
+| `best_of` | `integer` | `1` | Generates `best_of` completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return; `best_of` must be greater than `n`. <br>Note: Because this parameter generates many completions, it can quickly consume your token quota.|
+| `logprobs` | `integer` | `null` | A number indicating to include the log probabilities on the `logprobs` most likely tokens and the chosen tokens. For example, if `logprobs` is 10, the API returns a list of the 10 most likely tokens. The API always returns the logprob of the sampled token, so there might be up to `logprobs`+1 elements in the response. |
| `presence_penalty` | `float` | `null` | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| `ignore_eos` | `boolean` | `True` | Whether to ignore the EOS token and continue generating tokens after the EOS token is generated. |
-| `use_beam_search` | `boolean` | `False` | Whether to use beam search instead of sampling. In such case, `best_of must > 1` and `temperature` must be `0`. |
-| `stop_token_ids` | `array` | `null` | List of tokens' ID that stops the generation when they're generated. The returned output contains the stop tokens unless the stop tokens are special tokens. |
+| `use_beam_search` | `boolean` | `False` | Whether to use beam search instead of sampling. In such case, `best_of` must be greater than `1` and `temperature` must be `0`. |
+| `stop_token_ids` | `array` | `null` | List of IDs for tokens that, when generated, stop further token generation. The returned output contains the stop tokens unless the stop tokens are special tokens. |
| `skip_special_tokens` | `boolean` | `null` | Whether to skip special tokens in the output. |

#### Example
The response payload is a dictionary with the following fields.
| `choices` | `array` | The list of completion choices the model generated for the input prompt. |
| `created` | `integer` | The Unix timestamp (in seconds) of when the completion was created. |
| `model` | `string` | The model_id used for completion. |
-| `object` | `string` | The object type, which is always "text_completion". |
+| `object` | `string` | The object type, which is always `text_completion`. |
| `usage` | `object` | Usage statistics for the completion request. |

> [!TIP]
-> In the streaming mode, for each chunk of response, `finish_reason` is always `null`, except from the last one which is terminated by a payload `[DONE]`.
+> In the streaming mode, for each chunk of response, `finish_reason` is always `null`, except for the last one, which is terminated by a payload `[DONE]`.
-The `choice` object is a dictionary with the following fields.
+The `choices` object is a dictionary with the following fields.
| Key | Type | Description |
||--||
-| `index` | `integer` | Choice index. When best_of > 1, the index in this array might not be in order and might not be 0 to n-1. |
+| `index` | `integer` | Choice index. When `best_of` > 1, the index in this array might not be in order and might not be 0 to n-1. |
| `text` | `string` | Completion result. |
-| `finish_reason` | `string` | The reason the model stopped generating tokens: `stop`, model hit a natural stop point, or a provided stop sequence; `length`, if max number of tokens have been reached; `content_filter`, When RAI moderates and CMP forces moderation; `content_filter_error`, an error during moderation and wasn't able to make decision on the response; `null`, API response still in progress or incomplete. |
+| `finish_reason` | `string` | The reason the model stopped generating tokens: <br>- `stop`: the model hit a natural stop point or a provided stop sequence. <br>- `length`: the maximum number of tokens was reached. <br>- `content_filter`: RAI moderated the content and CMP forced moderation. <br>- `content_filter_error`: an error occurred during moderation and a decision couldn't be made on the response. <br>- `null`: the API response is still in progress or incomplete. |
| `logprobs` | `object` | The log probabilities of the generated tokens in the output text. |

The `usage` object is a dictionary with the following fields.

| Key | Type | Value |
| `prompt_tokens` | `integer` | Number of tokens in the prompt. |
| `completion_tokens` | `integer` | Number of tokens generated in the completion. |
| `total_tokens` | `integer` | Total tokens. |
-
+ The `logprobs` object is a dictionary with the following fields:

| Key | Type | Value |
||-|-|
| `text_offsets` | `array` of `integers` | The position or index of each token in the completion output. |
-| `token_logprobs` | `array` of `float` | Selected logprobs from dictionary in top_logprobs array |
-| `tokens` | `array` of `string` | Selected tokens |
+| `token_logprobs` | `array` of `float` | Selected `logprobs` from dictionary in `top_logprobs` array. |
+| `tokens` | `array` of `string` | Selected tokens. |
| `top_logprobs` | `array` of `dictionary` | Array of dictionaries. In each dictionary, the key is the token and the value is the log probability. |

#### Example
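To make the response tables concrete, the following sketch parses a hypothetical response shaped like the fields described above; every value is invented for illustration and isn't taken from the service:

```python
import json

# A hypothetical completions response shaped like the tables above; all values
# are invented for illustration only.
response = json.loads("""
{
  "object": "text_completion",
  "created": 1700000000,
  "model": "llama-2-7b",
  "choices": [
    {
      "index": 0,
      "text": "Hello!",
      "finish_reason": "stop",
      "logprobs": {
        "text_offsets": [0, 5],
        "token_logprobs": [-0.1, -0.3],
        "tokens": ["Hello", "!"],
        "top_logprobs": [{"Hello": -0.1}, {"!": -0.3}]
      }
    }
  ],
  "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7}
}
""")

# Read the first choice, its stop reason, and the token usage.
choice = response["choices"][0]
print(choice["finish_reason"], choice["text"])
print(response["usage"]["total_tokens"])
```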
The payload is a JSON-formatted string containing the following parameters:
| Key | Type | Default | Description |
|--|--|--|--|
-| `messages` | `string` | No default. This must be specified. | The message or history of messages to prompt the model with. |
+| `messages` | `string` | No default. This value must be specified. | The message or history of messages to use to prompt the model. |
| `stream` | `boolean` | `False` | Streaming allows the generated tokens to be sent as data-only server-sent events whenever they become available. |
| `max_tokens` | `integer` | `16` | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` can't exceed the model's context length. |
-| `top_p` | `float` | `1` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering it or `temperature` but not both. |
-| `temperature` | `float` | `1` | The sampling temperature to use, between 0 and 2. Higher values mean the model samples more broadly the distribution of tokens. Zero means greedy sampling. It's recommend altering this or `top_p` but not both. |
-| `n` | `integer` | `1` | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. |
+| `top_p` | `float` | `1` | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering `top_p` or `temperature`, but not both. |
+| `temperature` | `float` | `1` | The sampling temperature to use, between 0 and 2. Higher values mean the model samples more broadly from the distribution of tokens. Zero means greedy sampling. We recommend altering this or `top_p`, but not both. |
+| `n` | `integer` | `1` | How many completions to generate for each prompt. <br>Note: Because this parameter generates many completions, it can quickly consume your token quota. |
| `stop` | `array` | `null` | String or a list of strings containing the word where the API stops generating further tokens. The returned text won't contain the stop sequence. |
-| `best_of` | `integer` | `1` | Generates best_of completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n. Note: Because this parameter generates many completions, it can quickly consume your token quota.|
-| `logprobs` | `integer` | `null` | A number indicating to include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 10, the API returns a list of the 10 most likely tokens. the API will always return the logprob of the sampled token, so there might be up to logprobs+1 elements in the response. |
+| `best_of` | `integer` | `1` | Generates `best_of` completions server-side and returns the "best" (the one with the lowest log probability per token). Results can't be streamed. When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return; `best_of` must be greater than `n`. <br>Note: Because this parameter generates many completions, it can quickly consume your token quota.|
+| `logprobs` | `integer` | `null` | The number of most likely tokens to include log probabilities for, in addition to the chosen tokens. For example, if `logprobs` is 10, the API returns a list of the 10 most likely tokens. The API always returns the logprob of the sampled token, so there might be up to `logprobs`+1 elements in the response. |
| `presence_penalty` | `float` | `null` | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| `ignore_eos` | `boolean` | `True` | Whether to ignore the EOS token and continue generating tokens after the EOS token is generated. |
-| `use_beam_search` | `boolean` | `False` | Whether to use beam search instead of sampling. In such case, `best_of must > 1` and `temperature` must be `0`. |
-| `stop_token_ids` | `array` | `null` | List of token IDs that stop the generation when they are generated. The returned output contains the stop tokens unless the stop tokens are special tokens. |
+| `use_beam_search` | `boolean` | `False` | Whether to use beam search instead of sampling. In that case, `best_of` must be greater than `1` and `temperature` must be `0`. |
+| `stop_token_ids` | `array` | `null` | List of IDs for tokens that, when generated, stop further token generation. The returned output contains the stop tokens unless the stop tokens are special tokens.|
| `skip_special_tokens` | `boolean` | `null` | Whether to skip special tokens in the output. |
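As a hedged illustration of these parameters, a chat request body might be assembled as in the following Python sketch; the `role`/`content` shape of each entry is an assumption based on the `messages` object described next, and all values are placeholders:

```python
import json

# Illustrative chat completions request body; the role/content shape of each
# message is an assumption for demonstration, and all values are placeholders.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is a real-time endpoint?"},
    ],
    "max_tokens": 128,
    "temperature": 0.7,
    "stream": False,
}

# Serialize to the JSON string that would be sent to the chat completions endpoint.
print(json.dumps(payload, indent=2))
```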
-The `message` object has the following fields:
+The `messages` object has the following fields:
| Key | Type | Value |
|--|--||
The response payload is a dictionary with the following fields.
| `choices` | `array` | The list of completion choices the model generated for the input messages. |
| `created` | `integer` | The Unix timestamp (in seconds) of when the completion was created. |
| `model` | `string` | The model_id used for completion. |
-| `object` | `string` | The object type, which is always "chat.completion". |
+| `object` | `string` | The object type, which is always `chat.completion`. |
| `usage` | `object` | Usage statistics for the completion request. |

> [!TIP]
-> In the streaming mode, for each chunk of response, `finish_reason` is always `null`, except from the last one which is terminated by a payload `[DONE]`. In each `choice` object, the key for `message` is changed by `delta`.
+> In streaming mode, `finish_reason` is always `null` for each response chunk, except for the last chunk, which is terminated by a `[DONE]` payload. In each `choices` object, the `messages` key is replaced by `delta`.
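The following minimal sketch shows how a client might consume such a stream, assuming data-only server-sent events prefixed with `data:`; the endpoint URL, headers, request path, and the exact shape of `delta` are assumptions for illustration, not the service's documented API:

```python
import json
import requests

# Minimal streaming sketch. The URL, headers, request path, and the shape of
# the `delta` field are placeholders/assumptions; adapt them to your deployment.
url = "https://<your-endpoint>/v1/chat/completions"   # placeholder
headers = {"Authorization": "Bearer <your-key>"}       # placeholder
payload = {"messages": [{"role": "user", "content": "Hello"}], "stream": True}

with requests.post(url, headers=headers, json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        if not line:
            continue
        data = line.decode("utf-8").removeprefix("data: ")
        if data == "[DONE]":                     # final terminator described in the tip
            break
        chunk = json.loads(data)
        choice = chunk["choices"][0]
        delta = choice.get("delta", "")          # delta replaces messages while streaming
        content = delta.get("content", "") if isinstance(delta, dict) else delta
        print(content, end="", flush=True)
        if choice.get("finish_reason") is not None:
            print("\nfinish_reason:", choice["finish_reason"])
```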
-The `choice` object is a dictionary with the following fields.
+The `choices` object is a dictionary with the following fields.
| Key | Type | Description |
||--|--|
-| `index` | `integer` | Choice index. When best_of > 1, the index in this array might not be in order and might not be 0 to n-1. |
-| `message` or `delta` | `string` | Chat completion result in `message` object. When streaming mode is used, `delta` key is used. |
-| `finish_reason` | `string` | The reason the model stopped generating tokens: `stop`, model hit a natural stop point, or a provided stop sequence; `length`, if max number of tokens have been reached; `content_filter`, When RAI moderates and CMP forces moderation; `content_filter_error`, an error during moderation and wasn't able to make decision on the response; `null`, API response still in progress or incomplete. |
+| `index` | `integer` | Choice index. When `best_of` > 1, the index in this array might not be in order and might not be `0` to `n-1`. |
+| `messages` or `delta` | `string` | Chat completion result in the `messages` object. When streaming mode is used, the `delta` key is used. |
+| `finish_reason` | `string` | The reason the model stopped generating tokens: <br>- `stop`: the model hit a natural stop point or a provided stop sequence. <br>- `length`: the maximum number of tokens was reached. <br>- `content_filter`: RAI moderated the content and CMP forced moderation. <br>- `content_filter_error`: an error occurred during moderation and a decision couldn't be made on the response. <br>- `null`: the API response is still in progress or incomplete. |
| `logprobs` | `object` | The log probabilities of the generated tokens in the output text. |
The `usage` object is a dictionary with the following fields.
| `prompt_tokens` | `integer` | Number of tokens in the prompt. |
| `completion_tokens` | `integer` | Number of tokens generated in the completion. |
| `total_tokens` | `integer` | Total tokens. |
-
+ The `logprobs` object is a dictionary with the following fields:

| Key | Type | Value |
||-|-|
| `text_offsets` | `array` of `integers` | The position or index of each token in the completion output. |
-| `token_logprobs` | `array` of `float` | Selected logprobs from dictionary in top_logprobs array |
-| `tokens` | `array` of `string` | Selected tokens |
+| `token_logprobs` | `array` of `float` | Selected `logprobs` from dictionary in `top_logprobs` array. |
+| `tokens` | `array` of `string` | Selected tokens. |
| `top_logprobs` | `array` of `dictionary` | Array of dictionaries. In each dictionary, the key is the token and the value is the log probability. |

#### Example
The following is an example response:
## Deploy Llama 2 models to real-time endpoints
-Llama 2 models can be deployed to real-time endpoints in AI Studio. When deployed to real-time endpoints, you can select all the details about on the infrastructure running the model including the virtual machines used to run it and the number of instances to handle the load you're expecting. Models deployed in this modality consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints.
+Apart from deploying with the pay-as-you-go managed service, you can also deploy Llama 2 models to real-time endpoints in AI Studio. When deployed to real-time endpoints, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. Models deployed to real-time endpoints consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints.
### Create a new deployment

# [Studio](#tab/azure-studio)
-Follow the steps below to deploy a model such as `Llama-2-7b-chat` to a real-time endpoint in [Azure AI Studio](https://ai.azure.com).
+Follow these steps to deploy a model such as `Llama-2-7b-chat` to a real-time endpoint in [Azure AI Studio](https://ai.azure.com).
-1. Choose a model you want to deploy from AI Studio [model catalog](./model-catalog.md). Alternatively, you can initiate deployment by selecting **Create** from `your project`>`deployments`
+1. Choose the model you want to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
-1. On the detail page, select **Deploy** and then **Real-time endpoint**.
+ Alternatively, you can initiate deployment by starting from your project in AI Studio. From the **Build** tab of your project, select the **Deployments** option, then select **+ Create**.
-1. Select if you want to enable **Azure AI Content Safety (preview)**.
+1. On the model's **Details** page, select **Deploy** and then **Real-time endpoint**.
- > [!TIP]
- > Deploying Llama 2 models with Azure AI Content Safety (preview) is currently only supported using the Python SDK.
+ :::image type="content" source="../media/deploy-monitor/llama/deploy-real-time-endpoint.png" alt-text="A screenshot showing how to deploy a model with the real-time endpoint option." lightbox="../media/deploy-monitor/llama/deploy-real-time-endpoint.png":::
-1. Select **Proceed**.
-
-1. Select the project where you want to create a deployment.
+1. On the **Deploy with Azure AI Content Safety (preview)** page, select **Skip Azure AI Content Safety** so that you can continue to deploy the model using the UI.
> [!TIP]
- > If you don't have enough quota available in the selected project, you can use the option **I want to use shared quota and I acknowledge that this endpoint will be deleted in 168 hours**.
+ > In general, we recommend that you select **Enable Azure AI Content Safety (Recommended)** for deployment of the Llama model. This deployment option is currently only supported using the Python SDK and it happens in a notebook.
-1. Select the **Virtual machine** and the instance count you want to assign to the deployment.
+1. Select **Proceed**.
+1. Select the project where you want to create a deployment.
-1. Select if you want to create this deployment as part of a new endpoint or an existing one. Endpoints can host multiple deployments while keeping resources configuration exclusive for each of them. Deployments under the same endpoint share the endpoint URI and its access keys.
+ > [!TIP]
+ > If you don't have enough quota available in the selected project, you can use the option **I want to use shared quota and I acknowledge that this endpoint will be deleted in 168 hours**.
+
+1. Select the **Virtual machine** and the **Instance count** that you want to assign to the deployment.
+1. Select if you want to create this deployment as part of a new endpoint or an existing one. Endpoints can host multiple deployments while keeping resource configuration exclusive for each of them. Deployments under the same endpoint share the endpoint URI and its access keys.
+
1. Indicate if you want to enable **Inferencing data collection (preview)**.
-1. Select **Deploy**.
+1. Select **Deploy**. After a few moments, the endpoint's **Details** page opens up.
-1. You land on the deployment details page. Select **Consume** to obtain code samples that can be used to consume the deployed model in your application.
+1. Wait for the endpoint creation and deployment to finish. This step can take a few minutes.
+1. Select the **Consume** tab of the deployment to obtain code samples that can be used to consume the deployed model in your application.
# [Python SDK](#tab/python)
-You can use the Azure AI Generative SDK to deploy an open model. In this example, you deploy a `Llama-2-7b-chat` model.
+Follow these steps to deploy an open model such as `Llama-2-7b-chat` to a real-time endpoint, using the Azure AI Generative SDK.
-```python
-# Import the libraries
-from azure.ai.resources.client import AIClient
-from azure.ai.resources.entities.deployment import Deployment
-from azure.ai.resources.entities.models import PromptflowModel
-from azure.identity import DefaultAzureCredential
-```
+1. Import the required libraries.
-Credential info can be found under your project settings on Azure AI Studio. You can go to Settings by selecting the gear icon on the bottom of the left navigation UI.
+ ```python
+ # Import the libraries
+ from azure.ai.resources.client import AIClient
+ from azure.ai.resources.entities.deployment import Deployment
+ from azure.ai.resources.entities.models import PromptflowModel
+ from azure.identity import DefaultAzureCredential
+ ```
-```python
-credential = DefaultAzureCredential()
-client = AIClient(
- credential=credential,
- subscription_id="<xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx>",
- resource_group_name="<YOUR_RESOURCE_GROUP_NAME>",
- project_name="<YOUR_PROJECT_NAME>",
-)
-```
+1. Provide your credentials. Credentials can be found under your project settings in Azure AI Studio. You can go to Settings by selecting the gear icon at the bottom of the left navigation UI.
-Define the model and the deployment. `The model_id` can be found on the model card in the Azure AI Studio [model catalog](../how-to/model-catalog.md).
+ ```python
+ credential = DefaultAzureCredential()
+ client = AIClient(
+ credential=credential,
+ subscription_id="<xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx>",
+ resource_group_name="<YOUR_RESOURCE_GROUP_NAME>",
+ project_name="<YOUR_PROJECT_NAME>",
+ )
+ ```
-```python
-model_id = "azureml://registries/azureml/models/Llama-2-7b-chat/versions/12"
-deployment_name = "my-llam27bchat-deployment"
+1. Define the model and the deployment. The `model_id` can be found on the model card in the Azure AI Studio [model catalog](../how-to/model-catalog.md).
-deployment = Deployment(
- name=deployment_name,
- model=model_id,
-)
-```
+ ```python
+ model_id = "azureml://registries/azureml/models/Llama-2-7b-chat/versions/12"
+ deployment_name = "my-llam27bchat-deployment"
+
+ deployment = Deployment(
+ name=deployment_name,
+ model=model_id,
+ )
+ ```
-Deploy the model.
+1. Deploy the model.
+
+ ```python
+ client.deployments.create_or_update(deployment)
+ ```
-```python
-client.deployments.create_or_update(deployment)
-```
-### Consuming Llama 2 models deployed to real-time endpoints
+### Consume Llama 2 models deployed to real-time endpoints
-For reference about how to invoke Llama 2 models deployed to real-time endpoints, see the model card in the Azure AI Studio [model catalog](../how-to/model-catalog.md).
+For reference about how to invoke Llama 2 models deployed to real-time endpoints, see the model's card in the Azure AI Studio [model catalog](../how-to/model-catalog.md). Each model's card has an overview page that includes a description of the model, samples for code-based inferencing, fine-tuning, and model evaluation.
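The code samples on the deployment's **Consume** tab are the authoritative reference. As a rough, hedged sketch only, invoking a real-time endpoint typically amounts to posting JSON to its scoring URI with the endpoint key; the URI, key, and request schema below are placeholders and may differ for your model:

```python
import json
import urllib.request

# Illustrative sketch only: the scoring URI, key, and request schema are
# placeholders. Prefer the samples from the deployment's Consume tab.
scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"  # placeholder
api_key = "<your-endpoint-key>"                                                # placeholder

body = json.dumps(
    {"input_data": {"input_string": ["What is Azure AI Studio?"]}}  # assumed schema
).encode("utf-8")

request = urllib.request.Request(
    scoring_uri,
    data=body,
    headers={"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"},
)

# Send the request and print the raw JSON response from the deployed model.
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```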
## Cost and quotas
For reference about how to invoke Llama 2 models deployed to real-time endpoints
Llama models deployed as a service are offered by Meta through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying or [fine-tuning the models](./fine-tune-model-llama.md).
-Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine tuning, However, multiple meters are available to track each scenario independently.
+Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference and fine-tuning; however, multiple meters are available to track each scenario independently.
-See [monitor costs for models offered throughout the Azure Marketplace](./costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace) to learn more about how to track costs.
+For more information on how to track costs, see [monitor costs for models offered through the Azure Marketplace](./costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace).
:::image type="content" source="../media/cost-management/marketplace/costs-model-as-service-cost-details.png" alt-text="A screenshot showing different resources corresponding to different model offers and their associated meters." lightbox="../media/cost-management/marketplace/costs-model-as-service-cost-details.png":::
-Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits don't suffice your scenarios.
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
### Cost and quota considerations for Llama 2 models deployed as real-time endpoints
-Deploying Llama models and inferencing with real-time endpoints can be done by consuming Virtual Machine (VM) core quota that is assigned to your subscription a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once that happens, you can request for quota increase.
+For deployment and inferencing of Llama models with real-time endpoints, you consume virtual machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once you reach this limit, you can request a quota increase.
## Content filtering
-Models deployed as a service with pay-as-you-go are protected by Azure AI Content Safety. When deployed to real-time endpoints, you can opt out for this capability. Both the prompt and completion are run through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](../concepts/content-filtering.md).
+Models deployed as a service with pay-as-you-go are protected by Azure AI Content Safety. When deployed to real-time endpoints, you can opt out of this capability. With Azure AI Content Safety enabled, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](../concepts/content-filtering.md).
## Next steps

-- Learn more about what you can do in [Azure AI Studio](../what-is-ai-studio.md)
-- Get answers to frequently asked questions in the [Azure AI FAQ article](../faq.yml)
+- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [Fine-tune a Llama 2 model in Azure AI Studio](fine-tune-model-llama.md)
+- [Azure AI FAQ article](../faq.yml)
aks App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md
Title: Azure Kubernetes Service (AKS) managed nginx Ingress with the application routing add-on
+ Title: Azure Kubernetes Service (AKS) managed NGINX ingress with the application routing add-on
description: Use the application routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS).
Last updated 11/21/2023
-# Managed nginx Ingress with the application routing add-on
+# Managed NGINX ingress with the application routing add-on
-One way to route Hypertext Transfer Protocol (HTTP) and secure (HTTPS) traffic to applications running on an Azure Kubernetes Service (AKS) cluster is to use the [Kubernetes Ingress object][kubernetes-ingress-object-overview]. When you create an Ingress object that uses the application routing add-on nginx Ingress classes, the add-on creates, configures, and manages one or more Ingress controllers in your AKS cluster.
+One way to route Hypertext Transfer Protocol (HTTP) and secure (HTTPS) traffic to applications running on an Azure Kubernetes Service (AKS) cluster is to use the [Kubernetes Ingress object][kubernetes-ingress-object-overview]. When you create an Ingress object that uses the application routing add-on NGINX Ingress classes, the add-on creates, configures, and manages one or more Ingress controllers in your AKS cluster.
This article shows you how to deploy and configure a basic Ingress controller in your AKS cluster.
-## Application routing add-on with nginx features
+## Application routing add-on with NGINX features
-The application routing add-on with nginx delivers the following:
+The application routing add-on with NGINX delivers the following:
-* Easy configuration of managed nginx Ingress controllers based on [Kubernetes nginx Ingress controller][kubernetes-nginx-ingress].
+* Easy configuration of managed NGINX Ingress controllers based on [Kubernetes NGINX Ingress controller][kubernetes-nginx-ingress].
* Integration with [Azure DNS][azure-dns-overview] for public and private zone management
* SSL termination with certificates stored in Azure Key Vault.
With the retirement of [Open Service Mesh][open-service-mesh-docs] (OSM) by the
- The application routing add-on supports up to five Azure DNS zones.
- All global Azure DNS zones integrated with the add-on have to be in the same resource group.
- All private Azure DNS zones integrated with the add-on have to be in the same resource group.
-- Editing any resources in the `app-routing-system` namespace, including the Ingress-nginx ConfigMap isn't supported.
+- Editing any resources in the `app-routing-system` namespace, including the Ingress-nginx ConfigMap, isn't supported.
## Enable application routing using Azure CLI
az aks approuting enable -g <ResourceGroupName> -n <ClusterName>
The following add-ons are required to support this configuration:
-* **open-service-mesh**: If you require encrypted intra cluster traffic (recommended) between the nginx Ingress and your services, the Open Service Mesh add-on is required which provides mutual TLS (mTLS).
+* **open-service-mesh**: If you require encrypted intra-cluster traffic (recommended) between the NGINX Ingress and your services, the Open Service Mesh add-on is required, which provides mutual TLS (mTLS).
### Enable on a new cluster
The application routing add-on uses annotations on Kubernetes Ingress objects to
app: aks-helloworld ```
-### Create the Ingress
+### Create the Ingress object
The application routing add-on creates an Ingress class on the cluster named *webapprouting.kubernetes.azure.com*. When you create an Ingress object with this class, it activates the add-on.
The application routing add-on creates an Ingress class on the cluster named *we
app: aks-helloworld ```
-### Create the Ingress
+### Create the Ingress object
The application routing add-on creates an Ingress class on the cluster called *webapprouting.kubernetes.azure.com*. When you create an Ingress object with this class, it activates the add-on. The `kubernetes.azure.com/use-osm-mtls: "true"` annotation on the Ingress object creates an Open Service Mesh (OSM) [IngressBackend][ingress-backend] to configure a backend service to accept Ingress traffic from trusted sources.
aks Azure Hpc Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-hpc-cache.md
Previously updated : 06/22/2023 Last updated : 02/13/2024 #Customer intent: As a cluster operator or developer, I want to learn how to integrate HPC Cache with AKS
Last updated 06/22/2023
## Before you begin
-* This article assumes you have an existing AKS cluster. If you need an AKS cluster, you can create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [Azure portal][aks-quickstart-portal].
- > [!IMPORTANT]
- > Your AKS cluster must be [in a region that supports Azure HPC Cache][hpc-cache-regions].
+* Your AKS cluster must be in a region that [supports Azure HPC Cache][hpc-cache-regions].
+* You need Azure CLI version 2.7 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* Register the `hpc-cache` extension in your Azure subscription. For more information on using HPC Cache with Azure CLI, see the [HPC Cache CLI prerequisites][hpc-cache-cli-prerequisites].
+* Review the [HPC Cache prerequisites][hpc-cache-prereqs]. You need to satisfy the following before you can run an HPC Cache:
-* You need Azure CLI version 2.7 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. For more information on using HPC Cache with Azure CLI, see the [HPC Cache CLI prerequisites][hpc-cache-cli-prerequisites].
-* Install the `hpc-cache` Azure CLI extension using the [`az extension add --upgrade -n hpc-cache][az-extension-add]` command.
-* Review the [HPC Cache prerequisites][hpc-cache-prereqs]. You need to satisfy these prerequisites before you can run an HPC Cache. Important prerequisites include the following:
  * The cache requires a *dedicated* subnet with at least 64 IP addresses available.
  * The subnet must not host other VMs or containers.
  * The subnet must be accessible from the AKS nodes.
+* If you need to run your application as a user without root access, you may need to disable root squashing by using the change owner (chown) command to change directory ownership to another user. The user without root access needs to own a directory to access the file system. For the user to own a directory, the root user must chown a directory to that user, but if the HPC Cache is squashing root, this operation is denied because the root user (UID 0) is being mapped to the anonymous user. For more information about root squashing and client access policies, see [HPC Cache access policies][hpc-cache-access-policies].
+
+### Install the `hpc-cache` Azure CLI extension
++
+To install the hpc-cache extension, run the following command:
+
+```azurecli-interactive
+az extension add --name hpc-cache
+```
+
+Run the following command to update to the latest version of the extension released:
+
+```azurecli-interactive
+az extension update --name hpc-cache
+```
+
+### Register the StorageCache feature flag
+
+Register the *Microsoft.StorageCache* resource provider using the [`az provider register`][az-provider-register] command.
+
+```azurecli
+az provider register --namespace Microsoft.StorageCache --wait
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.StorageCache"
+```
+ ## Create the Azure HPC Cache 1. Get the node resource group using the [`az aks show`][az-aks-show] command with the `--query nodeResourceGroup` query parameter.
Last updated 06/22/2023
MC_myResourceGroup_myAKSCluster_eastus ```
-2. Create the dedicated HPC Cache subnet using the [`az network vnet subnet create`][az-network-vnet-subnet-create] command.
+1. Create a dedicated HPC Cache subnet using the [`az network vnet subnet create`][az-network-vnet-subnet-create] command. First define the environment variables for `RESOURCE_GROUP`, `VNET_NAME`, `VNET_ID`, and `SUBNET_NAME`. Copy the output from the previous step for `RESOURCE_GROUP`, and specify a value for `SUBNET_NAME`.
    ```azurecli
    RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
    VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
    VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
    SUBNET_NAME=MyHpcCacheSubnet
+ ```
+ ```azurecli-interactive
az network vnet subnet create \ --resource-group $RESOURCE_GROUP \ --vnet-name $VNET_NAME \
Last updated 06/22/2023
--address-prefixes 10.0.0.0/26 ```
-3. Register the *Microsoft.StorageCache* resource provider using the [`az provider register`][az-provider-register] command.
+1. Create an HPC Cache in the same node resource group and region. First define the environment variable `SUBNET_ID`.
- ```azurecli
- az provider register --namespace Microsoft.StorageCache --wait
+ ```azurecli-interactive
+ SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name $SUBNET_NAME --query "id" -o tsv)
```
- > [!NOTE]
- > The resource provider registration can take some time to complete.
-
-4. Create an HPC Cache in the same node resource group and region using the [`az hpc-cache create`][az-hpc-cache-create].
-
- > [!NOTE]
- > The HPC Cache takes approximately 20 minutes to be created.
-
- ```azurecli
- RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
- VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
- VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
- SUBNET_NAME=MyHpcCacheSubnet
- SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name $SUBNET_NAME --query "id" -o tsv)
+ Create the HPC Cache using the [`az hpc-cache create`][az-hpc-cache-create] command. The following example creates the HPC Cache in the East US region with a Standard 2G cache type named *MyHpcCache*. Specify a value for **--location**, **--sku-name**, and **--name**.
+ ```azurecli-interactive
az hpc-cache create \ --resource-group $RESOURCE_GROUP \ --cache-size-gb "3072" \
Last updated 06/22/2023
--name MyHpcCache ```
+ > [!NOTE]
+ > Creation of the HPC Cache can take up to 20 minutes.
+ ## Create and configure Azure storage
-> [!IMPORTANT]
-> You need to select a unique storage account name. Replace `uniquestorageaccount` with something unique for you. Storage account names must be *between 3 and 24 characters in length* and *can contain only numbers and lowercase letters*.
+1. Create a storage account using the [`az storage account create`][az-storage-account-create] command. First define the environment variable `STORAGE_ACCOUNT_NAME`.
-1. Create a storage account using the [`az storage account create`][az-storage-account-create] command.
+ > [!IMPORTANT]
+ > You need to select a unique storage account name. Replace `uniquestorageaccount` with your specified name. Storage account names must be *between 3 and 24 characters in length* and *can contain only numbers and lowercase letters*.
- ```azurecli
- RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
- STORAGE_ACCOUNT_NAME=uniquestorageaccount
+ ```azurecli
+ STORAGE_ACCOUNT_NAME=uniquestorageaccount
+ ```
+
+ The following example creates a storage account in the East US region with the Standard_LRS SKU. Specify a value for **--location** and **--sku**.
+ ```azurecli-interactive
az storage account create \
- -n $STORAGE_ACCOUNT_NAME \
- -g $RESOURCE_GROUP \
- -l eastus \
+ --name $STORAGE_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --location eastus \
--sku Standard_LRS ```
-2. Assign the "Storage Blob Data Contributor Role" on your subscription using the [`az role assignment create`][az-role-assignment-create] command.
+1. Assign the **Storage Blob Data Contributor Role** on your subscription using the [`az role assignment create`][az-role-assignment-create] command. First, define the environment variables `STORAGE_ACCOUNT_ID` and `AD_USER`.
- ```azurecli
- STORAGE_ACCOUNT_NAME=uniquestorageaccount
+ ```azurecli-interactive
    STORAGE_ACCOUNT_ID=$(az storage account show --name $STORAGE_ACCOUNT_NAME --query "id" -o tsv)
    AD_USER=$(az ad signed-in-user show --query objectId -o tsv)
- CONTAINER_NAME=mystoragecontainer
+ ```
+ ```azurecli-interactive
az role assignment create --role "Storage Blob Data Contributor" --assignee $AD_USER --scope $STORAGE_ACCOUNT_ID ```
-3. Create the Blob container within the storage account using the [`az storage container create`][az-storage-container-create] command.
+1. Create the Blob container within the storage account using the [`az storage container create`][az-storage-container-create] command. First, define the environment variable `CONTAINER_NAME` and replace the name for the Blob container.
```azurecli
+ CONTAINER_NAME=mystoragecontainer
+ ```
+
+ ```azurecli-interactive
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --auth-mode login ```
-4. Provide permissions to the Azure HPC Cache service account to access your storage account and Blob container using the following [`az role assignment`][az-role-assignment-create] commands.
+1. Provide permissions to the Azure HPC Cache service account to access your storage account and Blob container using the [`az role assignment`][az-role-assignment-create] commands. First, define the environment variables `HPC_CACHE_USER` and `HPC_CACHE_ID`.
    ```azurecli
    HPC_CACHE_USER="StorageCache Resource Provider"
    HPC_CACHE_ID=$(az ad sp list --display-name "${HPC_CACHE_USER}" --query "[].objectId" -o tsv)
+ ```
+ ```azurecli-interactive
    az role assignment create --role "Storage Account Contributor" --assignee $HPC_CACHE_ID --scope $STORAGE_ACCOUNT_ID
    az role assignment create --role "Storage Blob Data Contributor" --assignee $HPC_CACHE_ID --scope $STORAGE_ACCOUNT_ID
    ```
-5. Add the blob container to your HPC Cache as a storage target using the [`az hpc-cache blob-storage-target add`][az-hpc-cache-blob-storage-target-add] command.
-
- ```azurecli
- CONTAINER_NAME=mystoragecontainer
+1. Add the blob container to your HPC Cache as a storage target using the [`az hpc-cache blob-storage-target add`][az-hpc-cache-blob-storage-target-add] command. The following example creates a blob container named *MyStorageTarget* to the HPC Cache *MyHpcCache*. Specify a value for **--name**, **--cache-name**, and **--virtual-namespace-path**.
+ ```azurecli-interactive
az hpc-cache blob-storage-target add \ --resource-group $RESOURCE_GROUP \ --cache-name MyHpcCache \
Last updated 06/22/2023
## Set up client load balancing
-1. Create an Azure Private DNS Zone for the client-facing IP addresses using the [`az network private-dns zone create`][az-network-private-dns-zone-create] command.
+1. Create an Azure Private DNS zone for the client-facing IP addresses using the [`az network private-dns zone create`][az-network-private-dns-zone-create] command. First define the environment variable `PRIVATE_DNS_ZONE` and specify a name for the zone.
```azurecli PRIVATE_DNS_ZONE="myhpccache.local"
+ ```
+ ```azurecli-interactive
az network private-dns zone create \
- -g $RESOURCE_GROUP \
- -n $PRIVATE_DNS_ZONE
+ --resource-group $RESOURCE_GROUP \
+ --name $PRIVATE_DNS_ZONE
```
-2. Create a DNS link between the Azure Private DNS Zone and the VNet using the [`az network private-dns link vnet create`][az-network-private-dns-link-vnet-create] command.
+2. Create a DNS link between the Azure Private DNS Zone and the VNet using the [`az network private-dns link vnet create`][az-network-private-dns-link-vnet-create] command. Replace the value for **--name**.
- ```azurecli
+ ```azurecli-interactive
az network private-dns link vnet create \
- -g $RESOURCE_GROUP \
- -n MyDNSLink \
- -z $PRIVATE_DNS_ZONE \
- -v $VNET_NAME \
- -e true
+ --resource-group $RESOURCE_GROUP \
+ --name MyDNSLink \
+ --zone-name $PRIVATE_DNS_ZONE \
+ --virtual-network $VNET_NAME \
+ --registration-enabled true
```
-3. Create the round-robin DNS name for the client-facing IP addresses using the [`az network private-dns record-set a create`][az-network-private-dns-record-set-a-create] command.
+3. Create the round-robin DNS name for the client-facing IP addresses using the [`az network private-dns record-set a create`][az-network-private-dns-record-set-a-create] command. First, define the environment variables `DNS_NAME`, `HPC_MOUNTS0`, `HPC_MOUNTS1`, and `HPC_MOUNTS2`. Replace the value for the property `DNS_NAME`.
    ```azurecli
    DNS_NAME="server"
    HPC_MOUNTS0=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[0]" -o tsv | tr --delete '\r')
    HPC_MOUNTS1=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[1]" -o tsv | tr --delete '\r')
    HPC_MOUNTS2=$(az hpc-cache show --name "MyHpcCache" --resource-group $RESOURCE_GROUP --query "mountAddresses[2]" -o tsv | tr --delete '\r')
+ ```
+ ```azurecli-interactive
    az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS0

    az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS1

    az network private-dns record-set a add-record -g $RESOURCE_GROUP -z $PRIVATE_DNS_ZONE -n $DNS_NAME -a $HPC_MOUNTS2
Last updated 06/22/2023
## Create a persistent volume
-1. Create a `pv-nfs.yaml` file to define a [persistent volume][persistent-volume].
+1. Create a file named `pv-nfs.yaml` to define a [persistent volume][persistent-volume] and then paste in the following manifest. Replace the values for the property `server` and `path`.
```yaml
Last updated 06/22/2023
path: / ```
-2. Get the credentials for your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+1. Get the credentials for your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
-3. Update the *server* and *path* to the values of your NFS (Network File System) volume you created in the previous step.
-4. Create the persistent volume using the [`kubectl apply`][kubectl-apply] command.
+1. Create the persistent volume using the [`kubectl apply`][kubectl-apply] command.
- ```console
+ ```bash
kubectl apply -f pv-nfs.yaml ```
-5. Verify the status of the persistent volume is **Available** using the [`kubectl describe`][kubectl-describe] command.
+1. Verify the status of the persistent volume is **Available** using the [`kubectl describe`][kubectl-describe] command.
- ```console
+ ```bash
kubectl describe pv pv-nfs ``` ## Create the persistent volume claim
-1. Create a `pvc-nfs.yaml` to define a [persistent volume claim][persistent-volume-claim].
+1. Create a file named `pvc-nfs.yaml` to define a [persistent volume claim][persistent-volume-claim], and then paste the following manifest.
```yaml apiVersion: v1
Last updated 06/22/2023
2. Create the persistent volume claim using the [`kubectl apply`][kubectl-apply] command.
- ```console
+ ```bash
kubectl apply -f pvc-nfs.yaml ``` 3. Verify the status of the persistent volume claim is **Bound** using the [`kubectl describe`][kubectl-describe] command.
- ```console
+ ```bash
kubectl describe pvc pvc-nfs ``` ## Mount the HPC Cache with a pod
-1. Create a `nginx-nfs.yaml` file to define a pod that uses the persistent volume claim.
+1. Create a file named `nginx-nfs.yaml` to define a pod that uses the persistent volume claim, and then paste the following manifest.
```yaml kind: Pod
Last updated 06/22/2023
2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
- ```console
+ ```bash
kubectl apply -f nginx-nfs.yaml ``` 3. Verify the pod is running using the [`kubectl describe`][kubectl-describe] command.
- ```console
+ ```bash
kubectl describe pod nginx-nfs ```
-4. Verify your volume is mounted in the pod using the [`kubectl exec`][kubectl-exec] command to connect to the pod, then `df -h` to check if the volume is mounted.
+4. Verify your volume is mounted in the pod using the [`kubectl exec`][kubectl-exec] command to connect to the pod.
- ```console
+ ```bash
kubectl exec -it nginx-nfs -- sh ```
+ To check if the volume is mounted, run `df` in its human-readable format using the `--human-readable` (`-h` for short) option.
+
+ ```bash
+ df -h
+ ```
+
+ The following example resembles output returned from the command:
+ ```output
- / # df -h
    Filesystem                           Size  Used  Avail  Use%  Mounted on
    ...
    server.myhpccache.local:/myfilepath  8.0E     0   8.0E    0%  /mnt/azure/myfilepath
    ...
    ```
-## Frequently asked questions (FAQ)
-
-### Running applications as non-root
-
-If you need to run an application as a non-root user, you may need to disable root squashing to chown a directory to another user. The non-root user needs to own a directory to access the file system. For the user to own a directory, the root user must chown a directory to that user, but if the HPC Cache is squashing root, this operation is denied because the root user (UID 0) is being mapped to the anonymous user. For more information about root squashing and client access policies, see [HPC Cache access policies][hpc-cache-access-policies].
- ## Next steps * For more information on Azure HPC Cache, see [HPC Cache overview][hpc-cache]. * For more information on using NFS with AKS, see [Manually create and use a Network File System (NFS) Linux Server volume with AKS][aks-nfs].
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+<!-- EXTERNAL LINKS -->
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
+[hpc-cache-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache&regions=all
+<!-- INTERNAL LINKS -->
[aks-nfs]: azure-nfs-volume.md
[hpc-cache]: ../hpc-cache/hpc-cache-overview.md
[hpc-cache-access-policies]: ../hpc-cache/access-policies.md
-[hpc-cache-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache&regions=all
[hpc-cache-cli-prerequisites]: ../hpc-cache/az-cli-prerequisites.md
[hpc-cache-prereqs]: ../hpc-cache/hpc-cache-prerequisites.md
[az-hpc-cache-create]: /cli/azure/hpc-cache#az_hpc_cache_create
[az-aks-show]: /cli/azure/aks#az_aks_show
+[az-feature-show]: /cli/azure/feature#az-feature-show
[install-azure-cli]: /cli/azure/install-azure-cli
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-
-[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
-
-[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
[persistent-volume]: concepts-storage.md#persistent-volumes
[persistent-volume-claim]: concepts-storage.md#persistent-volume-claims
[az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_create
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
[az-provider-register]: /cli/azure/provider#az_provider_register
[az-storage-account-create]: /cli/azure/storage/account#az_storage_account_create
[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
[az-storage-container-create]: /cli/azure/storage/container#az_storage_container_create
[az-hpc-cache-blob-storage-target-add]: /cli/azure/hpc-cache/blob-storage-target#az_hpc_cache_blob_storage_target_add
[az-network-private-dns-zone-create]: /cli/azure/network/private-dns/zone#az_network_private_dns_zone_create
[az-network-private-dns-link-vnet-create]: /cli/azure/network/private-dns/link/vnet#az_network_private_dns_link_vnet_create
-[az-network-private-dns-record-set-a-create]: /cli/azure/network/private-dns/record-set/a#az_network_private_dns_record_set_a_create
--
+[az-network-private-dns-record-set-a-create]: /cli/azure/network/private-dns/record-set/a#az_network_private_dns_record_set_a_create
aks Azure Linux Aks Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-linux-aks-partner-solutions.md
+
+ Title: Azure Linux AKS Container Host partner solutions
+
+description: Discover partner-tested solutions that enable you to build, test, deploy, manage, and monitor your AKS environment using Azure Linux Container Host.
+++ Last updated : 02/16/2024++
+# Azure Linux AKS Container Host partner solutions
+
+Microsoft collaborates with partners to ensure your build, test, deployment, configuration, and monitoring of your applications perform optimally with Azure Linux Container Host on AKS.
+
+Our third party partners featured in this article have introduction guides to help you start using their solutions with your applications running on Azure Linux Container Host on AKS.
+
+| Solutions | Partners |
+|--||
+| DevOps | [Advantech](#advantech) <br> [Hashicorp](#hashicorp) <br> [Akuity](#akuity) <br> [Kong](#kong) |
+| Networking | [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Tetrate](#tetrate) |
+| Observability | [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Dynatrace](#dynatrace) |
+| Security | [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Kong](#kong) <br> [Tetrate](#tetrate) |
+| Storage | [Veeam](#veeam) |
+| Config Management | [Corent](#corent) |
+| Migration | [Catalogic](#catalogic) |
+
+## DevOps
+
+DevOps streamlines the delivery process, improves collaboration across teams, and enhances software quality, ensuring swift, reliable, and continuous deployment of your applications.
+
+### Advantech
++
+| Solution | Categories |
+|-||
+| iFactoryEHS | DevOps |
+
+The right EHS management system can strengthen organizations behind the scenes and enable them to continuously put their best foot forward. iFactoryEHS solution is designed to help manufacturers manage employee health, improve safety, and analyze environmental footprints while ensuring operational continuity.
+
+For more information, see [Advantech & iFactoryEHS](https://page.advantech.com/en/global/solutions/ifactory/ifactory_ehs).
+
+### Hashicorp
++
+| Solution | Categories |
+|-||
+| Terraform | DevOps |
+
+At HashiCorp, we believe infrastructure enables innovation, and we're helping organizations to operate that infrastructure in the cloud.
+
+<details> <summary> See more </summary><br>
+
+Our suite of multicloud infrastructure automation products, built on projects with source code freely available at their core, underpin the most important applications for the largest enterprises in the world. As part of the once-in-a-generation shift to the cloud, organizations of all sizes, from well-known brands to ambitious start-ups, rely on our solutions to provision, secure, connect, and run their business-critical applications so they can deliver essential services, communications tools, and entertainment platforms worldwide.
+
+</details>
+
+For more information, see [HashiCorp solutions](https://hashicorp.com/) and [HashiCorp on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/hashicorp-4665790.terraform-azure-saas?tab=overview).
+
+### Akuity
++
+| Solution | Categories |
+|-||
+| Akuity Platform | DevOps |
+
+The Akuity Platform is a managed solution for Argo CD from the creators of Argo open source project.
+
+<details> <summary> See more </summary><br>
+
+Argo Project is a suite of open source tools for deploying and running applications and workloads on Kubernetes. It extends the Kubernetes APIs and unlocks new and powerful capabilities in application deployment, container orchestration, event automation, progressive delivery, and more.
+
+Akuity is rooted in Argo, extending its capabilities and using the same familiar user interface. The platform solves real-life DevOps use cases using battle-tested patterns packaged into a product with the best possible developer experience.
+
+</details>
+
+For more information, see [Akuity Solutions](https://akuity.io/).
+
+### Kong
++
+| Solution | Categories |
+|-||
+| Kong Konnect | DevOps <br> Security |
+
+Kong Konnect is the unified cloud-native API lifecycle platform to optimize any environment. It reduces operational complexity, promotes federated governance, and provides robust security by seamlessly managing Kong Gateway, Kong Ingress Controller and Kong Mesh with a single management console, delivering API configuration, portal, service catalog, and analytics capabilities.
+
+<details> <summary> See more </summary><br>
+
+A unified Konnect control plane empowers businesses to:
+
+* Define a collection of API Data Plane Nodes that share the same configuration.
+* Provide a single control plane to catalog, connect to, and monitor the status of all control planes and instances and manage group configuration.
+* Browse APIs, reference documentation, test endpoints, and create applications using specific APIs through a customizable and unified API portal for developers.
+* Create a single source of truth by cataloging all services with the Service Hub.
+* Access key statistics, monitor vital signs, and spot patterns in real time to see how your APIs and gateways are performing.
+* Deliver a fully Kubernetes-centric operational lifecycle model through the integration of a DevOps-ready, config-driven API management layer and KIC's unrivaled runtime performance.
+
+Kong's extensive ecosystem of community and enterprise plugins delivers critical functionality, including authentication, authorization, rate limiting, request enforcement, and caching, without increasing the API platform's footprint.
+
+</details>
+
+For more information, see [Kong Solutions](https://konghq.com/) and [Kong on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/konginc1581527938760.kong-enterprise?tab=Overview).
+
+## Networking
+
+Ensure efficient traffic management, enhanced security, and optimal network performance.
+
+### Buoyant
++
+| Solution | Categories |
+|-||
+| Managed Linkerd with Buoyant Cloud | Networking <br> Security <br> Observability |
+
+Managed Linkerd with Buoyant Cloud automatically keeps your Linkerd control plane and data plane up to date with the latest versions, and handles installs, trust anchor rotation, and more.
+
+For more information, see [Buoyant Solutions](https://buoyant.io/cloud) and [Buoyant on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/buoyantinc1658330371653.buoyant?tab=Overview).
+
+### Isovalent
++
+| Solution | Categories |
+|-||
+| Isovalent Enterprise for Cilium | Networking <br> Security <br> Observability |
+
+Isovalent Enterprise for Cilium provides advanced network policy capabilities, including DNS-aware policy, L7 policy, and deny policy, enabling fine-grained control over network traffic for micro-segmentation and improved security.
+
+<details> <summary> See more </summary><br>
+
+Isovalent also provides multi-cluster connectivity via Cluster Mesh, seamless networking and security across multiple clouds, including public cloud providers like AWS, Azure, and Google Cloud Platform, as well as on-premises environments. With free service-to-service communication and advanced load balancing, Isovalent makes it easy to deploy and manage complex microservices architectures.
+
+The Hubble flow observability + User Interface feature provides real-time network traffic flow and policy visualization, as well as a powerful User Interface for easy troubleshooting and network management. Tetragon provides advanced security capabilities such as protocol enforcement, IP and port allowlists, and automatic application-aware policy generation to protect against the most sophisticated threats. Tetragon is built on eBPF, enabling scaling to meet the needs of the most demanding cloud-native environments with ease.
+
+Isovalent provides enterprise-grade support from their experienced team of experts, ensuring that any issues are resolved in a timely and efficient manner. Additionally, professional services help organizations deploy and manage Cilium in production environments.
+
+</details>
+
+For more information, see [Isovalent Solutions](https://isovalent.com/blog/post/isovalent-azure-linux/) and [Isovalent on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/isovalentinc1662143158090.isovalent-cilium-enterprise?tab=overview).
+
+## Observability
+
+Observability provides deep insights into your systems, enabling rapid issue detection and resolution to enhance your application's reliability and performance.
+
+### Dynatrace
++
+| Solution | Categories |
+|---|---|
+| Dynatrace Azure Monitoring | Observability |
+
+Dynatrace provides fully automated, AI-assisted observability across Azure environments. It serves as a single source of truth for your cloud platforms, allowing you to monitor the health of your entire Azure infrastructure.
+
+For more information, see [Dynatrace Solutions](https://www.dynatrace.com/technologies/azure-monitoring/) and [Dynatrace on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview).
+
+## Security
+
+Ensure the integrity and confidentiality of applications and foster trust and compliance across your infrastructure.
+
+### Tetrate
++
+| Solution | Categories |
+|---|---|
+| Tetrate Istio Distro (TID) | Security <br> Networking |
+
+Tetrate Istio Distro (TID) is a simple, safe, enterprise-grade Istio distro, providing the easiest way of installing, operating, and upgrading Istio.
+
+<details> <summary> See more </summary><br>
+
+TID enforces fetching certified versions of Istio and allows installation of only compatible Istio versions. It includes a FIPS-compliant flavor, delivers platform-based Istio configuration validations by integrating validation libraries from multiple sources, uses various cloud provider certificate management systems to create Istio CA certs that are used for signing service mesh managed workloads, and provides multiple additional integration points with cloud providers.
+
+</details>
+
+For more information, see [Tetrate Solutions](https://istio.tetratelabs.io/download/) and [Tetrate on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tetrate1598353087553.tetrateistio?tab=Overview).
+
+## Storage
+
+Storage enables standardized and seamless storage interactions, ensuring high application performance and data consistency.
+
+### Veeam
++
+| Solution | Categories |
+|---|---|
+| Kasten K10 by Veeam | Storage |
+
+Kasten K10 by Veeam is the #1 Kubernetes data management product, providing an easy-to-use, scalable, and secure system for backup and restore, mobility, and disaster recovery (DR).
+
+For more information, see [Veeam Solutions](https://www.kasten.io/partner-microsoft) and [Veeam on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.kasten_k10_by_veeam_byol?tab=overview).
+
+## Config management
+
+Automate and standardize system settings across your environments to enhance efficiency, reduce errors, and ensure system stability and compliance.
+
+### Corent
++
+| Solution | Categories |
+|---|---|
+| Corent MaaS | Config Management |
+
+Corent MaaS provides scanning to identify workloads that can be containerized, and automatically containerizes them on AKS.
+
+For more information, see [Corent Solutions](https://www.corenttech.com/SurPaaS_MaaS_Product.html) and [Corent on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/corent-technology-pvt.surpaas_maas?tab=Overview).
+
+## Migration
+
+Migrate workloads to Azure Linux Container Host on AKS with confidence.
+
+### Catalogic
++
+| Solution | Categories |
+|---|---|
+| CloudCasa | Migration |
+
+CloudCasa is a Kubernetes backup, recovery, and migration solution that is fully compatible with AKS, as well as all other major Kubernetes distributions and managed services.
+
+<details> <summary> See more </summary><br>
+
+Install the CloudCasa agent and let it do all the hard work of protecting and recovering your cluster resources and persistent data from human error, security breaches, and service failures, including providing the business continuity and compliance that your business requires.
+
+From a single dashboard, CloudCasa makes cross-cluster, cross-tenant, cross-region, and cross-cloud recoveries easy. Recovery and migration from backups include recovering an entire cluster along with your VNets, add-ons, load balancers, and more. During recovery, users can migrate to Azure Linux, and migrate storage resources from Azure Disk to Azure Container Storage.
+
+CloudCasa can also centrally manage Azure Backup or Velero backup installations across multiple clusters and cloud providers, with migration of resources to different environments.
+
+</details>
+
+For more information, see [Catalogic Solutions](https://cloudcasa.io/) and [Catalogic on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/catalogicsoftware1625626770507.cloudcasa-aks-app).
+
+## Next steps
+
+[Learn more about Azure Linux Container Host on AKS](../azure-linux/intro-azure-linux.md).
aks Create Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md
The following limitations apply when you create AKS clusters that support multip
* The AKS cluster must use the Standard SKU load balancer to use multiple node pools. This feature isn't supported with Basic SKU load balancers. * The AKS cluster must use Virtual Machine Scale Sets for the nodes. * The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter.
- * For Linux node pools, the length must be between 1-11 characters.
+ * For Linux node pools, the length must be between 1-12 characters.
* For Windows node pools, the length must be between 1-6 characters. * All node pools must reside in the same virtual network. * When you create multiple node pools at cluster creation time, the Kubernetes versions for the node pools must match the version set for the control plane.
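For illustration, here's a minimal Azure CLI sketch that adds a Linux node pool whose name satisfies the limits above; the resource group, cluster, and node pool names are placeholders.

```azurecli-interactive
# Add a Linux node pool; the name must be lowercase alphanumeric, start with a
# lowercase letter, and be at most 12 characters (6 for Windows node pools).
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mylinuxpool1 \
    --node-count 3
```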
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
The AKS Linux Extension is an Azure VM extension that installs and configures mo
- [Node-exporter](https://github.com/prometheus/node_exporter): Collects hardware telemetry from the virtual machine and makes it available using a metrics endpoint. Then, a monitoring tool, such as Prometheus, is able to scrape these metrics. - [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the cluster's API server using Events and NodeConditions.-- [ig](https://inspektor-gadget.io/docs/latest/ig/): An eBPF-powered open-source framework for debugging and observing Linux and Kubernetes systems. It provides a set of tools (or gadgets) designed to gather relevant information, allowing users to identify the cause of performance issues, crashes, or other anomalies. Notably, its independence from Kubernetes enables users to employ it also for debugging control plane issues.
+- [ig](https://go.microsoft.com/fwlink/p/?linkid=2260320): An eBPF-powered open-source framework for debugging and observing Linux and Kubernetes systems. It provides a set of tools (or gadgets) designed to gather relevant information, allowing users to identify the cause of performance issues, crashes, or other anomalies. Notably, its independence from Kubernetes enables users to employ it also for debugging control plane issues.
These tools help provide observability around many node health related problems, such as:
aks Manage Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-azure-rbac.md
az role assignment create --role "Azure Kubernetes Service RBAC Admin" --assigne
> az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee <AAD-ENTITY-ID> --scope $AKS_ID/namespaces/<namespace-name> > ```
+> [!NOTE]
+> In the Azure portal, after you create role assignments scoped to a namespace, they aren't shown when you [list role assignments at a scope][list-role-assignments-at-a-scope-at-portal]. You can find them by using the [`az role assignment list`][az-role-assignment-list] command, or by [listing role assignments for the user or group][list-role-assignments-for-a-user-or-group-at-portal] that you assigned the role to.
+>
+> ```azurecli-interactive
+> az role assignment list --scope $AKS_ID/namespaces/<namespace-name>
+> ```
+ ## Create custom roles definitions The following example custom role definition allows a user to only read deployments and nothing else. For the full list of possible actions, see [Microsoft.ContainerService operations](../role-based-access-control/resource-provider-operations.md#microsoftcontainerservice).
To learn more about AKS authentication, authorization, Kubernetes RBAC, and Azur
<!-- LINKS - Internal --> [aks-support-policies]: support-policies.md [aks-faq]: faq.md
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
-[az-aks-create]: /cli/azure/aks#az_aks_create
-[az-aks-show]: /cli/azure/aks#az_aks_show
-[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[az-group-create]: /cli/azure/group#az_group_create
-[az-aks-update]: /cli/azure/aks#az_aks_update
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
+[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[list-role-assignments-at-a-scope-at-portal]: ../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-at-a-scope
+[list-role-assignments-for-a-user-or-group-at-portal]: ../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group
+[az-role-assignment-create]: /cli/azure/role/assignment#az-role-assignment-create
+[az-role-assignment-list]: /cli/azure/role/assignment#az-role-assignment-list
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[az-group-create]: /cli/azure/group#az-group-create
+[az-aks-update]: /cli/azure/aks#az-aks-update
[managed-aad]: ./managed-azure-ad.md [install-azure-cli]: /cli/azure/install-azure-cli
-[az-role-definition-create]: /cli/azure/role/definition#az_role_definition_create
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get-credentials
+[az-role-definition-create]: /cli/azure/role/definition#az-role-definition-create
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
[kubernetes-rbac]: /azure/aks/concepts-identity#azure-rbac-for-kubernetes-authorization
aks Operator Best Practices Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-identity.md
Title: Best practices for managing identity
+ Title: Best practices for managing authentication and authorization
description: Learn the cluster operator best practices for how to manage authentication and authorization for clusters in Azure Kubernetes Service (AKS) Previously updated : 04/14/2023 Last updated : 02/16/2024 # Best practices for authentication and authorization in Azure Kubernetes Service (AKS)
For more information about cluster operations in AKS, see the following best pra
<!-- INTERNAL LINKS --> [aks-concepts-identity]: concepts-identity.md [azure-ad-integration]: managed-azure-ad.md
-[aks-aad]: azure-ad-integration-cli.md
-[managed-identities]: ../active-directory/managed-identities-azure-resources/overview.md
+[aks-aad]: enable-authentication-microsoft-entra-id.md
[aks-best-practices-scheduler]: operator-best-practices-scheduler.md [aks-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md [aks-best-practices-cluster-isolation]: operator-best-practices-cluster-isolation.md
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://github.com/kubern
| K8s version | Upstream release | AKS preview | AKS GA | End of life | Platform support | |--|-|--||-|--|
-| 1.24 | Apr 2022 | May 2022 | Jul 2022 | Jul 2023 | Until 1.28 GA |
| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Jan 14, 2024 | Until 1.29 GA | | 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA | | 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2025 | Until 1.31 GA |
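To check which of these Kubernetes versions AKS currently offers in a particular region, you can query the service directly. The following Azure CLI sketch uses `eastus` as a placeholder region.

```azurecli-interactive
# List the Kubernetes versions available for AKS clusters in a region
az aks get-versions --location eastus --output table
```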
Note the following important changes before you upgrade to any of the available
|Kubernetes Version | AKS Managed Addons | AKS Components | OS components | Breaking Changes | Notes |--||-||-|| | 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2
-| 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|No breaking changes |None
-| 1.27 | Azure policy 1.1.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards.
-| 1.28 | Azure policy 1.2.1<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.2<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.5<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|No breaking changes|None
-
+| 1.26 | Azure policy 1.3.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.1<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0<br>azurefile-csi-driver 1.26.10<br>| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|azurefile-csi-driver 1.26.10 |None
+| 1.27 | Azure policy 1.3.0<br>azuredisk-csi driver v1.28.5<br>azurefile-csi driver v1.28.7<br>blob-csi v1.22.4<br>csi-attacher v4.3.0<br>csi-resizer v1.8.0<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2<br>Metrics-Server 0.6.3<br>Keda 2.11.2<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>azurefile-csi-driver 1.28.7<br>KMS 0.5.0<br>CSI Secret store driver 1.3.4-1<br>|Cilium 1.13.10-1<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Keda 2.11.2<br>Cilium 1.13.10-1<br>azurefile-csi-driver 1.28.7<br>azuredisk-csi driver v1.28.5<br>blob-csi v1.22.4<br>csi-attacher v4.3.0<br>csi-resizer v1.8.0<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2|Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards.
+| 1.28 | Azure policy 1.3.0<br>azurefile-csi-driver 1.29.2<br>csi-node-driver-registrar v2.9.0<br>csi-livenessprobe 2.11.0<br>azuredisk-csi-linux v1.29.2<br>azuredisk-csi-windows v1.29.2<br>csi-provisioner v3.6.2<br>csi-attacher v4.5.0<br>csi-resizer v1.9.3<br>csi-snapshotter v6.2.2<br>snapshot-controller v6.2.2<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>CSI Secret store driver 1.3.4-1<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.10-1<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br>Tigera-Operator 1.28.13| OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|azurefile-csi-driver 1.29.2<br>csi-resizer v1.9.3<br>csi-attacher v4.4.2<br>csi-provisioner v4.4.2<br>blob-csi v1.23.2<br>azurefile-csi driver v1.29.2<br>azuredisk-csi driver v1.29.2<br>csi-livenessprobe v2.11.0<br>csi-node-driver-registrar v2.9.0|None
+| 1.29 | Azure policy 1.3.0<br>csi-provisioner v4.0.0<br>csi-attacher v4.5.0<br>csi-snapshotter v6.3.3<br>snapshot-controller v6.3.3<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.3<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1<br>CSI Secret store driver 1.3.4-1<br>azurefile-csi-driver 1.29.3<br>|Cilium 1.13.5<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br>Tigera-Operator 1.30.7<br>| OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Tigera-Operator 1.30.7<br>csi-provisioner v4.0.0<br>csi-attacher v4.5.0<br>csi-snapshotter v6.3.3<br>snapshot-controller v6.3.3 |None
## Alias minor version > [!NOTE]
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
In the Developer, Basic, Standard, and Premium tiers of API Management, the publ
* The service subscription is disabled or warned (for example, for nonpayment) and then reinstated. [Learn more about subscription states](/azure/cost-management-billing/manage/subscription-states) * (Developer and Premium tiers) Azure Virtual Network is added to or removed from the service. * (Developer and Premium tiers) API Management service is switched between external and internal VNet deployment mode.
-* (Developer and Premium tiers) API Management service is moved to a different subnet.
+* (Developer and Premium tiers) API Management service is moved to a different subnet, or [migrated](migrate-stv1-to-stv2.md) from the `stv1` to the `stv2` compute platform.
* (Premium tier) [Availability zones](../reliability/migrate-api-mgt.md) are enabled, added, or removed. * (Premium tier) In [multi-regional deployments](api-management-howto-deploy-multi-region.md), the regional IP address changes if a region is vacated and then reinstated.
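If you want to confirm the current public IP address of your instance before or after one of these events, you can inspect the service resource. This is a minimal sketch; the instance and resource group names are placeholders, and the IP addresses appear in the command output.

```azurecli-interactive
# Show the API Management instance; review the public IP addresses in the output
az apim show --name my-apim-instance --resource-group myResourceGroup
```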
api-management Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quickstart-arm-template.md
description: Use this quickstart to create an Azure API Management instance in t
-tags: azure-resource-manager
api-management Quota By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/quota-by-key-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions aren't allowed. | Either `calls`, `bandwidth`, or both together must be specified. | N/A | | counter-key | The key to use for the `quota policy`. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed. | Yes | N/A | | increment-condition | The Boolean expression specifying if the request should be counted towards the quota (`true`). Policy expressions are allowed. | No | N/A |
-| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to `first-period-start`. When `renewal-period` is set to `0`, the period is set to infinite. Policy expressions aren't allowed. | Yes | N/A |
+| renewal-period | The length in seconds of the fixed window after which the quota resets. The start of each period is calculated relative to `first-period-start`. Minimum period: 300 seconds. When `renewal-period` is set to `0`, the period is set to infinite. Policy expressions aren't allowed. | Yes | N/A |
| first-period-start | The starting date and time for quota renewal periods, in the following format: `yyyy-MM-ddTHH:mm:ssZ` as specified by the ISO 8601 standard. Policy expressions aren't allowed. | No | `0001-01-01T00:00:00Z` |
For more information and examples of this policy, see [Advanced request throttli
* [API Management access restriction policies](api-management-access-restriction-policies.md)
api-management Set Backend Service Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-backend-service-policy.md
# Set backend service
-Use the `set-backend-service` policy to redirect an incoming request to a different backend than the one specified in the API settings for that operation. This policy changes the backend service base URL of the incoming request to the one specified in the policy.
+Use the `set-backend-service` policy to redirect an incoming request to a different backend than the one specified in the API settings for that operation. This policy changes the backend service base URL of the incoming request to a URL or [backend](backends.md) specified in the policy.
> [!NOTE] > Backend entities can be managed via [Azure portal](how-to-configure-service-fabric-backend.md), management [API](/rest/api/apimanagement), and [PowerShell](https://www.powershellgallery.com/packages?q=apimanagement).
Use the `set-backend-service` policy to redirect an incoming request to a differ
| Attribute | Description | Required | Default | | -- | | -- | - | |base-url|New backend service base URL. Policy expressions are allowed.|One of `base-url` or `backend-id` must be present.|N/A|
-|backend-id|Identifier (name) of the backend to route primary or secondary replica of a partition. Policy expressions are allowed. |One of `base-url` or `backend-id` must be present.|N/A|
+|backend-id|Identifier (name) of the [backend](backends.md) to route primary or secondary replica of a partition. Policy expressions are allowed. |One of `base-url` or `backend-id` must be present.|N/A|
|sf-resolve-condition|Only applicable when the backend is a Service Fabric service. Condition identifying if the call to Service Fabric backend has to be repeated with new resolution. Policy expressions are allowed.|No|N/A| |sf-service-instance-name|Only applicable when the backend is a Service Fabric service. Allows changing service instances at runtime. Policy expressions are allowed. |No|N/A| |sf-partition-key|Only applicable when the backend is a Service Fabric service. Specifies the partition key of a Service Fabric service. Policy expressions are allowed. |No|N/A|
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
Title: App Service Environment networking
description: App Service Environment networking details Previously updated : 10/02/2023 Last updated : 01/31/2024
If you use a smaller subnet, be aware of the following limitations:
- For any App Service plan OS/SKU combination used in your App Service Environment like I1v2 Windows, one standby instance is created for every 20 active instances. The standby instances also require IP addresses. - When scaling App Service plans in the App Service Environment up/down, the amount of IP addresses used by the App Service plan is temporarily doubled while the scale operation completes. The new instances need to be fully operational before the existing instances are deprovisioned. - Platform upgrades need free IP addresses to ensure upgrades can happen without interruptions to outbound traffic.-- After scale up, down, or in operations complete, there might be a short period of time before IP addresses are released. In rare cases, this can be up to 12 hours.
+- After scale up, down, or in operations complete, there might be a short period of time before IP addresses are released. In rare cases, this operation can be up to 12 hours.
- If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the App Service Environment. Another possibility is that you can experience increased latency during intensive traffic load, if Microsoft isn't able to scale the supporting infrastructure. >[!NOTE]
You can find details in the **IP Addresses** portion of the portal, as shown in
As you scale your App Service plans in your App Service Environment, you use more addresses out of your subnet. The number of addresses you use varies, based on the number of App Service plan instances you have, and how much traffic there is. Apps in the App Service Environment don't have dedicated addresses in the subnet. The specific addresses used by an app in the subnet will change over time.
+### Bring your own inbound address
+
+You can bring your own inbound address to your App Service Environment. If you create an App Service Environment with an internal VIP, you can specify a static IP address in the subnet. If you create an App Service Environment with an external VIP, you can use your own Azure Public IP address by specifying the resource ID of the Public IP address. The following are limitations for bringing your own inbound address:
+
+- For App Service Environment with external VIP, the Azure Public IP address resource must be in the same subscription as the App Service Environment.
+- The inbound address can't be changed after the App Service Environment is created.
+ ## Ports and network restrictions For your app to receive traffic, ensure that inbound network security group (NSG) rules allow the App Service Environment subnet to receive traffic from the required ports. In addition to any ports, you'd like to receive traffic on, you should ensure that Azure Load Balancer is able to connect to the subnet on port 80. This port is used for health checks of the internal virtual machine. You can still control port 80 traffic from the virtual network to your subnet.
For more information about Private Endpoint and Web App, see [Azure Web App Priv
## DNS
-The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. The examples use the domain suffix `appserviceenvironment.net` from Azure Public Cloud. If you're using other clouds like Azure Government, you need to use their respective domain suffix. Note that for App Service Environment domains, the site name will be truncated at 40 characters because of DNS limits. If you have a slot, the slot name will be truncated at 19 characters.
+The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. The examples use the domain suffix `appserviceenvironment.net` from Azure Public Cloud. If you're using other clouds like Azure Government, you need to use their respective domain suffix. For App Service Environment domains, the site name is truncated at 40 characters because of DNS limits. If you have a slot, the slot name is truncated at 19 characters.
### DNS configuration to your App Service Environment
In addition to setting up DNS, you also need to enable it in the [App Service En
### DNS configuration from your App Service Environment
-The apps in your App Service Environment uses the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there's no response from the primary DNS server.
+The apps in your App Service Environment use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there's no response from the primary DNS server.
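As a sketch of the per-app override described above, you can set both app settings with the Azure CLI; the app name, resource group, and DNS server addresses are placeholders.

```azurecli-interactive
# Override DNS for a single app; the secondary server is used only when the primary doesn't respond
az webapp config appsettings set \
    --resource-group myResourceGroup \
    --name myApp \
    --settings WEBSITE_DNS_SERVER=10.0.0.4 WEBSITE_DNS_ALT_SERVER=10.0.0.5
```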
## More resources
application-gateway Create Vmss Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-cli.md
Title: Azure CLI Script Sample - Manage web traffic | Microsoft Docs
description: Azure CLI Script Sample - Manage web traffic with an application gateway and a virtual machine scale set.
-tags: azure-resource-manager
vm-windows
application-gateway Create Vmss Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-powershell.md
Title: Azure PowerShell Script Sample - Manage web traffic | Microsoft Docs
description: Azure PowerShell Script Sample - Manage web traffic with an application gateway and a virtual machine scale set.
-tags: azure-resource-manager
vm-windows
application-gateway Create Vmss Waf Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-waf-cli.md
Title: Azure CLI Script Sample - Restrict web traffic | Microsoft Docs
description: Azure CLI Script Sample - Create an application gateway with a web application firewall and a virtual machine scale set that uses OWASP rules to restrict traffic.
-tags: azure-resource-manager
vm-windows
application-gateway Create Vmss Waf Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/scripts/create-vmss-waf-powershell.md
Title: Azure PowerShell Script Sample - Restrict web traffic | Microsoft Docs
description: Azure PowerShell Script Sample - Create an application gateway with a web application firewall and a virtual machine scale set that uses OWASP rules to restrict traffic.
-tags: azure-resource-manager
vm-windows
automation Runtime Environment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runtime-environment-overview.md
Title: Runtime environment in Azure Automation
description: This article provides an overview on Runtime environment in Azure Automation. Previously updated : 01/24/2024 Last updated : 02/16/2024
You can't edit these Runtime environments. However, any changes that are made in
- Existing runbooks that are automatically moved from old experience to Runtime environment experience would be able to execute as both cloud and hybrid job. - When the runbook is [updated](manage-runtime-environment.md) and linked to a different Runtime environment, it can be executed as cloud job only. - PowerShell Workflow, Graphical PowerShell, and Graphical PowerShell Workflow runbooks only work with System-generated PowerShell-5.1 Runtime environment.
+- Runbooks created in the Runtime environment experience with Runtime version PowerShell 7.2 are shown as PowerShell 5.1 runbooks in the old experience.
- RBAC permissions cannot be assigned to Runtime environment. - Runtime environment can't be configured through Azure Automation extension for Visual Studio Code. - Deleted Runtime environments cannot be recovered.
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
App Configuration offers the option to bulk [import](./howto-import-export-data.
If your application is deployed in multiple regions, we recommend that you [enable geo-replication](./howto-geo-replication.md) of your App Configuration store. You can let your application primarily connect to the replica matching the region where instances of your application are deployed and allow them to fail over to replicas in other regions. This setup minimizes the latency between your application and App Configuration, spreads the load as each replica has separate throttling quotas, and enhances your application's resiliency against transient and regional outages. See [Resiliency and Disaster Recovery](./concept-disaster-recovery.md) for more information.
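For example, the following Azure CLI sketch creates a replica in a second region, assuming the `az appconfig replica` command group is available in your CLI version; the store, replica, and region names are placeholders.

```azurecli-interactive
# Create a replica of an App Configuration store in another region
az appconfig replica create \
    --store-name myAppConfigStore \
    --name myreplicawestus \
    --location westus
```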
+## Building applications with high resiliency
+
+Applications often rely on configuration to start, making Azure App Configuration's high availability critical. For improved resiliency, applications should leverage App Configuration's reliability features and consider taking the following measures based on your specific requirements.
+
+* **Provision in regions with Azure availability zone support.** Availability zones allow applications to be resilient to data center outages. App Configuration offers zone redundancy for all customers without any extra charges. Creating your App Configuration store in regions with support for availability zones is recommended. You can find [a list of regions](./faq.yml#how-does-app-configuration-ensure-high-data-availability) where App Configuration has enabled availability zone support.
+* **[Enable geo-replication](./howto-geo-replication.md) and allow your application to failover among replicas.** This setup gives you a model for scalability and enhanced resiliency against transient failures and regional outages. See [Resiliency and Disaster Recovery](./concept-disaster-recovery.md) for more information.
+* **Deploy configuration with [safe deployment practices](/azure/well-architected/operational-excellence/safe-deployments).** Incorrect or accidental configuration changes can frequently cause application downtime. You should avoid making configuration changes that impact the production directly from, for example, the Azure portal whenever possible. In safe deployment practices (SDP), you use a progressive exposure deployment model to minimize the potential blast radius of deployment-caused issues. If you adopt SDP, you can build and test a [configuration snapshot](./howto-create-snapshots.md) before deploying it to production. During the deployment, you can update instances of your application to progressively pick up the new snapshot. If issues are detected, you can roll back the change by redeploying the last-known-good (LKG) snapshot. The snapshot is immutable, guaranteeing consistency throughout all deployments. You can utilize snapshots along with dynamic configuration. Use a snapshot for your foundational configuration and dynamic configuration for emergency configuration overrides and feature flags.
+* **Include configuration with your application.** If you want to ensure that your application always has access to a copy of the configuration, or if you prefer to avoid a runtime dependency on App Configuration altogether, you can pull the configuration from App Configuration during build or release time and include it with your application. To learn more, check out examples of integrating App Configuration with your [CI/CD pipeline](./integrate-ci-cd-pipeline.md) or [Kubernetes deployment](./integrate-kubernetes-deployment-helm.md).
+* **Use App Configuration providers.** Applications play a critical part in achieving high resiliency because they can account for issues arising during their runtime, such as networking problems, and respond to failures more quickly. The App Configuration providers offer a range of built-in resiliency features, including automatic replica discovery, replica failover, startup retries with customizable timeouts, configuration caching, and adaptive strategies for reliable configuration refresh. It's highly recommended that you use App Configuration providers to benefit from these features. If that's not an option, you should consider implementing similar features in your custom solution to achieve the highest level of resiliency.
+ ## Client applications in App Configuration When you use App Configuration in client applications, ensure that you consider two major factors. First, if you're using the connection string in a client application, you risk exposing the access key of your App Configuration store to the public. Second, the typical scale of a client application might cause excessive requests to your App Configuration store, which can result in overage charges or throttling. For more information about throttling, see the [FAQ](./faq.yml#are-there-any-limits-on-the-number-of-requests-made-to-app-configuration).
azure-app-configuration Use Feature Flags Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-dotnet-core.md
In this tutorial, you will learn how to:
## Set up feature management
-To access the .NET feature manager, your app must have references to the `Microsoft.FeatureManagement.AspNetCore` NuGet package.
+To access the .NET feature manager, your app must have references to the `Microsoft.Azure.AppConfiguration.AspNetCore` and `Microsoft.FeatureManagement.AspNetCore` NuGet packages.
The .NET feature manager is configured from the framework's native configuration system. As a result, you can define your application's feature flag settings by using any configuration source that .NET supports, including the local `appsettings.json` file or environment variables.
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
-## February 12, 2024
+## February 13, 2024
-**Image tag**:`v1.27.0_2023-02-13`
+**Image tag**:`v1.27.0_2024-02-13`
-For complete release version information, review [Version log](version-log.md#february-12-2024).
+For complete release version information, review [Version log](version-log.md#february-13-2024).
## December 12, 2023
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-enabled data services.
-## February 12, 2024
+## February 13, 2024
|Component|Value| |--|--|
-|Container images tag |`v1.27.0_2023-02-13`|
+|Container images tag |`v1.27.0_2024-02-13`|
|**CRD names and version:**| | |`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2| |`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5|
azure-arc Network Requirements Consolidated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md
Title: Azure Arc network requirements description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols. Previously updated : 01/10/2024 Last updated : 02/15/2024
Connectivity to the Arc Kubernetes-based endpoints is required for all Kubernete
- Azure Arc-enabled App services - Azure Arc-enabled Machine Learning - Azure Arc-enabled data services (direct connectivity mode only)
+- Azure Arc resource bridge
[!INCLUDE [network-requirements](kubernetes/includes/network-requirements.md)]
For more information, see [Connectivity modes and requirements](data/connectivit
Connectivity to Arc-enabled server endpoints is required for: - SQL Server enabled by Azure Arc-- Azure Arc-enabled VMware vSphere (preview) <sup>*</sup>-- Azure Arc-enabled System Center Virtual Machine Manager (preview) <sup>*</sup>-- Azure Arc-enabled Azure Stack (HCI) (preview) <sup>*</sup>
+- Azure Arc-enabled VMware vSphere <sup>*</sup>
+- Azure Arc-enabled System Center Virtual Machine Manager <sup>*</sup>
+- Azure Arc-enabled Azure Stack (HCI) <sup>*</sup>
<sup>*</sup>Only required for guest management enabled.
For more information, see [Connected Machine agent network requirements](servers
## Azure Arc resource bridge
-This section describes additional networking requirements specific to deploying Azure Arc resource bridge in your enterprise. These requirements also apply to Azure Arc-enabled VMware vSphere (preview) and Azure Arc-enabled System Center Virtual Machine Manager (preview).
+This section describes additional networking requirements specific to deploying Azure Arc resource bridge in your enterprise. These requirements also apply to Azure Arc-enabled VMware vSphere and Azure Arc-enabled System Center Virtual Machine Manager.
[!INCLUDE [network-requirements](resource-bridge/includes/network-requirements.md)]
Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) also requires:
| | | | | | | SCVMM management Server | 443 | URL of the SCVMM management server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. |
-For more information, see [Overview of Arc-enabled System Center Virtual Machine Manager (preview)](system-center-virtual-machine-manager/overview.md).
+For more information, see [Overview of Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md).
## Azure Arc-enabled VMware vSphere
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md
Title: Azure Arc resource bridge network requirements description: Learn about network requirements for Azure Arc resource bridge including URLs that must be allowlisted. Previously updated : 11/03/2023 Last updated : 02/15/2024 # Azure Arc resource bridge network requirements
Arc resource bridge communicates outbound securely to Azure Arc over TCP port 44
[!INCLUDE [network-requirements](includes/network-requirements.md)]
-## Additional network requirements
+In addition, Arc resource bridge requires connectivity to the Arc-enabled Kubernetes endpoints shown here.
-In addition, Arc resource bridge requires connectivity to the [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md?tabs=azure-cloud).
> [!NOTE] > The URLs listed here are required for Arc resource bridge only. Other Arc products (such as Arc-enabled VMware vSphere) may have additional required URLs. For details, see [Azure Arc network requirements](../network-requirements-consolidated.md).
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
## Version 1.38 - February 2024
-Download for [Windows](https://download.microsoft.com/download/e/#installing-a-specific-version-of-the-agent)
+Download for [Windows](https://download.microsoft.com/download/0/9/8/0981cd23-37aa-4cb3-8965-368586ab9fd8/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+
+### Known issues
+
+Windows machines that try to upgrade to version 1.38 via Microsoft Update and encounter an error might fail to roll back to the previously installed version. As a result, the machine will appear "Disconnected" and won't be manageable from Azure. The update has been removed from the Microsoft Update Catalog while Microsoft investigates this behavior. Manual installations of the agent on new and existing machines aren't affected.
+
+If your machine was affected by this issue, you can repair the agent by downloading and installing the agent again. The agent will automatically discover the existing configuration and restore connectivity with Azure. You don't need to run `azcmagent connect`.
### New features
azure-cache-for-redis Create Manage Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/scripts/create-manage-cache.md
Title: Create, query, and delete an Azure Cache for Redis - Azure CLI description: This Azure CLI code sample shows how to create an Azure Cache for Redis instance using the command az redis create. It then gets details of an Azure Cache for Redis instance, including provisioning status, the hostname, ports, and keys for an Azure Cache for Redis instance. Finally, it deletes the cache.
-tags: azure-service-management
ms.devlang: azurecli
azure-cache-for-redis Create Manage Premium Cache Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/scripts/create-manage-premium-cache-cluster.md
Title: Create, query, and delete a Premium Azure Cache for Redis with clustering
description: This Azure CLI code sample shows how to create a 6 GB Premium tier Azure Cache for Redis with clustering enabled and two shards. It then gets details of an Azure Cache for Redis instance, including provisioning status, the hostname, ports, and keys for an Azure Cache for Redis instance. Finally, it deletes the cache.
-tags: azure-service-management
ms.devlang: azurecli
azure-functions Configure Networking How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-networking-how-to.md
Title: How to configure Azure Functions with a virtual network
-description: Article that shows you how to perform certain virtual networking tasks for Azure Functions.
+ Title: How to use a secured storage account with Azure Functions
+description: Article that shows you how to use a secured storage account in a virtual network as the default storage account for a function app in Azure Functions.
Previously updated : 06/23/2023 Last updated : 01/31/2024
-# How to configure Azure Functions with a virtual network
+# How to use a secured storage account with Azure Functions
-This article shows you how to perform tasks related to configuring your function app to connect to and run on a virtual network. For an in-depth tutorial on how to secure your storage account, refer to the [Connect to a Virtual Network tutorial](functions-create-vnet.md). To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md).
+This article shows you how to connect your function app to a secured storage account. For an in-depth tutorial on how to create your function app with inbound and outbound access restrictions, refer to the [Integrate with a virtual network](functions-create-vnet.md) tutorial. To learn more about Azure Functions and networking, see [Azure Functions networking options](functions-networking-options.md).
## Restrict your storage account to a virtual network
-When you create a function app, you either create a new storage account or link to an existing storage account. During function app creation, you can secure a new storage account behind a virtual network and integrate the function app with this network. At this time, you can't secure an existing storage account being used by your function app in the same way.
+When you create a function app, you either create a new storage account or link to an existing one. Currently, only [ARM template and Bicep deployments](functions-infrastructure-as-code.md#secured-deployments) support function app creation with an existing secured storage account.
> [!NOTE] > Securing your storage account is supported for all tiers in both Dedicated (App Service) and Elastic Premium plans. Consumption plans currently don't support virtual networks. For a list of all restrictions on storage accounts, see [Storage account requirements](storage-considerations.md#storage-account-requirements).
-### During function app creation
+## Secure storage during function app creation
-You can create a new function app along with a new storage account secured behind a virtual network. The following links show you how to create these resources by using either the Azure portal or by using deployment templates:
+You can create a function app along with a new storage account secured behind a virtual network that is accessible via private endpoints. The following links show you how to create these resources by using either the Azure portal or by using deployment templates:
-# [Azure portal](#tab/portal)
+### [Azure portal](#tab/portal)
Complete the following tutorial to create a new function app and a secured storage account: [Use private endpoints to integrate Azure Functions with a virtual network](functions-create-vnet.md).
-# [Deployment templates](#tab/templates)
+### [Deployment templates](#tab/templates)
Use Bicep files or Azure Resource Manager (ARM) templates to create a secured function app and storage account resources. When you create a secured storage account in an automated deployment, you must also specifically set the `WEBSITE_CONTENTSHARE` setting and create the file share as part of your deployment. For more information, including links to example deployments, see [Secured deployments](functions-infrastructure-as-code.md#secured-deployments).
-### Existing function app
+## Secure storage for an existing function app
-When you have an existing function app, you can't directly secure the storage account currently being used by the app. You must instead swap-out the existing storage account for a new, secured storage account.
+When you have an existing function app, you can't directly secure the storage account currently being used by the app. You must instead swap out the existing storage account for a new, secured storage account.
-To secure the storage for an existing function app:
+### 1. Enable virtual network integration
+
+As a prerequisite, you need to enable virtual network integration for your function app.
1. Choose a function app with a storage account that doesn't have service endpoints or private endpoints enabled. 1. [Enable virtual network integration](./functions-networking-options.md#enable-virtual-network-integration) for your function app.
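As a sketch of the integration step above, assuming the `az functionapp vnet-integration` command group is available in your CLI version; the app, resource group, virtual network, and subnet names are placeholders.

```azurecli-interactive
# Integrate the function app with a subnet in your virtual network
az functionapp vnet-integration add \
    --name myFunctionApp \
    --resource-group myResourceGroup \
    --vnet myVNet \
    --subnet myFunctionSubnet
```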
-1. Create or configure a second storage account. This is going to be the secured storage account that your function app uses instead.
+### 2. Create a secured storage account
+
+Set up a secured storage account for your function app:
+
+1. [Create a second storage account](../storage/common/storage-account-create.md). This is going to be the secured storage account that your function app will use instead. You can also use an existing storage account not already being used by Functions.
+
+1. Copy the connection string for this storage account. You need this string for later.
-1. [Create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) in the new storage account.
+1. [Create a file share](../storage/files/storage-how-to-create-file-share.md#create-a-file-share) in the new storage account. Try to use the same name as the file share in the existing storage account. Otherwise, you'll need to copy the name of the new file share to configure an app setting later.
1. Secure the new storage account in one of the following ways:
- * [Create a private endpoint](../storage/common/storage-private-endpoints.md#creating-a-private-endpoint). When using private endpoint connections, the storage account must have private endpoints for the `file` and `blob` subresources. For Durable Functions, you must also make `queue` and `table` subresources accessible through private endpoints.
+ * [Create a private endpoint](../storage/common/storage-private-endpoints.md#creating-a-private-endpoint). When you set up private endpoint connections, create private endpoints for the `file` and `blob` subresources. For Durable Functions, you must also make `queue` and `table` subresources accessible through private endpoints. If you're using a custom or on-premises DNS server, make sure you [configure your DNS server](../storage/common/storage-private-endpoints.md#dns-changes-for-private-endpoints) to resolve to the new private endpoints.
+
+ * [Restrict traffic to specific subnets](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network). Ensure that one of the allowed subnets is the one your function app is network integrated with. Double check that the subnet has a service endpoint to Microsoft.Storage.
+
+1. Copy the file and blob content from the current storage account used by the function app to the newly secured storage account and file share. [AzCopy](../storage/common/storage-use-azcopy-blobs-copy.md) and [Azure Storage Explorer](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-tips-and-tricks-how-to-move-azure-storage-blobs-between/ba-p/3545304) are common methods. If you use Azure Storage Explorer, you may need to allow your client IP address into your storage account's firewall.
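The following Azure CLI sketch covers the account, connection string, and file share steps above; the account, resource group, region, and share names are placeholders, and securing the account still happens as a separate step.

```azurecli-interactive
# 1. Create the storage account that will become the secured storage account
az storage account create \
    --name mysecuredstorage \
    --resource-group myResourceGroup \
    --location eastus \
    --sku Standard_LRS

# 2. Copy its connection string for the app settings you configure later
az storage account show-connection-string \
    --name mysecuredstorage \
    --resource-group myResourceGroup

# 3. Create the file share, ideally reusing the existing file share name
az storage share create \
    --account-name mysecuredstorage \
    --name myfunctionappshare
```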
+
+Now you're ready to configure your function app to communicate with the newly secured storage account.
+
+### 3. Enable application and configuration routing
+
+You should now route your function app's traffic to go through the virtual network.
+
+1. Enable [application routing](../app-service/overview-vnet-integration.md#application-routing) to route your app's traffic into the virtual network.
+
+ * Navigate to the **Networking** tab of your function app. Under **Outbound traffic configuration**, select the subnet associated with your virtual network integration.
+
+ * In the new page, check the box for **Outbound internet traffic** under **Application routing**.
+
+1. Enable [content share routing](../app-service/overview-vnet-integration.md#content-share) to have your function app communicate with your new storage account through its virtual network.
- * [Enable a service endpoint from the virtual network](../storage/common/storage-network-security.md#grant-access-from-a-virtual-network). When using service endpoints, enable the subnet dedicated to your function apps for storage accounts on the firewall.
+ * In the same page, check the box for **Content storage** under **Configuration routing**.
-1. Copy the file and blob content from the current storage account used by the function app to the newly secured storage account and file share.
+### 4. Update application settings
-1. Copy the connection string for this storage account.
+Finally, you need to update your application settings to point at the new secure storage account.
-1. Update the **Application Settings** under **Configuration** for the function app to the following:
+1. Update the **Application Settings** under the **Configuration** tab of your function app to the following values (a CLI sketch for applying them follows these steps):
   | Setting name | Value | Comment |
   |-|-|-|
- | `AzureWebJobsStorage`| Storage connection string | This is the connection string for a secured storage account. |
- | `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | Storage connection string | This is the connection string for a secured storage account. This setting is required for Consumption and Elastic Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
- | `WEBSITE_CONTENTSHARE` | File share | The name of the file share created in the secured storage account where the project deployment files reside. This setting is required for Consumption and Elastic Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions. |
- | `WEBSITE_CONTENTOVERVNET` | 1 | A value of 1 enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. |
+ | [`AzureWebJobsStorage`](./functions-app-settings.md#azurewebjobsstorage)<br>[`WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`](./functions-app-settings.md#website_contentazurefileconnectionstring) | Storage connection string | Both settings contain the connection string for the new secured storage account, which you saved earlier. |
+ | [`WEBSITE_CONTENTSHARE`](./functions-app-settings.md#website_contentshare) | File share | The name of the file share created in the secured storage account where the project deployment files reside. |
1. Select **Save** to save the application settings. Changing app settings causes the app to restart.
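For a scripted alternative to the portal steps above, here's a minimal Azure CLI sketch that applies the same settings. The app name, resource group, connection string, and share name are placeholders.

```Bash
# Placeholder values; replace with your function app, resource group, and secured storage details.
APP_NAME="<function-app-name>"
RESOURCE_GROUP="<resource-group>"
SECURED_CONNECTION_STRING="<connection-string-of-secured-storage-account>"
SHARE_NAME="<file-share-name>"

# Point the function app at the secured storage account and file share.
# As with the portal, changing these settings restarts the app.
az functionapp config appsettings set \
    --name "$APP_NAME" \
    --resource-group "$RESOURCE_GROUP" \
    --settings "AzureWebJobsStorage=$SECURED_CONNECTION_STRING" \
               "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING=$SECURED_CONNECTION_STRING" \
               "WEBSITE_CONTENTSHARE=$SHARE_NAME"
```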
azure-functions Azfd0011 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0011.md
+
+ Title: "AZFD0011: The FUNCTIONS_WORKER_RUNTIME setting is required"
+
+description: "Learn how to troubleshoot the event 'AZFD0011: The FUNCTIONS_WORKER_RUNTIME setting is required' in Azure Functions."
+ Last updated : 01/24/2024+++
+# AZFD0011: The FUNCTIONS_WORKER_RUNTIME setting is required
+
+This event occurs when a function app doesn't have the `FUNCTIONS_WORKER_RUNTIME` application setting, which is required.
+
+| | Value |
+|-|-|
+| **Event ID** |AZFD0011|
+| **Severity** |Warning|
+
+## Event description
+
+The `FUNCTIONS_WORKER_RUNTIME` application setting indicates the language or language stack on which the function app runs, such as `python`. For more information on valid values, see the [`FUNCTIONS_WORKER_RUNTIME`](../../functions-app-settings.md#functions_worker_runtime) reference.
+
+While not currently required, you should always specify `FUNCTIONS_WORKER_RUNTIME` for your function apps. When you don't have this setting and the Functions host can't determine the correct language or language stack, you might see exceptions or unexpected behaviors.
+
+Because `FUNCTIONS_WORKER_RUNTIME` is likely to become a required setting, you should explicitly set it in all of your existing function apps and deployment scripts to prevent any downtime in the future.
+
+## How to resolve the event
+
+In a production application, add `FUNCTIONS_WORKER_RUNTIME` to the [application settings](../../functions-how-to-use-azure-function-app-settings.md#settings).
+
+When running locally in Azure Functions Core Tools, also add `FUNCTIONS_WORKER_RUNTIME` to the [local.settings.json file](../../functions-develop-local.md#local-settings-file).
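As an illustration, the setting can be applied in both places from the command line. This sketch assumes a .NET isolated worker app; `dotnet-isolated` is only an example value, and the app and resource group names are placeholders.

```Bash
# Set FUNCTIONS_WORKER_RUNTIME on the deployed function app (placeholder names).
az functionapp config appsettings set \
    --name "<function-app-name>" \
    --resource-group "<resource-group>" \
    --settings "FUNCTIONS_WORKER_RUNTIME=dotnet-isolated"

# Add the same setting to local.settings.json for local runs with Core Tools.
func settings add FUNCTIONS_WORKER_RUNTIME dotnet-isolated
```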
+
+## When to suppress the event
+
+This event shouldn't be suppressed.
azure-functions Functions Dotnet Dependency Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-dependency-injection.md
This example uses the [Microsoft.Extensions.Http](https://www.nuget.org/packages
A series of registration steps run before and after the runtime processes the startup class. Therefore, keep in mind the following items: -- *The startup class is meant for only setup and registration.* Avoid using services registered at startup during the startup process. For instance, don't try to log a message in a logger that is being registered during startup. This point of the registration process is too early for your services to be available for use. After the `Configure` method is run, the Functions runtime continues to register additional dependencies, which can affect how your services operate.
+- *The startup class is meant for only setup and registration.* Avoid using services registered at startup during the startup process. For instance, don't try to log a message in a logger that is being registered during startup. This point of the registration process is too early for your services to be available for use. After the `Configure` method is run, the Functions runtime continues to register other dependencies, which can affect how your services operate.
-- *The dependency injection container only holds explicitly registered types*. The only services available as injectable types are what are setup in the `Configure` method. As a result, Functions-specific types like `BindingContext` and `ExecutionContext` aren't available during setup or as injectable types.
+- *The dependency injection container only holds explicitly registered types*. The only services available as injectable types are what are set up in the `Configure` method. As a result, Functions-specific types like `BindingContext` and `ExecutionContext` aren't available during setup or as injectable types.
+
+- *Configuring ASP.NET authentication isn't supported*. The Functions host configures ASP.NET authentication services to properly expose APIs for core lifecycle operations. Other configurations in a custom `Startup` class can override this configuration, causing unintended consequences. For example, calling `builder.Services.AddAuthentication()` can break authentication between the portal and the host, leading to messages such as [Azure Functions runtime is unreachable](./functions-recover-storage-account.md#aspnet-authentication-overrides).
## Use injected dependencies
-Constructor injection is used to make your dependencies available in a function. The use of constructor injection requires that you do not use static classes for injected services or for your function classes.
+Constructor injection is used to make your dependencies available in a function. The use of constructor injection requires that you don't use static classes for injected services or for your function classes.
The following sample demonstrates how the `IMyService` and `HttpClient` dependencies are injected into an HTTP-triggered function.
Application Insights is added by Azure Functions automatically.
### ILogger\<T\> and ILoggerFactory
-The host injects `ILogger<T>` and `ILoggerFactory` services into constructors. However, by default these new logging filters are filtered out of the function logs. You need to modify the `host.json` file to opt-in to additional filters and categories.
+The host injects `ILogger<T>` and `ILoggerFactory` services into constructors. However, by default these new logging filters are filtered out of the function logs. You need to modify the `host.json` file to opt in to extra filters and categories.
The following example demonstrates how to add an `ILogger<HttpTrigger>` with logs that are exposed to the host.
Overriding services provided by the host is currently not supported. If there a
Values defined in [app settings](./functions-how-to-use-azure-function-app-settings.md#settings) are available in an `IConfiguration` instance, which allows you to read app settings values in the startup class.
-You can extract values from the `IConfiguration` instance into a custom type. Copying the app settings values to a custom type makes it easy test your services by making these values injectable. Settings read into the configuration instance must be simple key/value pairs. Please note that, the functions running on Elastic Premium SKU has this constraint "App setting names can only contain letters, numbers (0-9), periods ("."), colons (":") and underscores ("_")"
+You can extract values from the `IConfiguration` instance into a custom type. Copying the app settings values to a custom type makes it easy to test your services by making these values injectable. Settings read into the configuration instance must be simple key/value pairs. For functions running in an Elastic Premium plan, application setting names can only contain letters, numbers (`0-9`), periods (`.`), colons (`:`), and underscores (`_`). For more information, see [App setting considerations](functions-app-settings.md#app-setting-considerations).
Consider the following class that includes a property named consistent with an app setting:
public class HttpTrigger
} ```
-Refer to [Options pattern in ASP.NET Core](/aspnet/core/fundamentals/configuration/options) for more details regarding working with options.
+For more information, see [Options pattern in ASP.NET Core](/aspnet/core/fundamentals/configuration/options).
## Using ASP.NET Core user secrets
-When developing locally, ASP.NET Core provides a [Secret Manager tool](/aspnet/core/security/app-secrets#secret-manager) that allows you to store secret information outside the project root. It makes it less likely that secrets are accidentally committed to source control. Azure Functions Core Tools (version 3.0.3233 or later) automatically reads secrets created by the ASP.NET Core Secret Manager.
+When you develop your app locally, ASP.NET Core provides a [Secret Manager tool](/aspnet/core/security/app-secrets#secret-manager) that allows you to store secret information outside the project root. It makes it less likely that secrets are accidentally committed to source control. Azure Functions Core Tools (version 3.0.3233 or later) automatically reads secrets created by the ASP.NET Core Secret Manager.
To configure a .NET Azure Functions project to use user secrets, run the following command in the project root.
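The command isn't reproduced in this digest; it's most likely the .NET Secret Manager initializer, sketched here with a placeholder secret name and value.

```Bash
# Initialize user secrets for the project (adds a UserSecretsId to the .csproj).
dotnet user-secrets init

# Store an example secret outside the project root; the name and value are placeholders.
dotnet user-secrets set "MyCustomSetting" "example-value"

# List the secrets that Core Tools reads when you run the app locally.
dotnet user-secrets list
```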
To access user secrets values in your function app code, use `IConfiguration` or
## Customizing configuration sources
-To specify additional configuration sources, override the `ConfigureAppConfiguration` method in your function app's `StartUp` class.
+To specify other configuration sources, override the `ConfigureAppConfiguration` method in your function app's `StartUp` class.
-The following sample adds configuration values from a base and an optional environment-specific app settings files.
+The following sample adds configuration values from both base and optional environment-specific app settings files.
```csharp using System.IO;
Add configuration providers to the `ConfigurationBuilder` property of `IFunction
A `FunctionsHostBuilderContext` is obtained from `IFunctionsConfigurationBuilder.GetContext()`. Use this context to retrieve the current environment name and resolve the location of configuration files in your function app folder.
-By default, configuration files such as *appsettings.json* are not automatically copied to the function app's output folder. Update your *.csproj* file to match the following sample to ensure the files are copied.
+By default, configuration files such as `appsettings.json` aren't automatically copied to the function app's output folder. Update your `.csproj` file to match the following sample to ensure the files are copied.
```xml <None Update="appsettings.json">
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
Keep the following considerations in mind when working with slot deployments:
:::zone pivot="premium-plan,dedicated-plan" ## Secured deployments
-You can create your function app in a deployment where one or more of the resources have been secured by integrating with virtual networks. Virtual network integration for your function app is defined by a `Microsoft.Web/sites/networkConfig` resource. This integration depends on both the referenced function app and virtual network resources. You function app might also depend on other private networking resources, such as private endpoints and routes. For more information, see [Azure Functions networking options](functions-networking-options.md).
+You can create your function app in a deployment where one or more of the resources have been secured by integrating with virtual networks. Virtual network integration for your function app is defined by a `Microsoft.Web/sites/networkConfig` resource. This integration depends on both the referenced function app and virtual network resources. Your function app might also depend on other private networking resources, such as private endpoints and routes. For more information, see [Azure Functions networking options](functions-networking-options.md).
When creating a deployment that uses a secured storage account, you must both explicitly set the `WEBSITE_CONTENTSHARE` setting and create the file share resource named in this setting. Make sure you create a `Microsoft.Storage/storageAccounts/fileServices/shares` resource using the value of `WEBSITE_CONTENTSHARE`, as shown in this example ([ARM template](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-private-endpoints-storage-private-endpoints/azuredeploy.json#L467)|[Bicep file](https://github.com/Azure-Samples/function-app-arm-templates/blob/main/function-app-private-endpoints-storage-private-endpoints/main.bicep#L351)).
azure-functions Functions Recover Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-recover-storage-account.md
For function apps that run on Linux in a container, the `Azure Functions runtime
1. Check for any logged errors that indicate that the container is unable to start successfully.
-### Container image unavailable
+## Container image unavailable
Errors can occur when the container image being referenced is unavailable or fails to start correctly. Check for any logged errors that indicate that the container is unable to start successfully. You need to correct any errors that prevent the container from starting so that the function app runs correctly.
-When the container image can't be found, you'll see a `manifest unknown` error in the Docker logs. In this case, you can use the Azure CLI commands documented at [How to target Azure Functions runtime versions](set-runtime-version.md?tabs=azurecli#manual-version-updates-on-linux) to change the container image being referenced. If you've deployed a [custom container image](./functions-how-to-custom-container.md), you need to fix the image and redeploy the updated version to the referenced registry.
+When the container image can't be found, you see a `manifest unknown` error in the Docker logs. In this case, you can use the Azure CLI commands documented at [How to target Azure Functions runtime versions](set-runtime-version.md?tabs=azurecli#manual-version-updates-on-linux) to change the container image being referenced. If you've deployed a [custom container image](./functions-how-to-custom-container.md), you need to fix the image and redeploy the updated version to the referenced registry.
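For illustration, a change like that might look as follows with the Azure CLI. The app, resource group, and image are placeholders; check the linked article for the exact image tags and parameters supported by your CLI version.

```Bash
# Point the function app at a known-good container image (placeholder values).
# Older Azure CLI versions use --docker-custom-image-name instead of --image.
az functionapp config container set \
    --name "<function-app-name>" \
    --resource-group "<resource-group>" \
    --image "mcr.microsoft.com/azure-functions/dotnet:4"
```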
-### App container has conflicting ports
+## App container has conflicting ports
Your function app might be in an unresponsive state due to conflicting port assignment upon startup. This can happen in the following cases:
Starting with version 3.x of the Functions runtime, [host ID collision](storage-
## Read-only app settings
-Changing any _read-only_ [App Service application settings](../app-service/reference-app-settings.md#app-environment) can put your function app into an unreachable state.
+Changing any _read-only_ [App Service application settings](../app-service/reference-app-settings.md#app-environment) can put your function app into an unreachable state.
+
+## ASP.NET authentication overrides
+_Applies only to C# apps running [in-process with the Functions host](./functions-dotnet-class-library.md)._
+
+Configuring ASP.NET authentication in a Functions startup class can override services that are required for the Azure portal to communicate with the host. This includes, but isn't limited to, any calls to `AddAuthentication()`. If the host's authentication services are overridden and the portal can't communicate with the host, it considers the app unreachable. This issue might result in errors such as `No authentication handler is registered for the scheme 'ArmToken'.`
## Next steps
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
# Install the Log Analytics agent on Linux computers+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
This article provides details on installing the Log Analytics agent on Linux computers hosted in other clouds or on-premises. [!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
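For context, the installation that article walks through is typically a wrapper script run with your workspace details. This is only a sketch with placeholder workspace values; given the agent's deprecation noted above, verify the current guidance before using it.

```Bash
# Download and run the Log Analytics agent (OMS agent) install script for Linux.
# The workspace ID and primary key are placeholders.
wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh
sh onboard_agent.sh -w "<workspace-id>" -s "<workspace-primary-key>"
```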
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Last updated 7/19/2023
-# Customer intent: As an IT manager, I want to understand the capabilities of Azure Monitor Agent to determine whether I can use the agent to collect the data I need from the operating systems of my virtual machines.
+# Customer intent: As an IT manager, I want to understand the capabilities of Azure Monitor Agent to determine whether I can use the agent to collect the data I need from the operating systems of my virtual machines.
+ # Azure Monitor Agent overview
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Azure Monitor Agent (AMA) collects monitoring data from the guest operating system of Azure and hybrid virtual machines and delivers it to Azure Monitor for use by features, insights, and other services, such as [Microsoft Sentinel](../../sentintel/../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). Azure Monitor Agent replaces all of Azure Monitor's legacy monitoring agents. This article provides an overview of Azure Monitor Agent's capabilities and supported use cases.
-Here's a short **introduction to Azure Monitor agent video**, which includes a quick demo of how to set up the agent from the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
+Here's a short **introduction to Azure Monitor agent video**, which includes a quick demo of how to set up the agent from the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
## Benefits
-Using Azure Monitor agent, you get immediate benefits as shown below:
+Using Azure Monitor agent, you get immediate benefits as shown below:
:::image type="content" source="media/azure-monitor-agent-overview/azure-monitor-agent-benefits.png" lightbox="media/azure-monitor-agent-overview/azure-monitor-agent-benefits.png" alt-text="Snippet of the Azure Monitor Agent benefits at a glance. This is described in more details below."::: - **Cost savings** by [using data collection rules](data-collection-rule-azure-monitor-agent.md): - Enables targeted and granular data collection for a machine or subset(s) of machines, as compared to the "all or nothing" approach of legacy agents.
- - Allows filtering rules and data transformations to reduce the overall data volume being uploaded, thus lowering ingestion and storage costs significantly.
+ - Allows filtering rules and data transformations to reduce the overall data volume being uploaded, thus lowering ingestion and storage costs significantly.
- **Simpler management** including efficient troubleshooting: - Supports data uploads to multiple destinations (multiple Log Analytics workspaces, i.e. *multihoming* on Windows and Linux) including cross-region and cross-tenant data collection (using Azure LightHouse).
- - Centralized agent configuration "in the cloud" for enterprise scale throughout the data collection lifecycle, from onboarding to deployment to updates and changes over time.
+ - Centralized agent configuration "in the cloud" for enterprise scale throughout the data collection lifecycle, from onboarding to deployment to updates and changes over time.
- Any change in configuration is rolled out to all agents automatically, without requiring a client side deployment. - Greater transparency and control of more capabilities and services, such as Microsoft Sentinel, Defender for Cloud, and VM Insights. - **Security and Performance**
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
| On-premises servers (Azure Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing the [Azure Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent by using Azure extension framework, provided for on-premises by first installing [Azure Arc agent](../../azure-arc/servers/deployment-options.md). | | Windows 10, 11 desktops, workstations | [Client installer](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. | | Windows 10, 11 laptops | [Client installer](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. The installer works on laptops, but the agent *isn't optimized yet* for battery or network consumption. |
-
+ 1. Define a data collection rule and associate the resource with the rule. The following table lists the types of data you can currently collect with the Azure Monitor Agent and where you can send that data.
The tables below provide a comparison of Azure Monitor Agent with the legacy the
## Supported operating systems
-The following tables list the operating systems that Azure Monitor Agent and the legacy agents support. All operating systems are assumed to be x64. x86 isn't supported for any operating system.
+The following tables list the operating systems that Azure Monitor Agent and the legacy agents support. All operating systems are assumed to be x64. x86 isn't supported for any operating system.
View [supported operating systems for Azure Arc Connected Machine agent](../../azure-arc/servers/prerequisites.md#supported-operating-systems), which is a prerequisite to run Azure Monitor agent on physical servers and virtual machines hosted outside of Azure (that is, on-premises) or in other clouds.

### Windows
-| Operating system | Azure Monitor agent | Log Analytics agent (legacy) | Diagnostics extension |
+| Operating system | Azure Monitor agent | Log Analytics agent (legacy) | Diagnostics extension |
|:|::|::|::|
| Windows Server 2022 | ✓ | ✓ | |
| Windows Server 2022 Core | ✓ | | |
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Windows 11 Client and Pro | ✓<sup>2</sup>, <sup>3</sup> | | |
| Windows 11 Enterprise<br>(including multi-session) | ✓ | | |
| Windows 10 1803 (RS4) and higher | ✓<sup>2</sup> | | |
-| Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only) | ✓ | ✓ | ✓ |
+| Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only) | ✓ | ✓ | ✓ |
| Windows 8 Enterprise and Pro<br>(Server scenarios only) | | ✓<sup>1</sup> | |
| Windows 7 SP1<br>(Server scenarios only) | | ✓<sup>1</sup> | |
| Azure Stack HCI | ✓ | ✓ | |
An agent is only required to collect data from the operating system and workload
### How can I be notified when data collection from the Log Analytics agent stops?
-Use the steps described in [Create a new log alert](../alerts/alerts-metric.md) to be notified when data collection stops. Use the following settings for the alert rule:
-
+Use the steps described in [Create a new log search alert](../alerts/alerts-metric.md) to be notified when data collection stops. Use the following settings for the alert rule:
+ - **Define alert condition**: Specify your Log Analytics workspace as the resource target. - **Alert criteria**: - **Signal Name**: *Custom log search*.
Use the steps described in [Create a new log alert](../alerts/alerts-metric.md)
- **Define alert details**: - **Name**: *Data collection stopped*. - **Severity**: *Warning*.
-
-Specify an existing or new [action group](../alerts/action-groups.md) so that when the log alert matches criteria, you're notified if you have a heartbeat missing for more than 15 minutes.
-
+
+Specify an existing or new [action group](../alerts/action-groups.md) so that when the log search alert matches criteria, you're notified if you have a heartbeat missing for more than 15 minutes.
+ ### Will Azure Monitor Agent support data collection for the various Log Analytics solutions and Azure services like Microsoft Defender for Cloud and Microsoft Sentinel?
-Review the list of [Azure Monitor Agent extensions currently available in preview](#supported-services-and-features). These extensions are the same solutions and services now available by using the new Azure Monitor Agent instead.
+Review the list of [Azure Monitor Agent extensions currently available in preview](#supported-services-and-features). These extensions are the same solutions and services now available by using the new Azure Monitor Agent instead.
+
+You might see more extensions getting installed for the solution or service to collect extra data or perform transformation or processing as required for the solution or service. Then use Azure Monitor Agent to route the final data to Azure Monitor.
-You might see more extensions getting installed for the solution or service to collect extra data or perform transformation or processing as required for the solution or service. Then use Azure Monitor Agent to route the final data to Azure Monitor.
-
The following diagram explains the new extensibility architecture.
-
+ :::image type="content" source="./media/azure-monitor-agent/extensibility-arch-new.png" lightbox="./media/azure-monitor-agent/extensibility-arch-new.png" alt-text="Diagram that shows extensions architecture."::: ### Is Azure Monitor Agent at parity with the Log Analytics agents?
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
# Azure Monitor agent extension versions+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article describes the version details for the Azure Monitor agent virtual machine extension. This extension deploys the agent on virtual machines, scale sets, and Arc-enabled servers (on-premises servers with the Azure Arc agent installed). We strongly recommend that you always update to the latest version, or opt in to the [Automatic Extension Update](../../virtual-machines/automatic-extension-upgrade.md) feature.
We strongly recommended to always update to the latest version, or opt in to the
| Mar 2023 | **Windows** <ul><li>Text file collection improvements to handle high rate logging and continuous tailing of longer lines</li><li>VM Insights fixes for collecting metrics from non-English OS</li></ul> | 1.14.0.0 | None | | Feb 2023 | <ul><li>**Linux (hotfix)** Resolved potential data loss due to "Bad file descriptor" errors seen in the mdsd error log with previous version. Upgrade to hotfix version</li><li>**Windows** Reliability improvements in Fluentbit buffering to handle larger text files</li></ul> | 1.13.1 | 1.25.2<sup>Hotfix</sup> | | Jan 2023 | **Linux** <ul><li>RHEL 9 and Amazon Linux 2 support</li><li>Update to OpenSSL 1.1.1s and require TLS 1.2 or higher</li><li>Performance improvements</li><li>Improvements in Garbage Collection for persisted disk cache and handling corrupted cache files better</li><li>**Fixes** <ul><li>Set agent service memory limit for CentOS/RedHat 7 distros. Resolved MemoryMax parsing error</li><li>Fixed modifying rsyslog system-wide log format caused by installer on RedHat/Centos 7.3</li><li>Fixed permissions to config directory</li><li>Installation reliability improvements</li><li>Fixed permissions on default file so rpm verification doesn't fail</li><li>Added traceFlags setting to enable trace logs for agent</li></ul></li></ul> **Windows** <ul><li>Fixed issue related to incorrect *EventLevel* and *Task* values for Log Analytics *Event* table, to match Windows Event Viewer values</li><li>Added missing columns for IIS logs - *TimeGenerated, Time, Date, Computer, SourceSystem, AMA, W3SVC, SiteName*</li><li>Reliability improvements for metrics collection</li><li>Fixed machine restart issues on for Arc-enabled servers related to repeated calls to HIMDS service</li></ul> | 1.12.0 | 1.25.1 |
-| Nov-Dec 2022 | <ul><li>Support for air-gapped clouds added for [Windows MSI installer for clients](./azure-monitor-agent-windows-client.md) </li><li>Reliability improvements for using AMA with Custom Metrics destination</li><li>Performance and internal logging improvements</li></ul> | 1.11.0 | None |
-| Oct 2022 | **Windows** <ul><li>Increased reliability of data uploads</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1 MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lock down write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 |
-| Sep 2022 | Reliability improvements | 1.9.0 | None |
+| Nov-Dec 2022 | <ul><li>Support for air-gapped clouds added for [Windows MSI installer for clients](./azure-monitor-agent-windows-client.md) </li><li>Reliability improvements for using AMA with Custom Metrics destination</li><li>Performance and internal logging improvements</li></ul> | 1.11.0 | None |
+| Oct 2022 | **Windows** <ul><li>Increased reliability of data uploads</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1 MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lock down write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 |
+| Sep 2022 | Reliability improvements | 1.9.0 | None |
| August 2022 | **Common updates** <ul><li>Improved resiliency: Default lookback (retry) time updated to last three days (72 hours) up from 60 minutes, for agent to collect data post interruption. Look back time is subject to default offline cache size of 10 Gb</li><li>Fixes the preview custom text log feature that was incorrectly removing the *TimeGenerated* field from the raw data of each event. All events are now additionally stamped with agent (local) upload time</li><li>Reliability and supportability improvements</li></ul> **Windows** <ul><li>Fixed datetime format to UTC</li><li>Fix to use default location for firewall log collection, if not provided</li><li>Reliability and supportability improvements</li></ul> **Linux** <ul><li>Support for OpenSuse 15, Debian 11 ARM64</li><li>Support for coexistence of Azure Monitor agent with legacy Azure Diagnostic extension for Linux (LAD)</li><li>Increased max-size of UDP payload for Telegraf output to prevent dimension truncation</li><li>Prevent unconfigured upload to Azure Monitor Metrics destination</li><li>Fix for disk metrics wherein *instance name* dimension will use the disk mount path(s) instead of the device name(s), to provide parity with legacy agent</li><li>Fixed *disk free MB* metric to report megabytes instead of bytes</li></ul> | 1.8.0 | 1.22.2 | | July 2022 | Fix for mismatch event timestamps for Sentinel Windows Event Forwarding | 1.7.0 | None | | June 2022 | Bug fixes with user assigned identity support, and reliability improvements | 1.6.0 | None |
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Before you begin migrating from the Log Analytics agent to Azure Monitor Agent,
### Before you begin > [!div class="checklist"]
-> - **Check the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for installing Azure Monitor Agent.**<br>To monitor non-Azure and on-premises servers, you must [install the Azure Arc agent](../../azure-arc/servers/agent-overview.md). You won't incur an additional cost for installing the Azure Arc agent and you don't necessarily need to use Azure Arc to manage your non-Azure virtual machines.
+> - **Check the [prerequisites](./azure-monitor-agent-manage.md#prerequisites) for installing Azure Monitor Agent.**<br>To monitor non-Azure and on-premises servers, you must [install the Azure Arc agent](../../azure-arc/servers/agent-overview.md). The Arc agent makes your on-premises servers visible to Azure as a resource it can target. You won't incur any additional cost for installing the Azure Arc agent.
> - **Understand your current needs.**<br>Use the **Workspace overview** tab of the [AMA Migration Helper](./azure-monitor-agent-migration-tools.md#using-ama-migration-helper) to see connected agents and discover solutions enabled on your Log Analytics workspaces that use legacy agents, including per-solution migration recommendations. > - **Verify that Azure Monitor Agent can address all of your needs.**<br>Azure Monitor Agent is generally available for data collection and is used for data collection by various Azure Monitor features and other Azure services. For details, see [Supported services and features](#migrate-additional-services-and-features). > - **Consider installing Azure Monitor Agent together with a legacy agent for a transition period.**<br>Run Azure Monitor Agent alongside the legacy Log Analytics agent on the same machine to continue using existing functionality during evaluation or migration. Keep in mind that running two agents on the same machine doubles resource consumption, including but not limited to CPU, memory, storage space, and network bandwidth.<br>
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
# Syslog troubleshooting guide for Azure Monitor Agent for Linux
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Overview of Azure Monitor Agent for Linux Syslog collection and supported RFC standards: - Azure Monitor Agent installs an output configuration for the system Syslog daemon during the installation process. The configuration file specifies the way events flow between the Syslog daemon and Azure Monitor Agent.
azure-monitor Data Collection Snmp Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-snmp-data.md
# Collect SNMP trap data with Azure Monitor Agent+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
Simple Network Management Protocol (SNMP) is a widely-deployed management protocol for monitoring and configuring Linux devices and appliances.
azure-monitor Data Collection Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md
# Collect Syslog events with Azure Monitor Agent
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Syslog is an event logging protocol that's common to Linux. You can use the Syslog daemon that's built in to Linux devices and appliances to collect local events of the types you specify. Then you can have it send those events to a Log Analytics workspace. Applications send messages that might be stored on the local machine or delivered to a Syslog collector. When the Azure Monitor agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent when Syslog collection is enabled in [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). Azure Monitor Agent then sends the messages to an Azure Monitor or Log Analytics workspace where a corresponding Syslog record is created in a [Syslog table](/azure/azure-monitor/reference/tables/syslog).
azure-monitor Data Sources Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-syslog.md
# Collect Syslog data sources with the Log Analytics agent
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Syslog is an event logging protocol that's common to Linux. Applications send messages that might be stored on the local machine or delivered to a Syslog collector. When the Log Analytics agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent. The agent then sends the messages to Azure Monitor where a corresponding record is created. [!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
azure-monitor Troubleshooter Ama Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/troubleshooter-ama-linux.md
Last updated 12/14/2023 - # Customer intent: When AMA is experiencing issues, I want to investigate the issues and determine if I can resolve the issue on my own. # How to use the Linux operating system (OS) Azure Monitor Agent Troubleshooter+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ The Azure Monitor Agent (AMA) Troubleshooter is designed to help identify issues with the agent and perform general health assessments. It can perform various checks to ensure that the agent is properly installed and connected, and can also gather AMA-related logs from the machine being diagnosed. > [!Note]
If directory doesn't exist or the installation is failed, follow [Basic troubles
If the directory exists, proceed to [Run the Troubleshooter](#run-the-troubleshooter). ## Run the Troubleshooter
-On the machine to be diagnosed, run the Agent Troubleshooter.
+On the machine to be diagnosed, run the Agent Troubleshooter.
**Log Mode** enables the collection of logs, which can then be compressed into .tgz format for export or review. **Interactive Mode** allows users to actively engage in troubleshooting scenarios and view the output directly within the shell.
To start the Agent Troubleshooter in log mode, copy the following command and ru
```Bash cd /var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-{version}/ama_tst/
-sudo sh ama_troubleshooter.sh -L
+sudo sh ama_troubleshooter.sh -L
``` Enter a path to output logs to. For instance, you might use **/tmp**.
To start the Agent Troubleshooter in interactive mode, copy the following comman
```Bash cd /var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-{version}/ama_tst/
-sudo sh ama_troubleshooter.sh -A
+sudo sh ama_troubleshooter.sh -A
``` It runs a series of scenarios and displays the results.
It runs a series of scenarios and displays the results.
### Can I copy the Troubleshooter from a newer agent to an older agent and run it on the older agent to diagnose issues with the older agent? It isn't possible to use the Troubleshooter to diagnose an older version of the agent by copying it. You must have an up-to-date version of the agent for the Troubleshooter to work properly.
-
+ ## Next Steps - [Troubleshooting guidance for the Azure Monitor agent](../agents/azure-monitor-agent-troubleshoot-linux-vm.md) on Linux virtual machines and scale sets - [Syslog troubleshooting guide for Azure Monitor Agent](../agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) for Linux
azure-monitor Alerts Common Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema.md
# Common alert schema
-The common alert schema standardizes the consumption of Azure Monitor alert notifications. Historically, activity log, metric, and log alerts each had their own email templates and webhook schemas. The common alert schema provides one standardized schema for all alert notifications.
+The common alert schema standardizes the consumption of Azure Monitor alert notifications. Historically, activity log, metric, and log search alerts each had their own email templates and webhook schemas. The common alert schema provides one standardized schema for all alert notifications.
Using a standardized schema helps minimize the number of integrations, which simplifies the process of managing and maintaining your integrations. The common schema enables a richer alert consumption experience in both the Azure portal and the Azure mobile app.
For sample alerts that use the common schema, see [Sample alert payloads](alerts
| signalType | Identifies the signal on which the alert rule was defined. Possible values are Metric, Log, or Activity Log. | | monitorCondition | When an alert fires, the alert's monitor condition is set to **Fired**. When the underlying condition that caused the alert to fire clears, the monitor condition is set to **Resolved**. | | monitoringService | The monitoring service or solution that generated the alert. The monitoring service determines which fields are in the alert context. |
-| alertTargetIDs | The list of the Azure Resource Manager IDs that are affected targets of an alert. For a log alert defined on a Log Analytics workspace or Application Insights instance, it's the respective workspace or application. |
-| configurationItems |The list of affected resources of an alert.<br>In some cases, the configuration items can be different from the alert targets. For example, in metric-for-log or log alerts defined on a Log Analytics workspace, the configuration items are the actual resources sending the data, and not the workspace.<br><ul><li>In the log alerts API (Scheduled Query Rules) v2021-08-01, the `configurationItem` values are taken from explicitly defined dimensions in this priority: `_ResourceId`, `ResourceId`, `Resource`, `Computer`.</li><li>In earlier versions of the log alerts API, the `configurationItem` values are taken implicitly from the results in this priority: `_ResourceId`, `ResourceId`, `Resource`, `Computer`.</li></ul>In ITSM systems, the `configurationItems` field is used to correlate alerts to resources in a configuration management database. |
+| alertTargetIDs | The list of the Azure Resource Manager IDs that are affected targets of an alert. For a log search alert defined on a Log Analytics workspace or Application Insights instance, it's the respective workspace or application. |
+| configurationItems |The list of affected resources of an alert.<br>In some cases, the configuration items can be different from the alert targets. For example, in metric-for-log or log search alerts defined on a Log Analytics workspace, the configuration items are the actual resources sending the data, and not the workspace.<br><ul><li>In the log search alerts API (Scheduled Query Rules) v2021-08-01, the `configurationItem` values are taken from explicitly defined dimensions in this priority: `_ResourceId`, `ResourceId`, `Resource`, `Computer`.</li><li>In earlier versions of the log search alerts API, the `configurationItem` values are taken implicitly from the results in this priority: `_ResourceId`, `ResourceId`, `Resource`, `Computer`.</li></ul>In ITSM systems, the `configurationItems` field is used to correlate alerts to resources in a configuration management database. |
| originAlertId | The ID of the alert instance, as generated by the monitoring service generating it. | | firedDateTime | The date and time when the alert instance was fired in Coordinated Universal Time (UTC). | | resolvedDateTime | The date and time when the monitor condition for the alert instance is set to **Resolved** in UTC. Currently only applicable for metric alerts.|
For sample alerts that use the common schema, see [Sample alert payloads](alerts
} ```
-## Alert context fields for Log alerts
+## Alert context fields for log search alerts
> [!NOTE]
-> When you enable the common schema, the fields in the payload are reset to the common schema fields. Therefore, log alerts have these limitations regarding the common schema:
-> - The common schema is not supported for log alerts using webhooks with a custom email subject and/or JSON payload, since the common schema overwrites the custom configurations.
-> - Alerts using the common schema have an upper size limit of 256 KB per alert. If the log alerts payload includes search results that cause the alert to exceed the maximum size, the search results aren't embedded in the log alerts payload. You can check if the payload includes the search results with the `IncludedSearchResults` flag. Use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get) if the search results are not included.
+> When you enable the common schema, the fields in the payload are reset to the common schema fields. Therefore, log search alerts have these limitations regarding the common schema:
+> - The common schema is not supported for log search alerts using webhooks with a custom email subject and/or JSON payload, since the common schema overwrites the custom configurations.
+> - Alerts using the common schema have an upper size limit of 256 KB per alert. If the log search alerts payload includes search results that cause the alert to exceed the maximum size, the search results aren't embedded in the log search alerts payload. You can check if the payload includes the search results with the `IncludedSearchResults` flag. Use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get) if the search results are not included.
|Field |Description |
|||
For sample alerts that use the common schema, see [Sample alert payloads](alerts
-### Sample log alert when the monitoringService = Log Analytics
+### Sample log search alert when the monitoringService = Log Analytics
```json {
For sample alerts that use the common schema, see [Sample alert payloads](alerts
} } ```
-### Sample log alert when the monitoringService = Application Insights
+### Sample log search alert when the monitoringService = Application Insights
```json {
For sample alerts that use the common schema, see [Sample alert payloads](alerts
} } ```
-### Sample log alert when the monitoringService = Log Alerts V2
+### Sample log search alert when the monitoringService = Log Alerts V2
> [!NOTE]
-> Log alert rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when you use this version. Use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload.
+> Log search alert rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log search alerts payload when you use this version. Use [dimensions](./alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload.
```json {
For sample alerts that use the common schema, see [Sample alert payloads](alerts
## Alert context fields for activity log alerts See [Azure activity log event schema](../essentials/activity-log-schema.md) for detailed information about the fields in activity log alerts.+ ### Sample activity log alert when the monitoringService = Activity Log - Administrative ```json
azure-monitor Alerts Create Log Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-log-alert-rule.md
Title: Create Azure Monitor log alert rules
-description: This article shows you how to create a new log alert rule.
+ Title: Create Azure Monitor log search alert rules
+description: This article shows you how to create a new log search alert rule.
Last updated 11/27/2023
-# Create or edit a log alert rule
+# Create or edit a log search alert rule
-This article shows you how to create a new log alert rule or edit an existing log alert rule. To learn more about alerts, see the [alerts overview](alerts-overview.md).
+This article shows you how to create a new log search alert rule or edit an existing log search alert rule. To learn more about alerts, see the [alerts overview](alerts-overview.md).
You create an alert rule by combining the resources to be monitored, the monitoring data from the resource, and the conditions that you want to trigger the alert. You can then define [action groups](./action-groups.md) and [alert processing rules](alerts-action-rules.md) to determine what happens when an alert is triggered.
Alerts triggered by these alert rules contain a payload that uses the [common al
1. On the **Logs** pane, write a query that returns the log events for which you want to create an alert. To use one of the predefined alert rule queries, expand the **Schema and filter** pane on the left of the **Logs** pane. Then select the **Queries** tab, and select one of the queries. > [!NOTE]
- > Log alert rule queries do not support the 'bag_unpack()', 'pivot()' and 'narrow()' plugins.
+ > Log search alert rule queries do not support the 'bag_unpack()', 'pivot()' and 'narrow()' plugins.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log search alert rule.":::
1. (Optional) If you're querying an ADX or ARG cluster, Log Analytics can't automatically identify the column with the event timestamp, so we recommend that you add a time range filter to the query. For example:
Alerts triggered by these alert rules contain a payload that uses the [common al
| project _ResourceId=tolower(id), tags ```
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot that shows the Condition tab when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot that shows the Condition tab when creating a new log search alert rule.":::
- For sample log alert queries that query ARG or ADX, see [log alert query samples](./alerts-log-alert-query-samples.md)
+ For sample log search alert queries that query ARG or ADX, see [Log search alert query samples](./alerts-log-alert-query-samples.md)
1. Select **Run** to run the alert. 1. The **Preview** section shows you the query results. When you're finished editing your query, select **Continue Editing Alert**. 1. The **Condition** tab opens populated with your log query. By default, the rule counts the number of results in the last five minutes. If the system detects summarized query results, the rule is automatically updated with that information. 1. In the **Measurement** section, select values for these fields:
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot that shows the Measurement tab when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot that shows the Measurement tab when creating a new log search alert rule.":::
    |Field |Description |
    |||
- |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage. |
+ |Measure|Log search alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage. |
|Aggregation type| The calculation performed on multiple records to aggregate them to one numeric value by using the aggregation granularity. Examples are Total, Average, Minimum, or Maximum. | |Aggregation granularity| The interval for aggregating multiple records to one numeric value.| 1. <a name="dimensions"></a>(Optional) In the **Split by dimensions** section, you can use dimensions to help provide context for the triggered alert.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot that shows the splitting by dimensions section of a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot that shows the splitting by dimensions section of a new log search alert rule.":::
Dimensions are columns from your query results that contain additional data. When you use dimensions, the alert rule groups the query results by the dimension values and evaluates the results of each group separately. If the condition is met, the rule fires an alert for that group. The alert payload includes the combination that triggered the alert.
Alerts triggered by these alert rules contain a payload that uses the [common al
1. In the **Alert logic** section, select values for these fields:
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot that shows the Alert logic section of a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot that shows the Alert logic section of a new log search alert rule.":::
|Field |Description | |||
Alerts triggered by these alert rules contain a payload that uses the [common al
> * The query uses the **adx** pattern > * The query calls a function that calls other tables
- For sample log alert queries that query ARG or ADX, see [log alert query samples](./alerts-log-alert-query-samples.md)
+ For sample log search alert queries that query ARG or ADX, see [Log search alert query samples](./alerts-log-alert-query-samples.md)
1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. Your application business policy determines this setting.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot that shows the Advanced options section of a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot that shows the Advanced options section of a new log search alert rule.":::
Select values for these fields under **Number of violations to trigger the alert**:
Alerts triggered by these alert rules contain a payload that uses the [common al
1. Define the **Alert rule details**.
- :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log alert rule.":::
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log search alert rule.":::
1. Select the **Severity**. 1. Enter values for the **Alert rule name** and the **Alert rule description**. 1. Select the **Region**.
- 1. <a name="managed-id"></a>In the **Identity** section, select which identity is used by the log alert rule to send the log query. This identity is used for authentication when the alert rule executes the log query.
+ 1. <a name="managed-id"></a>In the **Identity** section, select which identity is used by the log search alert rule to send the log query. This identity is used for authentication when the alert rule executes the log query.
Keep these things in mind when selecting an identity: - A managed identity is required if you're sending a query to Azure Data Explorer.
Alerts triggered by these alert rules contain a payload that uses the [common al
- Use a managed identity to help you avoid a case where the rule doesn't work as expected because the user that last edited the rule didn't have permissions for all the resources added to the scope of the rule. The identity associated with the rule must have these roles:
- - If the query is accessing a Log Analytics workspace, the identity must be assigned a **Reader role** for all workspaces accessed by the query. If you're creating resource-centric log alerts, the alert rule may access multiple workspaces, and the identity must have a reader role on all of them.
+ - If the query is accessing a Log Analytics workspace, the identity must be assigned a **Reader role** for all workspaces accessed by the query. If you're creating resource-centric log search alerts, the alert rule may access multiple workspaces, and the identity must have a reader role on all of them.
- If you're querying an ADX or ARG cluster, you must add the **Reader role** for all data sources accessed by the query. For example, if the query is resource centric, it needs a reader role on those resources. - If the query is [accessing a remote Azure Data Explorer cluster](../logs/azure-monitor-data-explorer-proxy.md), the identity must be assigned: - **Reader role** for all data sources accessed by the query. For example, if the query is calling a remote Azure Data Explorer cluster using the adx() function, it needs a reader role on that ADX cluster (see the role-assignment sketch after this list).
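As an illustration only, granting a rule's managed identity the **Reader** role on a Log Analytics workspace could look like the following Azure CLI sketch; the principal ID and workspace resource ID are placeholders, not values from this article.

```azurecli
# Sketch: assign the Reader role to the alert rule's managed identity on a Log Analytics workspace.
# The principal ID and workspace resource ID below are placeholders.
az role assignment create \
  --assignee "<managed-identity-principal-id>" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```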
Alerts triggered by these alert rules contain a payload that uses the [common al
|Field |Description | ||| |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|
- |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met for a specific time range. The time range differs based on the frequency of the alert:<br>**1 minute**: The alert condition isn't met for 10 minutes.<br>**5-15 minutes**: The alert condition isn't met for three frequency periods.<br>**15 minutes - 11 hours**: The alert condition isn't met for two frequency periods.<br>**11 to 12 hours**: The alert condition isn't met for one frequency period. <br><br>Note that stateful log alerts have these limitations:<br> - they can trigger up to 300 alerts per evaluation.<br> - you can have a maximum of 5000 alerts with the `fired` alert condition.|
+ |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met for a specific time range. The time range differs based on the frequency of the alert:<br>**1 minute**: The alert condition isn't met for 10 minutes.<br>**5-15 minutes**: The alert condition isn't met for three frequency periods.<br>**15 minutes - 11 hours**: The alert condition isn't met for two frequency periods.<br>**11 to 12 hours**: The alert condition isn't met for one frequency period. <br><br>Note that stateful log search alerts have these limitations:<br> - they can trigger up to 300 alerts per evaluation.<br> - you can have a maximum of 5000 alerts with the `fired` alert condition.|
|Mute actions |Select to set a period of time to wait before alert actions are triggered again. If you select this checkbox, the **Mute actions for** field appears to select the amount of time to wait after an alert is fired before triggering actions again.| |Check workspace linked storage|Select this option if linked storage for alerts is configured for the workspace. If no linked storage is configured, the rule isn't created.|
Alerts triggered by these alert rules contain a payload that uses the [common al
## Next steps-- [Log alert query samples](./alerts-log-alert-query-samples.md)
+- [Log search alert query samples](./alerts-log-alert-query-samples.md)
- [View and manage your alert instances](alerts-manage-alert-instances.md)
azure-monitor Alerts Create Rule Cli Powershell Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-rule-cli-powershell-arm.md
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
1. In the [portal](https://portal.azure.com/), select **Cloud Shell**. At the prompt, use these commands: - To create a metric alert rule, use the [az monitor metrics alert create](/cli/azure/monitor/metrics/alert) command.
- - To create a log alert rule, use the [az monitor scheduled-query create](/cli/azure/monitor/scheduled-query) command.
+ - To create a log search alert rule, use the [az monitor scheduled-query create](/cli/azure/monitor/scheduled-query) command.
- To create an activity log alert rule, use the [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert) command. For example, to create a metric alert rule that monitors if average Percentage CPU on a VM is greater than 90:
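One possible sketch of such a command follows; the resource group, rule name, VM resource ID, and action group ID are placeholders you'd replace with your own values.

```azurecli
# Sketch: create a metric alert rule that fires when average Percentage CPU on a VM is greater than 90.
# The resource group, rule name, VM resource ID, and action group ID below are placeholders.
az monitor metrics alert create \
  --name "HighCpuAlert" \
  --resource-group "MyResourceGroup" \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/MyVm" \
  --condition "avg Percentage CPU > 90" \
  --description "Average CPU greater than 90 percent" \
  --action "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Insights/actionGroups/MyActionGroup"
```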
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
- To create a metric alert rule using PowerShell, use the [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2) cmdlet. > [!NOTE] > When you create a metric alert on a single resource, the syntax uses the `TargetResourceId`. When you create a metric alert on multiple resources, the syntax contains the `TargetResourceScope`, `TargetResourceType`, and `TargetResourceRegion`.-- To create a log alert rule using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet.
+- To create a log search alert rule using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet.
- To create an activity log alert rule using PowerShell, use the [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert) cmdlet. ## Create a new alert rule using an ARM template
You can use an [Azure Resource Manager template (ARM template)](../../azure-reso
> - We recommend that you create the metric alert using the same resource group as your target resource. > - Metric alerts for an Azure Log Analytics workspace resource type (`Microsoft.OperationalInsights/workspaces`) are configured differently than other metric alerts. For more information, see [Resource Template for Metric Alerts for Logs](alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs). > - If you are creating a metric alert for a single resource, the template uses the `ResourceId` of the target resource. If you are creating a metric alert for multiple resources, the template uses the `scope`, `TargetResourceType`, and `TargetResourceRegion` for the target resources.
- - For log alerts: `Microsoft.Insights/scheduledQueryRules`
+ - For log search alerts: `Microsoft.Insights/scheduledQueryRules`
- For activity log, service health, and resource health alerts: `microsoft.Insights/activityLogAlerts` 1. Copy one of the templates from these sample ARM templates. - For metric alerts: [Resource Manager template samples for metric alert rules](resource-manager-alerts-metric.md)
- - For log alerts: [Resource Manager template samples for log alert rules](resource-manager-alerts-log.md)
+ - For log search alerts: [Resource Manager template samples for log search alert rules](resource-manager-alerts-log.md)
- For activity log alerts: [Resource Manager template samples for activity log alert rules](resource-manager-alerts-activity-log.md) - For service health alerts: [Resource Manager template samples for service health alert rules](resource-manager-alerts-service-health.md) - For resource health alerts: [Resource Manager template samples for resource health alert rules](resource-manager-alerts-resource-health.md)
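After you copy and customize one of these sample templates, you can deploy it to a resource group. The following Azure CLI sketch assumes placeholder names for the resource group, template file, and parameter file:

```azurecli
# Sketch: deploy a customized alert rule ARM template to a resource group.
# The resource group name, template file, and parameter file are placeholders.
az deployment group create \
  --resource-group "MyResourceGroup" \
  --template-file "alert-rule-template.json" \
  --parameters "@alert-rule-parameters.json"
```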
azure-monitor Alerts Log Alert Query Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-alert-query-samples.md
Title: Samples of Azure Monitor log alert rule queries
-description: See examples of Azure monitor log alert rule queries.
+ Title: Samples of Azure Monitor log search alert rule queries
+description: See examples of Azure Monitor log search alert rule queries.
Last updated 01/04/2024
-# Sample log alert queries that include ADX and ARG
+# Sample log search alert queries that include ADX and ARG
-A log alert rule monitors a resource by using a Log Analytics query to evaluate logs at a set frequency. You can include data from Azure Data Explorer and Azure Resource Graph in your log alert rule queries.
+A log search alert rule monitors a resource by using a Log Analytics query to evaluate logs at a set frequency. You can include data from Azure Data Explorer and Azure Resource Graph in your log search alert rule queries.
-This article provides examples of log alert rule queries that use Azure Data Explorer and Azure Resource Graph. For more information about creating a log alert rule, see [Create a log alert rule](./alerts-create-log-alert-rule.md).
+This article provides examples of log search alert rule queries that use Azure Data Explorer and Azure Resource Graph. For more information about creating a log search alert rule, see [Create a log search alert rule](./alerts-create-log-alert-rule.md).
## Queries that check virtual machine health
This query finds virtual machines marked as critical that had a heartbeat more t
``` ## Next steps-- [Learn more about creating a log alert rule](./alerts-create-log-alert-rule.md)-- [Learn how to optimize log alert queries](./alerts-log-query.md)
+- [Learn more about creating a log search alert rule](./alerts-create-log-alert-rule.md)
+- [Learn how to optimize log search alert queries](./alerts-log-query.md)
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md
Title: Upgrade legacy rules management to the current Azure Monitor Log Alerts API
-description: Learn how to switch to the log alerts management to ScheduledQueryRules API
+ Title: Upgrade legacy rules management to the current Azure Monitor Scheduled Query Rules API
+description: Learn how to switch log search alert management to ScheduledQueryRules API.
Last updated 07/09/2023
-# Upgrade to the Log Alerts API from the legacy Log Analytics alerts API
+# Upgrade to the Scheduled Query Rules API from the legacy Log Analytics Alert API
> [!IMPORTANT]
-> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), the Log Analytics alert API will be retired on October 1, 2025. You must transition to using the Scheduled Query Rules API for log alerts by that date.
-> Log Analytics workspaces created after June 1, 2019 use the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) to manage alert rules. [Switch to the current API](./alerts-log-api-switch.md) in older workspaces to take advantage of Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits).
+> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), the Log Analytics Alert API will be retired on October 1, 2025. You must transition to using the Scheduled Query Rules API for log search alerts by that date.
+> Log Analytics workspaces created after June 1, 2019 use the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) to manage log search alert rules. [Switch to the current API](./alerts-log-api-switch.md) in older workspaces to take advantage of Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits).
> Once you migrate rules to the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules), you cannot revert back to the older [legacy Log Analytics Alert API](/azure/azure-monitor/alerts/api-alerts).
-In the past, users used the [legacy Log Analytics Alert API](/azure/azure-monitor/alerts/api-alerts) to manage log alert rules. Currently workspaces use [ScheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) for new rules. This article describes the benefits and the process of switching legacy log alert rules management from the legacy API to the current API.
+In the past, users used the [legacy Log Analytics Alert API](/azure/azure-monitor/alerts/api-alerts) to manage log search alert rules. Currently, workspaces use the [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) for new rules. This article describes the benefits and the process of switching legacy log search alert rule management from the legacy API to the current API.
## Benefits -- Manage all log rules in one API.
+- Manage all log search alert rules in one API.
- Single template for creation of alert rules (previously needed three separate templates). - Single API for all Azure resources log alerting.-- Support for stateful (preview) and 1-minute log alerts.
+- Support for stateful (preview) and 1-minute log search alerts.
- [PowerShell cmdlets](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell) and [Azure CLI](/azure/azure-monitor/alerts/alerts-log#manage-log-alerts-using-cli) support for switched rules. - Alignment of severities with all other alert types and newer rules. - Ability to create a [cross workspace log alert](/azure/azure-monitor/logs/cross-workspace-query) that spans several external resources like Log Analytics workspaces or Application Insights resources for switched rules. - Users can specify dimensions to split the alerts for switched rules.-- Log alerts have extended period of up to two days of data (previously limited to one day) for switched rules.
+- Log search alerts have an extended period of up to two days of data (previously limited to one day) for switched rules.
## Impact - All switched rules must be created/edited with the current API. See [sample use via Azure Resource Template](/azure/azure-monitor/alerts/alerts-log-create-templates) and [sample use via PowerShell](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell).-- As rules become Azure Resource Manager tracked resources in the current API and must be unique, rules resource ID will change to this structure: `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Display names of the alert rule will remain unchanged.
+- As rules become Azure Resource Manager tracked resources in the current API and must be unique, the resource IDs for the rules change to this structure: `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Display names for the alert rules remain unchanged.
+ ## Process View workspaces to upgrade using this [Azure Resource Graph Explorer query](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/resources%0A%7C%20where%20type%20%3D~%20%22microsoft.insights%2Fscheduledqueryrules%22%0A%7C%20where%20properties.isLegacyLogAnalyticsRule%20%3D%3D%20true%0A%7C%20distinct%20tolower%28properties.scopes%5B0%5D%29). Open the [link](https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/resources%0A%7C%20where%20type%20%3D~%20%22microsoft.insights%2Fscheduledqueryrules%22%0A%7C%20where%20properties.isLegacyLogAnalyticsRule%20%3D%3D%20true%0A%7C%20distinct%20tolower%28properties.scopes%5B0%5D%29), select all available subscriptions, and run the query.
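If you prefer the command line, the same Resource Graph query can also be run with the Azure CLI. This is a sketch only; it assumes the `resource-graph` extension is installed (`az extension add --name resource-graph`):

```azurecli
# Sketch: list the scopes of workspaces that still contain legacy Log Analytics alert rules.
az graph query -q "resources | where type =~ 'microsoft.insights/scheduledqueryrules' | where properties.isLegacyLogAnalyticsRule == true | distinct tolower(properties.scopes[0])"
```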
$switchJSON = '{"scheduledQueryRulesEnabled": true}'
armclient PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview $switchJSON ```
-You can also use [Azure CLI](/cli/azure/reference-index#az-rest) tool:
+You can also use the [Azure CLI](/cli/azure/reference-index#az-rest) tool:
```bash az rest --method put --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview --body "{\"scheduledQueryRulesEnabled\" : true}"
You can also use this API call to check the switch status:
GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview ```
-You can also use [ARMClient](https://github.com/projectkudu/ARMClient) tool:
+You can also use the [ARMClient](https://github.com/projectkudu/ARMClient) tool:
```powershell armclient GET /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview ```
-You can also use [Azure CLI](/cli/azure/reference-index#az-rest) tool:
+You can also use the [Azure CLI](/cli/azure/reference-index#az-rest) tool:
```bash az rest --method get --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview
If the Log Analytics workspace wasn't switched, the response is:
## Next steps -- Learn about the [Azure Monitor - Log Alerts](/azure/azure-monitor/alerts/alerts-types).-- Learn how to [manage your log alerts using the API](/azure/azure-monitor/alerts/alerts-log-create-templates).-- Learn how to [manage log alerts using PowerShell](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell).
+- Learn about the [Azure Monitor log search alerts](/azure/azure-monitor/alerts/alerts-types).
+- Learn how to [manage your log search alerts using the API](/azure/azure-monitor/alerts/alerts-log-create-templates).
+- Learn how to [manage your log search alerts using PowerShell](/azure/azure-monitor/alerts/alerts-manage-alerts-previous-version#manage-log-alerts-by-using-powershell).
- Learn more about the [Azure Alerts experience](/azure/azure-monitor/alerts/alerts-overview).
azure-monitor Alerts Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-query.md
Title: Optimize log alert queries | Microsoft Docs
+ Title: Optimize log search alert queries | Microsoft Docs
description: This article gives recommendations for writing efficient alert queries.
Last updated 5/30/2023
-# Optimize log alert queries
+# Optimize log search alert queries
-This article describes how to write and convert [Log alerts](alerts-types.md#log-alerts) to achieve optimal performance. Optimized queries reduce latency and load of alerts, which run frequently.
+This article describes how to write and convert [log search alerts](alerts-types.md#log-alerts) to achieve optimal performance. Optimized queries reduce latency and load of alerts, which run frequently.
## Start writing an alert log query
Using `limit` and `take` in queries can increase latency and load of alerts beca
[Log queries in Azure Monitor](../logs/log-query-overview.md) start with either a table, [`search`](/azure/kusto/query/searchoperator), or [`union`](/azure/kusto/query/unionoperator) operator.
-Queries for log alert rules should always start with a table to define a clear scope, which improves query performance and the relevance of the results. Queries in alert rules run frequently. Using `search` and `union` can result in excessive overhead that adds latency to the alert because it requires scanning across multiple tables. These operators also reduce the ability of the alerting service to optimize the query.
+Queries for log search alert rules should always start with a table to define a clear scope, which improves query performance and the relevance of the results. Queries in alert rules run frequently. Using `search` and `union` can result in excessive overhead that adds latency to the alert because it requires scanning across multiple tables. These operators also reduce the ability of the alerting service to optimize the query.
-We don't support creating or modifying log alert rules that use `search` or `union` operators, except for cross-resource queries.
+We don't support creating or modifying log search alert rules that use `search` or `union` operators, except for cross-resource queries.
For example, the following alerting query is scoped to the _SecurityEvent_ table and searches for a specific event ID. It's the only table that the query must process.
SecurityEvent
| where EventID == 4624 ```
-Log alert rules using [cross-resource queries](../logs/cross-workspace-query.md) aren't affected by this change because cross-resource queries use a type of `union`, which limits the query scope to specific resources. The following example would be a valid log alert query:
+Log search alert rules using [cross-resource queries](../logs/cross-workspace-query.md) aren't affected by this change because cross-resource queries use a type of `union`, which limits the query scope to specific resources. The following example would be a valid log search alert query:
```Kusto union
workspace('00000000-0000-0000-0000-000000000003').Perf
``` >[!NOTE]
-> [Cross-resource queries](../logs/cross-workspace-query.md) are supported in the new [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). If you still use the [legacy Log Analytics Alert API](./api-alerts.md) for creating log alerts, see [Upgrade legacy rules management to the current Azure Monitor Log Alerts API](./alerts-log-api-switch.md) to learn about switching.
+> [Cross-resource queries](../logs/cross-workspace-query.md) are supported in the new [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules). If you still use the [legacy Log Analytics Alert API](./api-alerts.md) for creating log search alerts, see [Upgrade legacy rules management to the current Azure Monitor Scheduled Query Rules API](./alerts-log-api-switch.md) to learn about switching.
## Examples
The following examples include log queries that use `search` and `union`. They p
### Example 1
-You want to create a log alert rule by using the following query that retrieves performance information using `search`:
+You want to create a log search alert rule by using the following query that retrieves performance information using `search`:
``` Kusto search *
search *
### Example 2
-You want to create a log alert rule by using the following query that retrieves performance information using `search`:
+You want to create a log search alert rule by using the following query that retrieves performance information using `search`:
``` Kusto search ObjectName =="Memory" and CounterName=="% Committed Bytes In Use"
search ObjectName =="Memory" and CounterName=="% Committed Bytes In Use"
### Example 3
-You want to create a log alert rule by using the following query that uses both `search` and `union` to retrieve performance information:
+You want to create a log search alert rule by using the following query that uses both `search` and `union` to retrieve performance information:
``` Kusto search (ObjectName == "Processor" and CounterName == "% Idle Time" and InstanceName == "_Total")
search (ObjectName == "Processor" and CounterName == "% Idle Time" and InstanceN
### Example 4
-You want to create a log alert rule by using the following query that joins the results of two `search` queries:
+You want to create a log search alert rule by using the following query that joins the results of two `search` queries:
```Kusto search Type == 'SecurityEvent' and EventID == '4625'
search Type == 'SecurityEvent' and EventID == '4625'
## Next steps -- Learn about [log alerts](alerts-log.md) in Azure Monitor.
+- Learn about [log search alerts](alerts-log.md) in Azure Monitor.
- Learn about [log queries](../logs/log-query-overview.md).
azure-monitor Alerts Log Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-webhook.md
Title: Sample payloads for Azure Monitor log alerts using webhook actions
-description: This article describes how to configure log alert rules with webhook actions and available customizations.
+ Title: Sample payloads for Azure Monitor log search alerts using webhook actions
+description: This article describes how to configure log search alert rules with webhook actions and available customizations.
Last updated 11/23/2023
-# Sample payloads for log alerts using webhook actions
+# Sample payloads for log search alerts using webhook actions
-You can use webhook actions in a log alert rule to invoke a single HTTP POST request. In this article, we describe the properties that are available when you [configure action groups to use webhooks](./action-groups.md). The service that's called must support webhooks and know how to use the payload it receives.
+You can use webhook actions in a log search alert rule to invoke a single HTTP POST request. In this article, we describe the properties that are available when you [configure action groups to use webhooks](./action-groups.md). The service that's called must support webhooks and know how to use the payload it receives.
We recommend that you use [common alert schema](../alerts/alerts-common-schema.md) for your webhook integrations. The common alert schema provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor.
-For log alert rules that have a custom JSON payload defined, enabling the common alert schema reverts the payload schema to the one described in [Common alert schema](../alerts/alerts-common-schema.md#alert-context-fields-for-log-alerts). If you want to have a custom JSON payload defined, the webhook can't use the common alert schema.
+For log search alert rules that have a custom JSON payload defined, enabling the common alert schema reverts the payload schema to the one described in [Common alert schema](../alerts/alerts-common-schema.md#alert-context-fields-for-log-search-alerts). If you want to have a custom JSON payload defined, the webhook can't use the common alert schema.
Alerts with the common schema enabled have an upper size limit of 256 KB per alert. A bigger alert doesn't include search results. When the search results aren't included, use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results via the Log Analytics API. The sample payloads include examples when the payload is standard and when it's custom.
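For example, one way to fetch those results is with `az rest`, passing the link value delivered in the alert payload and requesting a token for the Log Analytics API. This is a sketch only; the URL is a placeholder for the value in your alert.

```azurecli
# Sketch: retrieve the query results referenced by the alert payload.
# Replace the URL with the LinkToFilteredSearchResultsAPI (or LinkToSearchResultsAPI) value from your alert.
az rest --method get \
  --url "<LinkToFilteredSearchResultsAPI value from the alert payload>" \
  --resource "https://api.loganalytics.io"
```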
-## Log alert for all resources logs (from API version `2021-08-01`)
+## Log search alert for all resources logs (from API version `2021-08-01`)
-The following sample payload is for a standard webhook when it's used for log alerts based on resources logs:
+The following sample payload is for a standard webhook when it's used for log search alerts based on resources logs:
```json {
The following sample payload is for a standard webhook when it's used for log al
} ```
-## Log alert for Log Analytics (up to API version `2018-04-16`)
+## Log search alert for Log Analytics (up to API version `2018-04-16`)
+ The following sample payload is for a standard webhook action that's used for alerts based on Log Analytics: > [!NOTE]
The following sample payload is for a standard webhook action that's used for al
} ```
-## Log alert for Application Insights (up to API version `2018-04-16`)
-The following sample payload is for a standard webhook when it's used for log alerts based on Application Insights resources:
+## Log search alert for Application Insights (up to API version `2018-04-16`)
+
+The following sample payload is for a standard webhook when it's used for log search alerts based on Application Insights resources:
```json {
The following sample payload is for a standard webhook when it's used for log al
} ```
-## Log alert with a custom JSON payload (up to API version `2018-04-16`)
+## Log search alert with a custom JSON payload (up to API version `2018-04-16`)
> [!NOTE] > A custom JSON-based webhook isn't supported from API version `2021-08-01`.
The following table lists default webhook action properties and their custom JSO
| Parameter | Variable | Description | |: |: |: | | `AlertRuleName` |#alertrulename |Name of the alert rule. |
-| `Severity` |#severity |Severity set for the fired log alert. |
+| `Severity` |#severity |Severity set for the fired log search alert. |
| `AlertThresholdOperator` |#thresholdoperator |Threshold operator for the alert rule. | | `AlertThresholdValue` |#thresholdvalue |Threshold value for the alert rule. | | `LinkToSearchResults` |#linktosearchresults |Link to the Analytics portal that returns the records from the query that created the alert. |
The following table lists default webhook action properties and their custom JSO
| `ResultCount` |#searchresultcount |Number of records in the search results. | | `Search Interval End time` |#searchintervalendtimeutc |End time for the query in UTC, with the format mm/dd/yyyy HH:mm:ss AM/PM. | | `Search Interval` |#searchinterval |Time window for the alert rule, with the format HH:mm:ss. |
-| `Search Interval StartTime` |#searchintervalstarttimeutc |Start time for the query in UTC, with the format mm/dd/yyyy HH:mm:ss AM/PM.
+| `Search Interval StartTime` |#searchintervalstarttimeutc |Start time for the query in UTC, with the format mm/dd/yyyy HH:mm:ss AM/PM. |
| `SearchQuery` |#searchquery |Log search query used by the alert rule. | | `SearchResults` |"IncludeSearchResults": true|Records returned by the query as a JSON table, limited to the first 1,000 records. "IncludeSearchResults": true is added in a custom JSON webhook definition as a top-level property. | | `Dimensions` |"IncludeDimensions": true|Dimensions value combinations that triggered that alert as a JSON section. "IncludeDimensions": true is added in a custom JSON webhook definition as a top-level property. |
-| `Alert Type`| #alerttype | The type of log alert rule configured as [Metric measurement or Number of results](./alerts-unified-log.md#measure).|
+| `Alert Type`| #alerttype | The type of log search alert rule configured as [Metric measurement or Number of results](./alerts-types.md#log-alerts).|
| `WorkspaceID` |#workspaceid |ID of your Log Analytics workspace. | | `Application ID` |#applicationid |ID of your Application Insights app. | | `Subscription ID` |#subscriptionid |ID of your Azure subscription used. |
For example, to create a custom payload that includes only the alert name and th
} ```
-The following sample payload is for a custom webhook action for any log alert:
+The following sample payload is for a custom webhook action for any log search alert:
```json {
The following sample payload is for a custom webhook action for any log alert:
``` ## Next steps+ - Learn about [Azure Monitor alerts](./alerts-overview.md). - Create and manage [action groups in Azure](./action-groups.md). - Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Alerts Manage Alert Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md
Manage your alert rules in the Azure portal, or using the CLI or PowerShell.
1. To edit an alert rule, select **Edit**, and then edit any of the fields in the following sections. You can't edit the **Alert Rule Name** or the **Signal type** of an existing alert rule. - **Scope**. You can edit the scope for all alert rules **other than**:
- - Log alert rules
+ - Log search alert rules
- Metric alert rules that monitor a custom metric - Smart detection alert rules
- - **Condition**. Learn more about conditions for [metric alert rules](./alerts-create-new-alert-rule.md?tabs=metric#tabpanel_1_metric), [log alert rules](./alerts-create-new-alert-rule.md?tabs=log#tabpanel_1_log), and [activity log alert rules](./alerts-create-new-alert-rule.md?tabs=activity-log#tabpanel_1_activity-log)
+ - **Condition**. Learn more about conditions for [metric alert rules](./alerts-create-new-alert-rule.md?tabs=metric#tabpanel_1_metric), [log search alert rules](./alerts-create-new-alert-rule.md?tabs=log#tabpanel_1_log), and [activity log alert rules](./alerts-create-new-alert-rule.md?tabs=activity-log#tabpanel_1_activity-log)
- **Actions** - **Alert rule details** 1. Select **Save** on the top command bar. > [!NOTE]
-> This section describes how to manage alert rules created in the latest UI or using an API version later than `2018-04-16`. See [View and manage log alert rules created in previous versions](alerts-manage-alerts-previous-version.md) for information about how to view and manage log alert rules created in the previous UI.
+> This section describes how to manage alert rules created in the latest UI or using an API version later than `2018-04-16`. See [View and manage log search alert rules created in previous versions](alerts-manage-alerts-previous-version.md) for information about how to view and manage log search alert rules created in the previous UI.
## Enable recommended alert rules in the Azure portal
Metric alert rules have these dedicated PowerShell cmdlets:
- [Update](/rest/api/monitor/metricalerts/update): Update a metric alert rule. - [Delete](/rest/api/monitor/metricalerts/delete): Delete a metric alert rule.
-## Manage log alert rules using the CLI
+## Manage log search alert rules using the CLI
-This section describes how to manage log alerts using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The following examples use [Azure Cloud Shell](../../cloud-shell/overview.md).
+This section describes how to manage log search alerts using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The following examples use [Azure Cloud Shell](../../cloud-shell/overview.md).
> [!NOTE] > Azure CLI support is only available for the scheduledQueryRules API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described below. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you'll need to switch to the current API before you can use the CLI. [Learn more about switching](./alerts-log-api-switch.md). - 1. In the [portal](https://portal.azure.com/), select **Cloud Shell**. 1. Use these options of the `az monitor scheduled-query` CLI command in this table:- |What you want to do|CLI command | |||
This section describes how to manage log alerts using the cross-platform [Azure
|Delete a log alert rule|`az monitor scheduled-query delete -g {ResourceGroup} -n {AlertRuleName}`| |Learn more about the command|`az monitor scheduled-query --help`|
-### Manage log alert rules using the Azure Resource Manager CLI with [templates](./alerts-log-create-templates.md)
+### Manage log search alert rules using the Azure Resource Manager CLI with [templates](./alerts-log-create-templates.md)
```azurecli az login
az deployment group create \
A 201 response is returned on successful creation. 200 is returned on successful updates.
-## Manage log alert rules with PowerShell
+## Manage log search alert rules with PowerShell
+
+Log search alert rules have this dedicated PowerShell cmdlet:
+- [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule): Creates a new log search alert rule or updates an existing log search alert rule.
-Log alert rules have this dedicated PowerShell cmdlet:
-- [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule): Creates a new log alert rule or updates an existing log alert rule. ## Manage activity log alert rules using PowerShell Activity log alerts have these dedicated PowerShell cmdlets:
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
Title: View and manage log alert rules created in previous versions| Microsoft Docs
-description: Use the Azure Monitor portal to manage log alert rules created in earlier versions.
+ Title: View and manage log search alert rules created in previous versions| Microsoft Docs
+description: Use the Azure Monitor portal to manage log search alert rules created in earlier versions.
Last updated 06/20/2023
# Manage alert rules created in previous versions
-This article describes the process of managing alert rules created in the previous UI or by using API version `2018-04-16` or earlier. Alert rules created in the latest UI are viewed and managed in the new UI, as described in [Create, view, and manage log alerts by using Azure Monitor](alerts-log.md).
+This article describes the process of managing alert rules created in the previous UI or by using API version `2018-04-16` or earlier. Alert rules created in the latest UI are viewed and managed in the new UI, as described in [Create, view, and manage log search alerts by using Azure Monitor](alerts-log.md).
-## Changes to the log alert rule creation experience
+## Changes to the log search alert rule creation experience
The current alert rule wizard is different from the earlier experience:
The current alert rule wizard is different from the earlier experience:
1. Edit the alert rule conditions by using these sections: - **Search query**: In this section, you can modify your query.
- - **Alert logic**: Log alerts can be based on two types of [measures](./alerts-unified-log.md#measure):
+ - **Alert logic**: Log search alerts can be based on two types of [measures](./alerts-types.md#log-alerts):
1. **Number of results**: Count of records returned by the query. 1. **Metric measurement**: **Aggregate value** is calculated by using `summarize` grouped by the expressions chosen and the [bin()](/azure/data-explorer/kusto/query/binfunction) selection. For example: ```Kusto
The current alert rule wizard is different from the earlier experience:
or SeverityLevel== "err" // SeverityLevel is used in Syslog (Linux) records | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m) ```
- For metric measurements alert logic, you can specify how to [split the alerts by dimensions](./alerts-unified-log.md#split-by-alert-dimensions) by using the **Aggregate on** option. The row grouping expression must be unique and sorted.
+ For metric measurements alert logic, you can specify how to [split the alerts by dimensions](./alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions) by using the **Aggregate on** option. The row grouping expression must be unique and sorted.
The [bin()](/azure/data-explorer/kusto/query/binfunction) function can result in uneven time intervals, so the alert service automatically converts the [bin()](/azure/data-explorer/kusto/query/binfunction) function to a [binat()](/azure/data-explorer/kusto/query/binatfunction) function with appropriate time at runtime to ensure results with a fixed point.
The current alert rule wizard is different from the earlier experience:
:::image type="content" source="media/alerts-log/aggregate-on.png" lightbox="media/alerts-log/aggregate-on.png" alt-text="Screenshot that shows Aggregate on.":::
- - **Period**: Choose the time range over which to assess the specified condition by using the [Period](./alerts-unified-log.md#query-time-range) option.
+ - **Period**: Choose the time range over which to assess the specified condition by using the [Period](./alerts-types.md) option.
1. When you're finished editing the conditions, select **Done**.
-1. Use the preview data to set the [Operator, Threshold value](./alerts-unified-log.md#threshold-and-operator), and [Frequency](./alerts-unified-log.md#frequency).
-1. Set the [number of violations to trigger an alert](./alerts-unified-log.md#number-of-violations-to-trigger-alert) by using **Total** or **Consecutive breaches**.
+1. Use the preview data to set the [Operator, Threshold value](./alerts-types.md), and [Frequency](./alerts-types.md).
+1. Set the [number of violations to trigger an alert](./alerts-types.md) by using **Total** or **Consecutive breaches**.
1. Select **Done**. 1. You can edit the rule **Description** and **Severity**. These details are used in all alert actions. You can also choose to not activate the alert rule on creation by selecting **Enable rule upon creation**.
-1. Use the [Suppress Alerts](./alerts-unified-log.md#state-and-resolving-alerts) option if you want to suppress rule actions for a specified time after an alert is fired. The rule will still run and create alerts, but actions won't be triggered to prevent noise. The **Mute actions** value must be greater than the frequency of the alert to be effective.
+1. Use the [Suppress Alerts](./alerts-processing-rules.md) option if you want to suppress rule actions for a specified time after an alert is fired. The rule will still run and create alerts, but actions won't be triggered to prevent noise. The **Mute actions** value must be greater than the frequency of the alert to be effective.
<!-- convertborder later --> :::image type="content" source="media/alerts-log/AlertsPreviewSuppress.png" lightbox="media/alerts-log/AlertsPreviewSuppress.png" alt-text="Screenshot that shows the Alert Details pane." border="false"::: 1. To make alerts stateful, select **Automatically resolve alerts (preview)**. 1. Specify if the alert rule should trigger one or more [action groups](./action-groups.md) when the alert condition is met. For limits on the actions that can be performed, see [Azure Monitor service limits](../../azure-monitor/service-limits.md).
-1. (Optional) Customize actions in log alert rules:
+1. (Optional) Customize actions in log search alert rules:
- **Custom email subject**: Overrides the *email subject* of email actions. You can't modify the body of the mail and this field *isn't for email addresses*.
- - **Include custom Json payload for webhook**: Overrides the webhook JSON used by action groups, assuming that the action group contains a webhook action. Learn more about [webhook actions for log alerts](./alerts-log-webhook.md).
+ - **Include custom Json payload for webhook**: Overrides the webhook JSON used by action groups, assuming that the action group contains a webhook action. Learn more about [webhook actions for log search alerts](./alerts-log-webhook.md).
<!-- convertborder later -->
- :::image type="content" source="media/alerts-log/AlertsPreviewOverrideLog.png" lightbox="media/alerts-log/AlertsPreviewOverrideLog.png" alt-text="Screenshot that shows Action overrides for log alerts." border="false":::
+ :::image type="content" source="media/alerts-log/AlertsPreviewOverrideLog.png" lightbox="media/alerts-log/AlertsPreviewOverrideLog.png" alt-text="Screenshot that shows Action overrides for log search alerts." border="false":::
1. After you've finished editing all the alert rule options, select **Save**.
-## Manage log alerts using PowerShell
+## Manage log search alerts using PowerShell
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)] Use the following PowerShell cmdlets to manage rules with the [Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules): -- [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule): PowerShell cmdlet to create a new log alert rule.-- [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule): PowerShell cmdlet to update an existing log alert rule.-- [New-AzScheduledQueryRuleSource](/powershell/module/az.monitor/new-azscheduledqueryrulesource): PowerShell cmdlet to create or update the object that specifies source parameters for a log alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.-- [New-AzScheduledQueryRuleSchedule](/powershell/module/az.monitor/new-azscheduledqueryruleschedule): PowerShell cmdlet to create or update the object that specifies schedule parameters for a log alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.-- [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction): PowerShell cmdlet to create or update the object that specifies action parameters for a log alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.-- [New-AzScheduledQueryRuleAznsActionGroup](/powershell/module/az.monitor/new-azscheduledqueryruleaznsactiongroup): PowerShell cmdlet to create or update the object that specifies action group parameters for a log alert. Used as input by the [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.-- [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition): PowerShell cmdlet to create or update the object that specifies trigger condition parameters for a log alert. Used as input by the [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.-- [New-AzScheduledQueryRuleLogMetricTrigger](/powershell/module/az.monitor/new-azscheduledqueryrulelogmetrictrigger): PowerShell cmdlet to create or update the object that specifies metric trigger condition parameters for a metric measurement log alert. Used as input by the [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition) cmdlet.-- [Get-AzScheduledQueryRule](/powershell/module/az.monitor/get-azscheduledqueryrule): PowerShell cmdlet to list existing log alert rules or a specific log alert rule.-- [Update-AzScheduledQueryRule](/powershell/module/az.monitor/update-azscheduledqueryrule): PowerShell cmdlet to enable or disable a log alert rule.-- [Remove-AzScheduledQueryRule](/powershell/module/az.monitor/remove-azscheduledqueryrule): PowerShell cmdlet to delete an existing log alert rule.
+- [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule): PowerShell cmdlet to create a new log search alert rule.
+- [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule): PowerShell cmdlet to update an existing log search alert rule.
+- [New-AzScheduledQueryRuleSource](/powershell/module/az.monitor/new-azscheduledqueryrulesource): PowerShell cmdlet to create or update the object that specifies source parameters for a log search alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.
+- [New-AzScheduledQueryRuleSchedule](/powershell/module/az.monitor/new-azscheduledqueryruleschedule): PowerShell cmdlet to create or update the object that specifies schedule parameters for a log search alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.
+- [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction): PowerShell cmdlet to create or update the object that specifies action parameters for a log search alert. Used as input by the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) and [Set-AzScheduledQueryRule](/powershell/module/az.monitor/set-azscheduledqueryrule) cmdlets.
+- [New-AzScheduledQueryRuleAznsActionGroup](/powershell/module/az.monitor/new-azscheduledqueryruleaznsactiongroup): PowerShell cmdlet to create or update the object that specifies action group parameters for a log search alert. Used as input by the [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.
+- [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition): PowerShell cmdlet to create or update the object that specifies trigger condition parameters for a log search alert. Used as input by the [New-AzScheduledQueryRuleAlertingAction](/powershell/module/az.monitor/new-azscheduledqueryrulealertingaction) cmdlet.
+- [New-AzScheduledQueryRuleLogMetricTrigger](/powershell/module/az.monitor/new-azscheduledqueryrulelogmetrictrigger): PowerShell cmdlet to create or update the object that specifies metric trigger condition parameters for a metric measurement log search alert. Used as input by the [New-AzScheduledQueryRuleTriggerCondition](/powershell/module/az.monitor/new-azscheduledqueryruletriggercondition) cmdlet.
+- [Get-AzScheduledQueryRule](/powershell/module/az.monitor/get-azscheduledqueryrule): PowerShell cmdlet to list existing log search alert rules or a specific log search alert rule.
+- [Update-AzScheduledQueryRule](/powershell/module/az.monitor/update-azscheduledqueryrule): PowerShell cmdlet to enable or disable a log search alert rule.
+- [Remove-AzScheduledQueryRule](/powershell/module/az.monitor/remove-azscheduledqueryrule): PowerShell cmdlet to delete an existing log search alert rule.
> [!NOTE]
-> The `ScheduledQueryRules` PowerShell cmdlets can only manage rules created in [this version of the Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). Log alert rules created by using the legacy [Log Analytics Alert API](./api-alerts.md) can only be managed by using PowerShell after you [switch to the Scheduled Query Rules API](./alerts-log-api-switch.md).
+> The `ScheduledQueryRules` PowerShell cmdlets can only manage rules created in [this version of the Scheduled Query Rules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). Log search alert rules created by using the legacy [Log Analytics Alert API](./api-alerts.md) can only be managed by using PowerShell after you [switch to the Scheduled Query Rules API](./alerts-log-api-switch.md).
-Example steps for creating a log alert rule by using PowerShell:
+Example steps for creating a log search alert rule by using PowerShell:
```powershell $source = New-AzScheduledQueryRuleSource -Query 'Heartbeat | summarize AggregatedValue = count() by bin(TimeGenerated, 5m), _ResourceId' -DataSourceId "/subscriptions/a123d7efg-123c-1234-5678-a12bc3defgh4/resourceGroups/contosoRG/providers/microsoft.OperationalInsights/workspaces/servicews"
$alertingAction = New-AzScheduledQueryRuleAlertingAction -AznsAction $aznsAction
New-AzScheduledQueryRule -ResourceGroupName "contosoRG" -Location "Region Name for your Application Insights App or Log Analytics Workspace" -Action $alertingAction -Enabled $true -Description "Alert description" -Schedule $schedule -Source $source -Name "Alert Name" ```
-Example steps for creating a log alert rule by using PowerShell with cross-resource queries:
+Example steps for creating a log search alert rule by using PowerShell with cross-resource queries:
```powershell $authorized = @ ("/subscriptions/a123d7efg-123c-1234-5678-a12bc3defgh4/resourceGroups/contosoRG/providers/microsoft.OperationalInsights/workspaces/servicewsCrossExample", "/subscriptions/a123d7efg-123c-1234-5678-a12bc3defgh4/resourceGroups/contosoRG/providers/microsoft.insights/components/serviceAppInsights")
$alertingAction = New-AzScheduledQueryRuleAlertingAction -AznsAction $aznsAction
New-AzScheduledQueryRule -ResourceGroupName "contosoRG" -Location "Region Name for your Application Insights App or Log Analytics Workspace" -Action $alertingAction -Enabled $true -Description "Alert description" -Schedule $schedule -Source $source -Name "Alert Name" ```
-You can also create the log alert by using [a template and parameters](./alerts-log-create-templates.md) files using PowerShell:
+You can also create the log search alert by using [a template and parameters](./alerts-log-create-templates.md) files using PowerShell:
```powershell Connect-AzAccount
New-AzResourceGroupDeployment -Name AlertDeployment -ResourceGroupName ResourceG
## Next steps
-* Learn about [log alerts](./alerts-unified-log.md).
-* Create log alerts by using [Azure Resource Manager templates](./alerts-log-create-templates.md).
-* Understand [webhook actions for log alerts](./alerts-log-webhook.md).
+* Learn about [log search alerts](./alerts-types.md#log-alerts).
+* Create log search alerts by using [Azure Resource Manager templates](./alerts-log-create-templates.md).
+* Understand [webhook actions for log search alerts](./alerts-log-webhook.md).
* Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Alerts Metric Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-logs.md
The supported Log Analytics logs are the following:
- [Update management](../../automation/update-management/overview.md) records - [Event data](./../agents/data-sources-windows-events.md) logs
-There are many benefits for using **Metric Alerts for Logs** over query based [Log Alerts](./alerts-log.md) in Azure; some of them are listed below:
+There are many benefits to using **Metric Alerts for Logs** over query-based [log search alerts](./alerts-log.md) in Azure; some of them are listed below:
- Metric Alerts offer near-real time monitoring capability and Metric Alerts for Logs forks data from the log source to ensure the same.-- Metric Alerts are stateful - only notifying once when alert is fired and once when alert is resolved; as opposed to Log alerts, which are stateless and keep firing at every interval if the alert condition is met.
+- Metric Alerts are stateful - only notifying once when an alert is fired and once when the alert is resolved; as opposed to log search alerts, which are stateless and keep firing at every interval if the alert condition is met.
- Metric Alerts for Logs provide multiple dimensions, which makes filtering to specific values like Computers and OS Type simpler, without the need to define a complex query in Log Analytics. > [!NOTE]
az deployment group create --resource-group myRG --template-file metricfromLogsA
## Next steps - Learn more about the [metric alerts](../alerts/alerts-metric.md).-- Learn about [log alerts in Azure](./alerts-unified-log.md).
+- Learn about [log search alerts in Azure](./alerts-types.md#log-alerts).
- Learn about [alerts in Azure](./alerts-overview.md).
azure-monitor Alerts Non Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-non-common-schema-definitions.md
# Noncommon alert schema definitions
-The noncommon alert schema were historically used to customize alert email templates and webhook schemas for metric, log, and activity log alert rules. We recommend using the [common schema](./alerts-common-schema.md) for all alert types and integrations.
+The noncommon alert schemas were historically used to customize alert email templates and webhook schemas for metric, log search, and activity log alert rules. We recommend using the [common schema](./alerts-common-schema.md) for all alert types and integrations.
This article describes the noncommon alert schema definitions for Azure Monitor, including definitions for: - Webhooks
See sample values for metric alerts.
} ```
-## Log alerts
+## Log search alerts
-See sample values for log alerts.
+See sample values for log search alerts.
### monitoringService = Log Alerts V1 – Metric
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
This table provides a brief description of each alert type. For more information
|Alert type|Description| |:|:| |[Metric alerts](alerts-types.md#metric-alerts)|Metric alerts evaluate resource metrics at regular intervals. Metrics can be platform metrics, custom metrics, logs from Azure Monitor converted to metrics, or Application Insights metrics. Metric alerts can also apply multiple conditions and dynamic thresholds.|
-|[Log alerts](alerts-types.md#log-alerts)|Log alerts allow users to use a Log Analytics query to evaluate resource logs at a predefined frequency.|
+|[Log search alerts](alerts-types.md#log-alerts)|Log search alerts allow users to use a Log Analytics query to evaluate resource logs at a predefined frequency.|
|[Activity log alerts](alerts-types.md#activity-log-alerts)|Activity log alerts are triggered when a new activity log event occurs that matches defined conditions. Resource Health alerts and Service Health alerts are activity log alerts that report on your service and resource health.| |[Smart detection alerts](alerts-types.md#smart-detection-alerts)|Smart detection on an Application Insights resource automatically warns you of potential performance problems and failure anomalies in your web application. You can migrate smart detection on your Application Insights resource to create alert rules for the different smart detection modules.| |[Prometheus alerts](alerts-types.md#prometheus-alerts)|Prometheus alerts are used for alerting on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). The alert rules are based on the PromQL open-source query language.|
The alert condition for stateful alerts is `fired`, until it is considered resol
For stateful alerts, while the alert itself is deleted after 30 days, the alert condition is stored until the alert is resolved, to prevent firing another alert, and so that notifications can be sent when the alert is resolved.
-Stateful log alerts have limitations - details [here](/azure/azure-monitor/service-limits#alerts).
+Stateful log search alerts have limitations - details [here](/azure/azure-monitor/service-limits#alerts).
This table describes when a stateful alert is considered resolved: |Alert type |The alert is resolved when | ||| |Metric alerts|The alert condition isn't met for three consecutive checks.|
-|Log alerts| The alert condition isn't met for a specific time range. The time range differs based on the frequency of the alert:<ul> <li>**1 minute**: The alert condition isn't met for 10 minutes.</li> <li>**5 to 15 minutes**: The alert condition isn't met for three frequency periods.</li> <li>**15 minutes to 11 hours**: The alert condition isn't met for two frequency periods.</li> <li>**11 to 12 hours**: The alert condition isn't met for one frequency period.</li></ul>|
+|Log search alerts| The alert condition isn't met for a specific time range. The time range differs based on the frequency of the alert:<ul> <li>**1 minute**: The alert condition isn't met for 10 minutes.</li> <li>**5 to 15 minutes**: The alert condition isn't met for three frequency periods.</li> <li>**15 minutes to 11 hours**: The alert condition isn't met for two frequency periods.</li> <li>**11 to 12 hours**: The alert condition isn't met for one frequency period.</li></ul>|
## Recommended alert rules
For metric alert rules for Azure services that don't support multiple resources,
Each metric alert rule is charged based on the number of time series that are monitored.
-### Log alerts
+### Log search alerts
-Use [log alert rules](alerts-create-log-alert-rule.md) to monitor all resources that send data to the Log Analytics workspace. These resources can be from any subscription or region. Use data collection rules when setting up your Log Analytics workspace to collect the required data for your log alerts rule.
+Use [log search alert rules](alerts-create-log-alert-rule.md) to monitor all resources that send data to the Log Analytics workspace. These resources can be from any subscription or region. Use data collection rules when setting up your Log Analytics workspace to collect the required data for your log search alert rule.
You can also create resource-centric alerts instead of workspace-centric alerts by using **Split by dimensions**. When you split on the resourceId column, you will get one alert per resource that meets the condition.
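As an illustration of splitting on the resource ID, a query shaped like the following aggregates per resource so that the rule can fire one alert per resource. It's shown here in a PowerShell string to match the other examples in these articles; the Perf table, counter names, and bin size are common examples rather than requirements:

```powershell
# Illustrative query for a rule that uses "Split by dimensions" on _ResourceId.
# Table, counters, and bin size are examples only.
$splitByResourceQuery = @"
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), _ResourceId
"@
```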
-Log alert rules that use splitting by dimensions are charged based on the number of time series created by the dimensions resulting from your query. If the data is already collected to a Log Analytics workspace, there is no additional cost.
+Log search alert rules that use splitting by dimensions are charged based on the number of time series created by the dimensions resulting from your query. If the data is already collected to a Log Analytics workspace, there is no additional cost.
If you use metric data at scale in the Log Analytics workspace, pricing will change based on the data ingestion.
azure-monitor Alerts Payload Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-payload-samples.md
# Sample alert payloads
-The common alert schema standardizes the consumption experience for alert notifications in Azure. Historically, activity log, metric, and log alerts each had their own email templates and webhook schemas. The common alert schema provides one standardized schema for all alert notifications.
+The common alert schema standardizes the consumption experience for alert notifications in Azure. Historically, activity log, metric, and log search alerts each had their own email templates and webhook schemas. The common alert schema provides one standardized schema for all alert notifications.
A standardized schema can help you minimize the number of integrations, which simplifies the process of managing and maintaining your integrations.
The following are sample metric alert payloads.
} ```
-## Sample log alerts
+## Sample log search alerts
> [!NOTE]
-> When you enable the common schema, the fields in the payload are reset to the common schema fields. Therefore, log alerts have these limitations regarding the common schema:
-> - The common schema is not supported for log alerts using webhooks with a custom email subject and/or JSON payload, since the common schema overwrites the custom configurations.
-> - Alerts using the common schema have an upper size limit of 256 KB per alert. If the log alerts payload includes search results that cause the alert to exceed the maximum size, the search results aren't embedded in the log alerts payload. You can check if the payload includes the search results with the `IncludedSearchResults` flag. Use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get) if the search results are not included.
+> When you enable the common schema, the fields in the payload are reset to the common schema fields. Therefore, log search alerts have these limitations regarding the common schema:
+> - The common schema is not supported for log search alerts using webhooks with a custom email subject and/or JSON payload, since the common schema overwrites the custom configurations.
+> - Alerts using the common schema have an upper size limit of 256 KB per alert. If the log search alerts payload includes search results that cause the alert to exceed the maximum size, the search results aren't embedded in the log search alerts payload. You can check if the payload includes the search results with the `IncludedSearchResults` flag. Use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get) if the search results are not included.
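When the search results aren't embedded, one way to retrieve them is to call the link from the payload yourself. The following is only a sketch: it assumes the Az.Accounts module, that `$alert` already holds the parsed common-schema payload, and that the link lives under `condition.allOf[0]` as it does in typical log search alert payloads:

```powershell
# Retrieve the query results that weren't embedded in the alert payload.
# $alert is assumed to be the parsed common-schema payload of a fired alert.
$link  = $alert.data.alertContext.condition.allOf[0].linkToFilteredSearchResultsAPI
$token = (Get-AzAccessToken -ResourceUrl "https://api.loganalytics.io").Token

$results = Invoke-RestMethod -Uri $link -Headers @{ Authorization = "Bearer $token" }
$results.tables[0].rows | Select-Object -First 10
```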
-### Log alert with monitoringService = Platform
+### Log search alert with monitoringService = Platform
```json {
The following are sample metric alert payloads.
} } ```
-### Log alert with monitoringService = Application Insights
+### Log search alert with monitoringService = Application Insights
```json {
The following are sample metric alert payloads.
} ```
-### Log alert with monitoringService = Log Alerts V2
+### Log search alert with monitoringService = Log Alerts V2
> [!NOTE]
-> Log alert rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when you use this version. Use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload.
+> Log search alert rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log search alerts payload when you use this version. Use [dimensions](./alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) to provide context to fired alerts. You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload.
```json {
The following are sample metric alert payloads.
} ```
-### Sample test action log alerts
+### Sample test action log search alerts
-#### Test action log alert V1 – Metric
+#### Test action log search alert V1 – Metric
```json {
The following are sample metric alert payloads.
} ```
-#### Test action log alert V1 - Numresults
+#### Test action log search alert V1 - Numresults
```json {
The following are sample metric alert payloads.
} ```
-#### Test action log alert V2
+#### Test action log search alert V2
> [!NOTE]
-> Log alerts rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log alerts payload when you use this version. Use [dimensions](./alerts-unified-log.md#split-by-alert-dimensions) to provide context to fired alerts.
+> Log search alerts rules from API version 2020-05-01 use this payload type, which only supports common schema. Search results aren't embedded in the log search alerts payload when you use this version. Use [dimensions](./alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) to provide context to fired alerts.
You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results with the [Log Analytics API](/rest/api/loganalytics/dataaccess/query/get). If you must embed the results, use a logic app with the provided links to generate a custom payload.
azure-monitor Alerts Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-plan.md
Title: 'Plan your Alerts and automated actions'
+ Title: 'Plan your alerts and automated actions'
description: Recommendations for deployment of Azure Monitor alerts and automated actions.
Alerts in Azure Monitor are created by alert rules that you must create. For gui
Multiple types of alert rules are defined by the type of data they use. Each has different capabilities and a different cost. The basic strategy is to use the alert rule type with the lowest cost that provides the logic you require. - Activity log rules. Creates an alert in response to a new activity log event that matches specified conditions. There's no cost to these alerts so they should be your first choice, although the conditions they can detect are limited. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating an activity log alert.-- Metric alert rules. Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful, which means that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There's a cost to metric alerts, but it's often much less than log alerts. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a metric alert.-- Log alert rules. Creates an alert when the results of a schedule query match specified criteria. They're the most expensive of the alert rules, but they allow the most complex criteria. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a log query alert.
+- Metric alert rules. Creates an alert in response to one or more metric values exceeding a threshold. Metric alerts are stateful, which means that the alert will automatically close when the value drops below the threshold, and it will only send out notifications when the state changes. There's a cost to metric alerts, but it's often much less than log search alerts. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a metric alert.
+- Log search alert rules. Creates an alert when the results of a scheduled query match specified criteria. They're the most expensive of the alert rules, but they allow the most complex criteria. See [Create or edit an alert rule](alerts-create-new-alert-rule.md) for information on creating a log search query alert.
- [Application alerts](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). Performs proactive performance and availability testing of your web application. You can perform a ping test at no cost, but there's a cost to more complex testing. See [Monitor the availability of any website](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) for a description of the different tests and information on creating them. ## Alert severity
You want to create alerts for any important information in your environment. But
- See [Successful alerting strategy](/azure/cloud-adoption-framework/manage/monitor/alerting#successful-alerting-strategy) to determine whether a symptom is an appropriate candidate for alerting. - Use the **Automatically resolve alerts** option in metric alert rules to resolve alerts when the condition has been corrected.-- Use the **Suppress alerts** option in log query alert rules to avoid creating multiple alerts for the same issue.
+- Use the **Suppress alerts** option in log search alert rules to avoid creating multiple alerts for the same issue.
- Ensure that you use appropriate severity levels for alert rules so that high-priority issues can be analyzed together. - Limit notifications for alerts with a severity of Warning or less because they don't require immediate attention.
Typically, you'll want to alert on issues for all your critical Azure applicatio
- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Supported resources for metric alerts in Azure Monitor](alerts-metric-near-real-time.md). - For metric alert rules for Azure services that don't support multiple resources, use automation tools such as the Azure CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. For samples, see [Resource Manager template samples for metric alert rules in Azure Monitor](resource-manager-alerts-metric.md).-- To return data for multiple resources, write queries in log query alert rules. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource.
+- To return data for multiple resources, write queries in log search alert rules. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource.
> [!NOTE]
-> Resource-centric log query alert rules currently in public preview allow you to use all resources in a subscription or resource group as a target for a log query alert.
+> Resource-centric log search alert rules currently in public preview allow you to use all resources in a subscription or resource group as a target for a log search alert.
## Next steps
azure-monitor Alerts Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-resource-move.md
Navigate to Alerts > Alert processing rules (preview) > filter by the containing
## Next steps
-Learn about fixing other problems with [alert notifications](alerts-troubleshoot.md), [metric alerts](alerts-troubleshoot-metric.md), and [log alerts](alerts-troubleshoot-log.md).
+Learn about fixing other problems with [alert notifications](alerts-troubleshoot.md), [metric alerts](alerts-troubleshoot-metric.md), and [log search alerts](alerts-troubleshoot-log.md).
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
Title: Troubleshoot log alerts in Azure Monitor | Microsoft Docs
-description: Common issues, errors, and resolutions for log alert rules in Azure.
+ Title: Troubleshoot log search alerts in Azure Monitor | Microsoft Docs
+description: Common issues, errors, and resolutions for log search alert rules in Azure.
Last updated 06/20/2023
-# Troubleshoot log alerts in Azure Monitor
+# Troubleshoot log search alerts in Azure Monitor
-This article describes how to resolve common issues with log alerts in Azure Monitor. It also provides solutions to common problems with the functionality and configuration of log alerts.
+This article describes how to resolve common issues with log search alerts in Azure Monitor. It also provides solutions to common problems with the functionality and configuration of log search alerts.
-You can use log alerts to evaluate resources logs every set frequency by using a [Log Analytics](../logs/log-analytics-tutorial.md) query, and fire an alert that's based on the results. Rules can trigger one or more actions using [Action Groups](./action-groups.md). To learn more about functionality and terminology of log alerts, see [Log alerts in Azure Monitor](alerts-unified-log.md).
+You can use log search alerts to evaluate resource logs at a set frequency by using a [Log Analytics](../logs/log-analytics-tutorial.md) query, and fire an alert that's based on the results. Rules can trigger one or more actions using [Action Groups](./action-groups.md). To learn more about the functionality and terminology of log search alerts, see [Log search alerts in Azure Monitor](alerts-types.md#log-alerts).
> [!NOTE] > This article doesn't consider cases where the Azure portal shows that an alert rule was triggered but a notification isn't received. For such cases, see [Action or notification on my alert did not work as expected](./alerts-troubleshoot.md#action-or-notification-on-my-alert-did-not-work-as-expected).
-## Log alert didn't fire
+## Log search alert didn't fire
### Data ingestion time for logs
To mitigate latency, the system retries the alert evaluation multiple times. Aft
### Actions are muted or alert rule is defined to resolve automatically
-Log alerts provide an option to mute fired alert actions for a set amount of time using **Mute actions** and to only fire once per condition being met using **Automatically resolve alerts**.
+Log search alerts provide an option to mute fired alert actions for a set amount of time using **Mute actions** and to only fire once per condition being met using **Automatically resolve alerts**.
A common issue is that you think that the alert didn't fire, but it was actually the rule configuration.
A common issue is that you think that the alert didn't fire, but it was actually
### Alert scope resource has been moved, renamed, or deleted
-When you author an alert rule, Log Analytics creates a permission snapshot for your user ID. This snapshot is saved in the rule and contains the rule scope resource, Azure Resource Manager ID. If the rule scope resource moves, gets renamed, or is deleted, all log alert rules that refer to that resource will break. To work correctly, alert rules need to be recreated using the new Azure Resource Manager ID.
+When you author an alert rule, Log Analytics creates a permission snapshot for your user ID. This snapshot is saved in the rule and contains the rule scope resource's Azure Resource Manager ID. If the rule scope resource moves, gets renamed, or is deleted, all log search alert rules that refer to that resource will break. To work correctly, alert rules need to be recreated using the new Azure Resource Manager ID.
### The alert rule uses a system-assigned managed identity
-When you create a log alert rule with system-assigned managed identity, the identity is created without any permissions. After you create the rule, you need to assign the appropriate roles to the rule's identity so that it can access the data you want to query. For example, you might need to give it a Reader role for the relevant Log Analytics workspaces, or a Reader role and a Database Viewer role for the relevant ADX cluster. See [managed identities](/azure/azure-monitor/alerts/alerts-create-log-alert-rule#configure-the-alert-rule-details) for more information about using managed identities in log alerts.
+When you create a log search alert rule with system-assigned managed identity, the identity is created without any permissions. After you create the rule, you need to assign the appropriate roles to the rule's identity so that it can access the data you want to query. For example, you might need to give it a Reader role for the relevant Log Analytics workspaces, or a Reader role and a Database Viewer role for the relevant ADX cluster. See [managed identities](/azure/azure-monitor/alerts/alerts-create-log-alert-rule#configure-the-alert-rule-details) for more information about using managed identities in log search alerts.
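For example, granting the rule's identity read access to a workspace could look like the following sketch; the principal ID and workspace resource ID are placeholders you would take from your own rule and workspace:

```powershell
# Grant the alert rule's system-assigned identity the Reader role on the target workspace.
# Both values below are placeholders.
$principalId = "<object-id-of-the-rule-identity>"
$workspaceId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Reader" -Scope $workspaceId
```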
### Metric measurement alert rule with splitting using the legacy Log Analytics API
-[Metric measurement](alerts-unified-log.md#calculation-of-a-value) is a type of log alert that's based on summarized time series results. You can use these rules to group by columns to [split alerts](alerts-unified-log.md#split-by-alert-dimensions). If you're using the legacy Log Analytics API, splitting doesn't work as expected because it doesn't support grouping.
+[Metric measurement](alerts-types.md#log-alerts) is a type of log search alert that's based on summarized time series results. You can use these rules to group by columns to [split alerts](alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1). If you're using the legacy Log Analytics API, splitting doesn't work as expected because it doesn't support grouping.
-You can use the current ScheduledQueryRules API to set **Aggregate On** in [Metric measurement](alerts-unified-log.md#calculation-of-a-value) rules, which work as expected. To learn more about switching to the current ScheduledQueryRules API, see [Upgrade to the current Log Alerts API from legacy Log Analytics Alert API](./alerts-log-api-switch.md).
+You can use the current ScheduledQueryRules API to set **Aggregate On** in [Metric measurement](alerts-types.md#log-alerts) rules, which work as expected. To learn more about switching to the current ScheduledQueryRules API, see [Upgrade to the current Log Alerts API from legacy Log Analytics Alert API](./alerts-log-api-switch.md).
### Override query time range+ As part of the alert configuration, the "Advanced Options" section includes an "Override query time range" parameter. If you want the alert evaluation period to be different from the query time range, enter a time range here. The alert time range is limited to a maximum of two days. Even if the query contains an ago command with a time range longer than two days, the two-day maximum time range is applied. For example, even if the query text contains ago(7d), the query only scans up to two days of data. If the query requires more data than the alert evaluation, you can change the time range manually. If there's an ago command in the query, it is changed automatically to two days (48 hours).
-## Log alert fired unnecessarily
+## Log search alert fired unnecessarily
-A configured [log alert rule in Azure Monitor](./alerts-log.md) might be triggered unexpectedly. The following sections describe some common reasons.
+A configured [log search alert rule in Azure Monitor](./alerts-log.md) might be triggered unexpectedly. The following sections describe some common reasons.
### Alert triggered by partial data
Azure Monitor processes terabytes of customers' logs from across the world, whic
Logs are semi-structured data and are inherently more latent than metrics. If you're experiencing many misfires in fired alerts, you should consider using [metric alerts](alerts-metric-overview.md). You can send data to the metric store from logs using [metric alerts for logs](alerts-metric-logs.md).
-Log alerts work best when you try to detect data in the logs. It works less well when you try to detect lack of data in the logs, like alerting on virtual machine heartbeat.
+Log search alerts work best when you try to detect data in the logs. They work less well when you try to detect a lack of data in the logs, like alerting on virtual machine heartbeat.
There are built-in capabilities to prevent false alerts, but they can still occur on very latent data (over ~30 minutes) and data with latency spikes.
-## Log alert was disabled
+## Log search alert was disabled
-The following sections list some reasons why Azure Monitor might disable a log alert rule. After those sections, there's an [example of the activity log that is sent when a rule is disabled](#activity-log-example-when-rule-is-disabled).
+The following sections list some reasons why Azure Monitor might disable a log search alert rule. After those sections, there's an [example of the activity log that is sent when a rule is disabled](#activity-log-example-when-rule-is-disabled).
### Alert scope no longer exists or was moved When the scope resources of an alert rule are no longer valid, rule execution fails, and billing stops.
-If a log alert fails continuously for a week, Azure Monitor disables it.
+If a log search alert fails continuously for a week, Azure Monitor disables it.
-### Query used in a log alert isn't valid
+### <a name="query-used-in-a-log-alert-isnt-valid"></a>Query used in a log search alert isn't valid
-When a log alert rule is created, the query is validated for correct syntax. But sometimes, the query provided in the log alert rule can start to fail. Some common reasons are:
+When a log search alert rule is created, the query is validated for correct syntax. But sometimes, the query provided in the log search alert rule can start to fail. Some common reasons are:
- Rules were created via the API, and validation was skipped by the user. - The query [runs on multiple resources](../logs/cross-workspace-query.md), and one or more of the resources was deleted or moved.
When a log alert rule is created, the query is validated for correct syntax. But
- [Custom logs tables](../agents/data-sources-custom-logs.md) aren't yet created, because the data flow hasn't started. - Changes in [query language](/azure/kusto/query/) include a revised format for commands and functions, so the query provided earlier is no longer valid.
-[Azure Advisor](../../advisor/advisor-overview.md) warns you about this behavior. It adds a recommendation about the affected log alert rule. The category used is 'High Availability' with medium impact and a description of 'Repair your log alert rule to ensure monitoring'.
+[Azure Advisor](../../advisor/advisor-overview.md) warns you about this behavior. It adds a recommendation about the affected log search alert rule. The category used is 'High Availability' with medium impact and a description of 'Repair your log search alert rule to ensure monitoring'.
## Alert rule quota was reached
For details about the number of log search alert rules per subscription and maxi
If you've reached the quota limit, the following steps might help resolve the issue. 1. Delete or disable log search alert rules that aren't used anymore.
-1. Use [splitting of alerts by dimensions](alerts-unified-log.md#split-by-alert-dimensions) to reduce rules count. These rules can monitor many resources and detection cases.
+1. Use [splitting of alerts by dimensions](alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) to reduce rules count. These rules can monitor many resources and detection cases.
1. If you need the quota limit to be increased, continue to open a support request, and provide the following information: - The Subscription IDs and Resource IDs for which the quota limit needs to be increased
If you've reached the quota limit, the following steps might help resolve the is
- The resource type for the quota increase, such as **Log Analytics** or **Application Insights** - The requested quota limit
-### To check the current usage of new log alert rules
+### To check the current usage of new log search alert rules
#### From the Azure portal
The total number of log search alert rules is displayed above the rules list.
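If you prefer a script to the portal, a sketch along these lines (assuming the Az.Monitor module and an authenticated session) counts the scheduled query rules in the current subscription:

```powershell
# Count scheduled query rules (log search alert rules) in the current subscription.
$rules = Get-AzScheduledQueryRule
"Log search alert rules in this subscription: $(($rules | Measure-Object).Count)"
```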
## Activity log example when rule is disabled
-If query fails for seven days continuously, Azure Monitor disables the log alert and stops the billing of the rule. You can see the exact time when Azure Monitor disabled the log alert in the [Azure activity log](../../azure-monitor/essentials/activity-log.md).
+If query fails for seven days continuously, Azure Monitor disables the log search alert and stops the billing of the rule. You can see the exact time when Azure Monitor disabled the log search alert in the [Azure activity log](../../azure-monitor/essentials/activity-log.md).
See this example:
See this example:
``` ## Query syntax validation error+ If you get an error message that says "Couldn't validate the query syntax because the service can't be reached", it could be either: - A query syntax error. - A problem connecting to the service that validates the query.
Try the following steps to resolve the problem:
- Flush the DNS cache on your local machine, by opening a command prompt and running the following command: `ipconfig /flushdns`, and then check again. If you still get the same error message, try the next step. - Copy and paste this URL into the browser: [https://api.loganalytics.io/v1/version](https://api.loganalytics.io/v1/version). If you get an error, contact your IT administrator to allow the IP addresses associated with **api.loganalytics.io** listed [here](../ip-addresses.md#application-insights-and-log-analytics-apis). - ## Next steps -- Learn about [log alerts in Azure](./alerts-unified-log.md).-- Learn more about [configuring log alerts](../logs/log-query-overview.md).
+- Learn about [log search alerts in Azure](./alerts-types.md#log-alerts).
+- Learn more about [configuring log search alerts](../logs/log-query-overview.md).
- Learn more about [log queries](../logs/log-query-overview.md).
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
If you believe your metric alert shouldn't have fired but it did, the following
- Edit the alert rule in the Azure portal. See if the **Automatically resolve alerts** checkbox under the **Alert rule details** section is cleared. - Review the script used to deploy the alert rule or retrieve the alert rule definition. Check if the `autoMitigate` property is set to `false`.+ ## Can't find the metric to alert on If you want to alert on a specific metric but you can't see it when you create an alert rule, check to determine: - If you can't see any metrics for the resource, [check if the resource type is supported for metric alerts](./alerts-metric-near-real-time.md). - If you can see some metrics for the resource but can't find a specific metric, [check if that metric is available](../essentials/metrics-supported.md). If so, see the metric description to check if it's only available in specific versions or editions of the resource.-- If the metric isn't available for the resource, it might be available in the resource logs and can be monitored by using log alerts. For more information, see how to [collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md).
+- If the metric isn't available for the resource, it might be available in the resource logs and can be monitored by using log search alerts. For more information, see how to [collect and analyze resource logs from an Azure resource](../essentials/tutorial-resource-logs.md).
## Can't find the metric to alert on: Virtual machines guest metrics
For more information about collecting data from the guest operating system of a
> [!NOTE] > If you configured guest metrics to be sent to a Log Analytics workspace, the metrics appear under the Log Analytics workspace resource and start showing data *only* after you create an alert rule that monitors them. To do so, follow the steps to [configure a metric alert for logs](./alerts-metric-logs.md#configuring-metric-alert-for-logs).
-Currently, monitoring a guest metric for multiple virtual machines with a single alert rule isn't supported by metric alerts. But you can use a [log alert rule](./alerts-unified-log.md). To do so, make sure the guest metrics are collected to a Log Analytics workspace and create a log alert rule on the workspace.
+Currently, monitoring a guest metric for multiple virtual machines with a single alert rule isn't supported by metric alerts. But you can use a [log search alert rule](./alerts-types.md#log-alerts). To do so, make sure the guest metrics are collected to a Log Analytics workspace and create a log search alert rule on the workspace.
+ ## Can't find the metric dimension to alert on If you want to alert on [specific dimension values of a metric](./alerts-metric-overview.md#using-dimensions) but you can't find these values:
If you've reached the quota limit, the following steps might help resolve the is
- Requested quota limit. ## `Metric not found` error:+ - **For a platform metric:** Make sure you're using the **Metric** name from [the Azure Monitor supported metrics page](../essentials/metrics-supported.md) and not the **Metric Display Name**. - **For a custom metric:** Make sure that the metric is already being emitted because you can't create an alert rule on a custom metric that doesn't yet exist. Also ensure that you're providing the custom metric's namespace. For a Resource Manager template example, see [Create a metric alert with a Resource Manager template](./alerts-metric-create-templates.md#template-for-a-static-threshold-metric-alert-that-monitors-a-custom-metric). - If you're creating [metric alerts on logs](./alerts-metric-logs.md), ensure appropriate dependencies are included. For a sample template, see [Create Metric Alerts for Logs in Azure Monitor](./alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs).
To resolve this, we recommend that you either:
• Use the **StartsWith** operator if the dimension values have common names. • If relevant, configure the rule to monitor all dimension values if there's no need to individually monitor the specific dimension values. - ## No permissions to create metric alert rules To create a metric alert rule, you must have the following permissions:
To create a metric alert rule, you must have the following permissions:
- Read permission on the target resource of the alert rule. - Write permission on the resource group in which the alert rule is created. If you're creating the alert rule from the Azure portal, the alert rule is created by default in the same resource group in which the target resource resides. - Read permission on any action group associated to the alert rule, if applicable.+ ## Considerations when creating an alert rule that contains multiple criteria+ - You can only select one value per dimension within each criterion. - You can't use an asterisk (\*) as a dimension value. - When metrics that are configured in different criteria support the same dimension, a configured dimension value must be explicitly set in the same way for all those metrics. For a Resource Manager template example, see [Create a metric alert with a Resource Manager template](./alerts-metric-create-templates.md#template-for-a-static-threshold-metric-alert-that-monitors-multiple-criteria).+ ## Check the total number of metric alert rules To check the current usage of metric alert rules, follow the next steps.
azure-monitor Alerts Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md
This article discusses common problems in Azure Monitor alerting and notificatio
You can see fired alerts in the Azure portal.
-Refer to these articles for troubleshooting information about metric or log alerts that are not behaving as expected:
+Refer to these articles for troubleshooting information about metric or log search alerts that are not behaving as expected:
- [Troubleshoot Azure Monitor metric alerts](alerts-troubleshoot-metric.md)-- [Troubleshoot Azure Monitor log alerts](alerts-troubleshoot-log.md)
+- [Troubleshoot Azure Monitor log search alerts](alerts-troubleshoot-log.md)
If the alert fires as intended according to the Azure portal but the proper notifications do not occur, use the information in the rest of this article to troubleshoot that problem.
If you can see a fired alert in the portal, but a related alert processing rule
Here is an example of an alert processing rule adding another action group: <!-- convertborder later --> :::image type="content" source="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" lightbox="media/alerts-troubleshoot/action-repeated-multi-action-groups.png" alt-text="Screenshot of action repeated in multiple action groups." border="false":::
-
1. **Does the alert processing rule scope and filter match the fired alert?** If you think the alert processing rule should have fired but didn't, or that it shouldn't have fired but it did, carefully examine the alert processing rule scope and filter conditions versus the properties of the fired alert. - ## How to find the alert ID of a fired alert When opening a case about a specific fired alert (such as – if you did not receive its email notification), you need to provide the alert ID.
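If the portal isn't convenient for this step, a sketch like the following (assuming the Az.AlertsManagement module and an authenticated session) lists recently fired alerts together with the alert IDs you would include in a support case:

```powershell
# List alerts fired in the last day with their alert IDs.
# Requires the Az.AlertsManagement module.
Get-AzAlert -TimeRange 1d | Select-Object Name, Id
```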
If you received an error while trying to create, update or delete an [alert proc
Check the [alert processing rule documentation](../alerts/alerts-action-rules.md), or the [alert processing rule PowerShell Set-AzActionRule](/powershell/module/az.alertsmanagement/set-azalertprocessingrule) command. ## Next steps-- If using a log alert, also see [Troubleshooting Log Alerts](./alerts-troubleshoot-log.md).+
+- If using a log search alert, also see [Troubleshooting Log Search Alerts](./alerts-troubleshoot-log.md).
- Go back to the [Azure portal](https://portal.azure.com) to check if you solved your issue with guidance in this article.
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
For more information about pricing, see the [pricing page](https://azure.microso
The types of alerts are: - [Metric alerts](#metric-alerts)-- [Log alerts](#log-alerts)
+- [Log search alerts](#log-alerts)
- [Activity log alerts](#activity-log-alerts) - [Service Health alerts](#service-health-alerts) - [Resource Health alerts](#resource-health-alerts)
The types of alerts are:
|Alert type |When to use |Pricing information| |||| |Metric alert|Metric data is stored in the system already pre-computed. Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Use metric alerts if the data you want to monitor is available in metric data.|Each metric alert rule is charged based on the number of time series that are monitored. |
-|Log alert|You can use log alerts to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of Kusto Query Language (KQL) for data manipulation by using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated. More frequent query evaluation results in a higher cost. For log alerts configured for at-scale monitoring using splitting by dimensions, the cost also depends on the number of time series created by the dimensions resulting from your query. |
+|Log search alert|You can use log search alerts to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of Kusto Query Language (KQL) for data manipulation by using log search alerts.|Each log search alert rule is billed based on the interval at which the log query is evaluated. More frequent query evaluation results in a higher cost. For log search alerts configured for at-scale monitoring using splitting by dimensions, the cost also depends on the number of time series created by the dimensions resulting from your query. |
|Activity log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource like a restart, a shutdown, or the creation or deletion of a resource. Service Health alerts and Resource Health alerts let you know when there's an issue with one of your services or resources.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).| |Prometheus alerts|Prometheus alerts are used for alerting on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). The alert rules are based on the PromQL open-source query language. |Prometheus alert rules are only charged on the data queried by the rules. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/). |
Dynamic thresholds help you:
See [dynamic thresholds](alerts-dynamic-thresholds.md) for detailed instructions on using dynamic thresholds in metric alert rules.
-## Log alerts
+## <a name="log-alerts"></a>Log search alerts
-A log alert rule monitors a resource by using a Log Analytics query to evaluate resource logs at a set frequency. If the conditions are met, an alert is fired. Because you can use Log Analytics queries, you can perform advanced logic operations on your data and use the robust KQL features to manipulate log data.
+A log search alert rule monitors a resource by using a Log Analytics query to evaluate resource logs at a set frequency. If the conditions are met, an alert is fired. Because you can use Log Analytics queries, you can perform advanced logic operations on your data and use the robust KQL features to manipulate log data.
-The target of the log alert rule can be:
+The target of the log search alert rule can be:
- A single resource, such as a VM. - A single container of resources, like a resource group or subscription. - Multiple resources that use a [cross-resource query](../logs/cross-workspace-query.md).
-Log alerts can measure two different things, which can be used for different monitoring scenarios:
+Log search alerts can measure two different things, which can be used for different monitoring scenarios:
- **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. - **Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage.
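To make the two measurement types concrete, here are illustrative queries held in PowerShell strings (matching the PowerShell used elsewhere in these articles); the tables and columns are common Log Analytics examples, not values taken from this article:

```powershell
# Table rows: alert on the number of Windows error events returned.
$rowCountQuery = @"
Event
| where EventLevelName == "Error"
"@

# Calculation of a numeric column: alert on average request duration.
$numericColumnQuery = @"
AppRequests
| summarize AggregatedValue = avg(DurationMs) by bin(TimeGenerated, 5m)
"@
```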
-You can configure if log alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). This feature is currently in preview.
-Note that stateful log alerts have these limitations:
+You can configure if log search alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). This feature is currently in preview.
+Note that stateful log search alerts have these limitations:
- they can trigger up to 300 alerts per evaluation. - you can have a maximum of 5000 alerts with the `fired` alert condition. > [!NOTE]
-> Log alerts work best when you're trying to detect specific data in the logs, as opposed to when you're trying to detect a lack of data in the logs. Because logs are semi-structured data, they're inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you're trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs by using [metric alerts for logs](alerts-metric-logs.md).
+> Log search alerts work best when you're trying to detect specific data in the logs, as opposed to when you're trying to detect a lack of data in the logs. Because logs are semi-structured data, they're inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you're trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs by using [metric alerts for logs](alerts-metric-logs.md).
### Monitor multiple instances of a resource using dimensions
-You can use dimensions when you create log alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually. Notifications are sent for each instance.
+You can use dimensions when you create log search alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually. Notifications are sent for each instance.
### Monitor the same condition on multiple resources using splitting by dimensions
To monitor for the same condition on multiple Azure resources, you can use split
You might also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
-### Use the API for log alert rules
+### Use the API for log search alert rules
Manage new rules in your workspaces by using the [ScheduledQueryRules](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) API. > [!NOTE]
-> Log alerts for Log Analytics used to be managed by using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current ScheduledQueryRules API](alerts-log-api-switch.md).
+> Log search alerts for Log Analytics used to be managed by using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current ScheduledQueryRules API](alerts-log-api-switch.md).
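As a minimal sketch of calling that API from PowerShell (assuming `Invoke-AzRestMethod` from the Az.Accounts module; the resource group and rule name are placeholders), reading a rule definition could look like this:

```powershell
# Read a log search alert rule through the scheduledQueryRules API.
# Resource group and rule name below are placeholders.
$response = Invoke-AzRestMethod -Method GET `
    -SubscriptionId (Get-AzContext).Subscription.Id `
    -ResourceGroupName "contosoRG" `
    -ResourceProviderName "Microsoft.Insights" `
    -ResourceType "scheduledQueryRules" `
    -Name "my-log-search-alert" `
    -ApiVersion "2021-08-01"

($response.Content | ConvertFrom-Json).properties
```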
-### Log alerts on your Azure bill
+### Log search alerts on your Azure bill
-Log alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with:
-- Log alerts on Application Insights shown with the exact resource name along with resource group and alert properties.-- Log alerts on Log Analytics are shown with the exact resource name along with resource group and alert properties when they're created by using the scheduledQueryRules API.-- Log alerts created from the [legacy Log Analytics API](./api-alerts.md) aren't tracked [Azure resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have the resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log alerts on the legacy API are shown with the preceding hidden resource name along with resource group and alert properties.
+Log search alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with:
+- Log search alerts on Application Insights are shown with the exact resource name along with resource group and alert properties.
+- Log search alerts on Log Analytics are shown with the exact resource name along with resource group and alert properties when they're created by using the scheduledQueryRules API.
+- Log search alerts created from the [legacy Log Analytics API](./api-alerts.md) aren't tracked [Azure resources](../../azure-resource-manager/management/overview.md) and don't have enforced unique resource names. These alerts are still created on `microsoft.insights/scheduledqueryrules` as hidden resources, which have the resource naming structure `<WorkspaceName>|<savedSearchId>|<scheduleId>|<ActionId>`. Log search alerts on the legacy API are shown with the preceding hidden resource name along with resource group and alert properties.
> [!Note] > Unsupported resource characters like <, >, %, &, \, ? and / are replaced with an underscore (_) in the hidden resource names. This character change is also reflected in the billing information.
azure-monitor Alerts Understand Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-understand-migration.md
Classic alerts are [retired](./monitoring-classic-retirement.md) for public clou
This article explains how the manual migration and voluntary migration tool work, which will be used to migrate remaining alert rules. It also describes solutions for some common problems. > [!IMPORTANT]
-> Activity log alerts (including Service health alerts) and Log alerts are not impacted by the migration. The migration only applies to classic alert rules described [here](./monitoring-classic-retirement.md#retirement-of-classic-monitoring-and-alerting-platform).
+> Activity log alerts (including Service health alerts) and log search alerts are not impacted by the migration. The migration only applies to classic alert rules described [here](./monitoring-classic-retirement.md#retirement-of-classic-monitoring-and-alerting-platform).
> [!NOTE] > If your classic alert rules are invalid i.e. they are on [deprecated metrics](#classic-alert-rules-on-deprecated-metrics) or resources that have been deleted, they will not be migrated and will not be available after service is retired.
Customers that are interested in manually migrating their remaining alerts can a
Before you can create new metric alerts on guest metrics, the guest metrics must be sent to the Azure Monitor logs store. Follow these instructions to create alerts: - [Enabling guest metrics collection to log analytics](../agents/agent-data-sources.md)-- [Creating log alerts in Azure Monitor](./alerts-log.md)
+- [Creating log search alerts in Azure Monitor](./alerts-log.md)
There are more options to collect guest metrics and alert on them, [learn more](../agents/agents-overview.md).
azure-monitor Api Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/api-alerts.md
Last updated 06/20/2023
-# Legacy Log Analytics alerts REST API
+# Legacy Log Analytics Alert REST API
This article describes how to manage alert rules using the legacy API. > [!IMPORTANT]
-> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), the Log Analytics alert API will be retired on October 1, 2025. You must transition to using the Scheduled Query Rules API for log alerts by that date.
+> As [announced](https://azure.microsoft.com/updates/switch-api-preference-log-alerts/), the Log Analytics Alert API will be retired on October 1, 2025. You must transition to using the Scheduled Query Rules API for log search alerts by that date.
> Log Analytics workspaces created after June 1, 2019 use the [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) to manage alert rules. [Switch to the current API](./alerts-log-api-switch.md) in older workspaces to take advantage of Azure Monitor scheduledQueryRules [benefits](./alerts-log-api-switch.md#benefits). The Log Analytics Alert REST API allows you to create and manage alerts in Log Analytics. This article provides details about the API and several examples for performing different operations.
Log Analytics-based query alerts fire every time the threshold is met or exceede
For example, if `Suppress` is set for 30 minutes, the alert will fire the first time and send notifications configured. It will then wait for 30 minutes before notification for the alert rule is again used. In the interim period, the alert rule will continue to run. Only notification is suppressed by Log Analytics for a specified time regardless of how many times the alert rule fired in this period.
-The `Suppress` property of a Log Analytics alert rule is specified by using the `Throttling` value. The suppression period is specified by using the `DurationInMinutes` value.
+The `Suppress` property of a log search alert rule is specified by using the `Throttling` value. The suppression period is specified by using the `DurationInMinutes` value.
The following sample response is for an action with only `Threshold`, `Severity`, and `Suppress` properties.
armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Na
##### Customize WebhookPayload for an action group
-By default, the webhook sent via an action group for Log Analytics has a fixed structure. But you can customize the JSON payload by using specific variables supported to meet requirements of the webhook endpoint. For more information, see [Webhook action for log alert rules](./alerts-log-webhook.md).
+By default, the webhook sent via an action group for Log Analytics has a fixed structure. But you can customize the JSON payload by using specific variables supported to meet requirements of the webhook endpoint. For more information, see [Webhook action for log search alert rules](./alerts-log-webhook.md).
The customized webhook details must be sent along with `ActionGroup` details. They'll be applied to all webhook URIs specified inside the action group. The following sample illustrates the use:
armclient put /subscriptions/{Subscription ID}/resourceGroups/{Resource Group Na
## Next steps * Use the [REST API to perform log searches](../logs/log-query-overview.md) in Log Analytics.
-* Learn about [log alerts in Azure Monitor](./alerts-unified-log.md).
-* Learn how to [create, edit, or manage log alert rules in Azure Monitor](./alerts-log.md).
+* Learn about [log search alerts in Azure Monitor](./alerts-types.md#log-alerts).
+* Learn how to [create, edit, or manage log search alert rules in Azure Monitor](./alerts-log.md).
azure-monitor It Service Management Connector Secure Webhook Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/it-service-management-connector-secure-webhook-connections.md
This article shows you how to configure the connection between your IT Service Management (ITSM) product or service by using Secure Webhook.
-Secure Webhook is an updated version of [IT Service Management Connector (ITSMC)](./itsmc-overview.md). Both versions allow you to create work items in an ITSM tool when Azure Monitor sends alerts. The functionality includes metric, log, and activity log alerts.
+Secure Webhook is an updated version of [IT Service Management Connector (ITSMC)](./itsmc-overview.md). Both versions allow you to create work items in an ITSM tool when Azure Monitor sends alerts. The functionality includes metric, log search, and activity log alerts.
ITSMC uses username and password credentials. Secure Webhook has stronger authentication because it uses Microsoft Entra ID. Microsoft Entra ID is Microsoft's cloud-based identity and access management service. It helps users sign in and access internal or external resources. Using Microsoft Entra ID with ITSM helps to identify Azure alerts (through the Microsoft Entra application ID) that were sent to the external system.
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
You can do this step by using the same [PowerShell commands](../alerts/action-gr
After your application is registered with Microsoft Entra ID, you can create work items in your ITSM tool based on Azure alerts by using the Secure Webhook action in action groups.
-Action groups provide a modular and reusable way of triggering actions for Azure alerts. You can use action groups with metric alerts, activity log alerts, and Log Analytics alerts in the Azure portal.
+Action groups provide a modular and reusable way of triggering actions for Azure alerts. You can use action groups with metric alerts, activity log alerts, and log search alerts in the Azure portal.
To learn more about action groups, see [Create and manage action groups in the Azure portal](../alerts/action-groups.md).
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
When you're successfully connected and synced:
- Selected work items from the ServiceNow instance are imported into Log Analytics. You can view the summary of these work items on the **IT Service Management Connector** tile. -- You can create incidents from Log Analytics alerts or log records, or from Azure alerts in this ServiceNow instance.
+- You can create incidents from log search alerts or log records, or from Azure alerts in this ServiceNow instance.
> [!NOTE] > ServiceNow has a rate limit for requests per hour. To configure the limit, define **Inbound REST API rate limiting** in the ServiceNow instance.
The payload that is sent to ServiceNow has a common structure. The structure has
The structure of the payload for all alert types except log search V1 alert is [common schema](./alerts-common-schema.md).
-For Log Search Alerts (V1 only), the structure is:
+For Log search alerts (V1 only), the structure is:
- Alert (alert rule name) : \<value> - Search Query : \<value>
azure-monitor Itsmc Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md
After you've installed ITSMC, and prepped your ITSM tool, create an ITSM connect
## Create ITSM work items from Azure alerts
-After you create your ITSM connection, use the ITSM action in action groups to create work items in your ITSM tool based on Azure alerts. Action groups provide a modular and reusable way to trigger actions for your Azure alerts. You can use action groups with metric alerts, activity log alerts, and Log Analytics alerts in the Azure portal.
+After you create your ITSM connection, use the ITSM action in action groups to create work items in your ITSM tool based on Azure alerts. Action groups provide a modular and reusable way to trigger actions for your Azure alerts. You can use action groups with metric alerts, activity log alerts, and log search alerts in the Azure portal.
> [!NOTE] > Wait 30 minutes after you create the ITSM connection for the sync process to finish.
To create an action group:
> As of September 2022, we are starting the 3-year process of deprecating support for using ITSM actions to send alerts and events to ServiceNow. For information on the deprecated behavior, see [Use Azure alerts to create a ServiceNow alert or event work item](/previous-versions/azure/azure-monitor/alerts/alerts-create-itsm-work-items). > As of October 2023, we no longer support UI creation of the connector for using ITSM actions to send alerts and events to ServiceNow. Until full deprecation, create the action by using the [API](/rest/api/monitor/action-groups/create-or-update?tabs=HTTP).
-1. In the last section of the interface for creating an ITSM action group, if the alert is a log alert, you can define how many work items will be created for each alert. For all other alert types, one work item is created per alert.
+1. In the last section of the interface for creating an ITSM action group, if the alert is a log search alert, you can define how many work items will be created for each alert. For all other alert types, one work item is created per alert.
:::image type="content" source="media/itsmc-definition/itsm-action-incident.png" lightbox="media/itsmc-definition/itsm-action-incident.png" alt-text="Screenshot that shows the ITSM Ticket area with an incident work item type.":::
azure-monitor Itsmc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md
This article describes how you can integrate Azure Monitor with supported IT Ser
Azure services like Log Analytics and Azure Monitor provide tools to detect, analyze, and troubleshoot problems with your Azure and non-Azure resources. But the work items related to an issue typically reside in an ITSM product or service.
-Azure Monitor provides a bidirectional connection between Azure and ITSM tools to help you resolve issues faster. You can create work items in your ITSM tool based on your Azure metric alerts, activity log alerts, and Log Analytics alerts.
+Azure Monitor provides a bidirectional connection between Azure and ITSM tools to help you resolve issues faster. You can create work items in your ITSM tool based on your Azure metric alerts, activity log alerts, and log search alerts.
Azure Monitor supports connections with the following ITSM tools:
azure-monitor Itsmc Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-troubleshoot-overview.md
The following sections identify common symptoms, possible causes, and resolution
### In the incidents received from ServiceNow, the configuration item is blank **Cause**: The cause can be one of several reasons:
-* The alert isn't a log alert. Configuration items are only supported by log alerts.
+* The alert isn't a log search alert. Configuration items are only supported by log search alerts.
* The search results don't include the **Computer** or **Resource** column. * The values in the configuration item field don't match an entry in the CMDB. **Resolution**:
-* Check if the alert is a log alert. If it isn't a log alert, configuration items are not supported.
+* Check if the alert is a log search alert. If it isn't a log search alert, configuration items are not supported.
* If the search results don't have a Computer or Resource column, add them to the query. When you define a query for log search alerts, the query results must include the configuration item names in a column labeled "Computer", "Resource", "_ResourceId", or "ResourceId". This mapping enables the configuration items to be mapped to the ITSM payload. * Check that the values in the Computer and Resource columns are identical to the values in the CMDB. If they aren't, add a new entry to the CMDB with the matching values.
azure-monitor Log Alert Rule Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/log-alert-rule-health.md
This table describes the possible resource health status values for a log search
| Resource health status | Description |Recommended steps| ||| |Available|There are no known issues affecting this log search alert rule.| |
-|Unknown|This log search alert rule is currently disabled or in an unknown state.|[Log alert was disabled](alerts-troubleshoot-log.md#log-alert-was-disabled).|
+|Unknown|This log search alert rule is currently disabled or in an unknown state.|[Log alert was disabled](alerts-troubleshoot-log.md#log-search-alert-was-disabled).|
|Unknown reason|This log search alert rule is currently unavailable due to an unknown reason.|Check if the alert rule was recently created. Health status is updated after the rule completes its first evaluation.| |Degraded due to unknown reason|This log search alert rule is currently degraded due to an unknown reason.| | |Setting up resource health|Setting up Resource health for this resource.|Check if the alert rule was recently created. Health status is updated after the rule completes its first evaluation.|
azure-monitor Resource Manager Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-log.md
Title: Resource Manager template samples for log query alerts
-description: Sample Azure Resource Manager templates to deploy Azure Monitor log query alerts.
+ Title: Resource Manager template samples for log search alerts
+description: Sample Azure Resource Manager templates to deploy Azure Monitor log search alerts.
Last updated 11/07/2023
-# Resource Manager template samples for log alert rules in Azure Monitor
+# Resource Manager template samples for log search alert rules in Azure Monitor
-This article includes samples of [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create and configure log query alerts in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template.
+This article includes samples of [Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md) to create and configure log search alerts in Azure Monitor. Each sample includes a template file and a parameters file with sample values to provide to the template.
[!INCLUDE [azure-monitor-samples](../../../includes/azure-monitor-resource-manager-samples.md)]
resource alert 'Microsoft.Insights/scheduledQueryRules@2021-08-01' = {
## Number of results template (up to version 2018-04-16)
-The following sample creates a [number of results alert rule](../alerts/alerts-unified-log.md#result-count).
+The following sample creates a [number of results alert rule](../alerts/alerts-types.md#log-alerts).
### Notes
resource logQueryAlert 'Microsoft.Insights/scheduledQueryRules@2018-04-16' = {
## Metric measurement template (up to version 2018-04-16)
-The following sample creates a [metric measurement alert rule](../alerts/alerts-unified-log.md#calculation-of-a-value).
+The following sample creates a [metric measurement alert rule](../alerts/alerts-types.md#log-alerts).
### Template file
azure-monitor Resource Manager Alerts Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-metric.md
Title: Resource Manager template samples for metric alerts description: This article provides sample Resource Manager templates used to create metric alerts in Azure Monitor.-+ Previously updated : 10/31/2022 Last updated : 02/16/2024
azure-monitor Tutorial Log Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/tutorial-log-alert.md
Title: Tutorial - Create a log query alert for an Azure resource
-description: Tutorial to create a log query alert for an Azure resource.
+ Title: Tutorial - Create a log search alert for an Azure resource
+description: Tutorial to create a log search alert for an Azure resource.
Last updated 11/07/2023
-# Tutorial: Create a log query alert for an Azure resource
+# Tutorial: Create a log search alert for an Azure resource
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. Log query alert rules create an alert when a log query returns a particular result. For example, receive an alert when a particular event is created on a virtual machine, or send a warning when excessive anonymous requests are made to a storage account.
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. Log search alert rules create an alert when a log query returns a particular result. For example, receive an alert when a particular event is created on a virtual machine, or send a warning when excessive anonymous requests are made to a storage account.
In this tutorial, you learn how to: > [!div class="checklist"] > * Access prebuilt log queries designed to support alert rules for different kinds of resources
-> * Create a log query alert rule
+> * Create a log search alert rule
> * Create an action group to define notification details - ## Prerequisites
-To complete this tutorial you need the following:
+To complete this tutorial, you need the following:
- An Azure resource to monitor. You can use any resource in your Azure subscription that supports diagnostic settings. To determine whether a resource supports diagnostic settings, go to its menu in the Azure portal and verify that there's a **Diagnostic settings** option in the **Monitoring** section of the menu. - If you're using any Azure resource other than a virtual machine: - A diagnostic setting to send the resource logs from your Azure resource to a Log Analytics workspace. See [Tutorial: Create Log Analytics workspace in Azure Monitor](../essentials/tutorial-resource-logs.md).
If you're using any Azure resource other than a virtual machine:
If you're using an Azure virtual machine: - A data collection rule to send guest logs and metrics to a Log Analytics workspace. See [Tutorial: Collect guest logs and metrics from Azure virtual machine](../vm/tutorial-monitor-vm-guest.md).-
-
- ## Select a log query and verify results
-Data is retrieved from a Log Analytics workspace using a log query written in Kusto Query Language (KQL). Insights and solutions in Azure Monitor will provide log queries to retrieve data for a particular service, but you can work directly with log queries and their results in the Azure portal with Log Analytics.
+## Select a log query and verify results
+
+Data is retrieved from a Log Analytics workspace using a log query written in Kusto Query Language (KQL). Insights and solutions in Azure Monitor provide log queries to retrieve data for a particular service, but you can work directly with log queries and their results in the Azure portal with Log Analytics.
-Select **Logs** from your resource's menu. Log Analytics opens with the **Queries** window that includes prebuilt queries for your **Resource type**. Select **Alerts** to view queries specifically designed for alert rules.
+Select **Logs** from your resource's menu. Log Analytics opens with the **Queries** window that includes prebuilt queries for your **Resource type**. Select **Alerts** to view queries designed for alert rules.
> [!NOTE] > If the **Queries** window doesn't open, click **Queries** in the top right. :::image type="content" source="media/tutorial-log-alert/queries.png" lightbox="media/tutorial-log-alert/queries.png"alt-text="Log Analytics with queries window":::
-Select a query and click **Run** to load it in the query editor and return results. You may want to modify the query and run it again. For example, the **Show anonymous requests** query for storage accounts is shown below. You may want to modify the **AuthenticationType** or filter on a different column.
+Select a query and click **Run** to load it in the query editor and return results. You may want to modify the query and run it again. For example, the **Show anonymous requests** query for storage accounts is shown in the following screenshot. You may want to modify the **AuthenticationType** or filter on a different column.
:::image type="content" source="media/tutorial-log-alert/query-results.png" lightbox="media/tutorial-log-alert/query-results.png"alt-text="Query results"::: - ## Create alert rule
-Once you verify your query, you can create the alert rule. Select **New alert rule** to create a new alert rule based on the current log query. The **Scope** will already be set to the current resource. You don't need to change this value.
+
+Once you verify your query, you can create the alert rule. Select **New alert rule** to create a new alert rule based on the current log query. The **Scope** is already set to the current resource. You don't need to change this value.
:::image type="content" source="media/tutorial-log-alert/create-alert-rule.png" lightbox="media/tutorial-log-alert/create-alert-rule.png"alt-text="Create alert rule":::+ ## Configure condition
-On the **Condition** tab, the **Log query** will already be filled in. The **Measurement** section defines how the records from the log query will be measured. If the query doesn't perform a summary, then the only option will be to **Count** the number of **Table rows**. If the query includes one or more summarized columns, then you'll have the option to use number of **Table rows** or a calculation based on any of the summarized columns. **Aggregation granularity** defines the time interval over which the collected values are aggregated. For example, if the aggregation granularity is set to 5 minutes, the alert rule will evaluate the data aggregated over the last 5 minutes. If the aggregation granularity is set to 15 minutes, the alert rule will evaluate the data aggregated over the last 15 minutes. It is important to choose the right aggregation granularity for your alert rule, as it can affect the accuracy of the alert.
+On the **Condition** tab, the **Log query** is already filled in. The **Measurement** section defines how the records from the log query are measured. If the query doesn't perform a summary, then the only option is to **Count** the number of **Table rows**. If the query includes one or more summarized columns, then you have the option to use the number of **Table rows** or a calculation based on any of the summarized columns. **Aggregation granularity** defines the time interval over which the collected values are aggregated. For example, if the aggregation granularity is set to 5 minutes, the alert rule evaluates the data aggregated over the last 5 minutes. If the aggregation granularity is set to 15 minutes, the alert rule evaluates the data aggregated over the last 15 minutes. It is important to choose the right aggregation granularity for your alert rule, as it can affect the accuracy of the alert.
:::image type="content" source="media/tutorial-log-alert/alert-rule-condition.png" lightbox="media/tutorial-log-alert/alert-rule-condition.png"alt-text="Alert rule condition"::: ### Configure dimensions+ **Split by dimensions** allows you to create separate alerts for different resources. This setting is useful when you're creating an alert rule that applies to multiple resources. With the scope set to a single resource, this setting typically isn't used. :::image type="content" source="media/tutorial-log-alert/alert-rule-dimensions.png" lightbox="media/tutorial-log-alert/alert-rule-dimensions.png"alt-text="Alert rule dimensions":::
-If you need a certain dimension(s) included in the alert notification email, you can specify a dimension (e.g. "Computer"), the alert notification email will include the computer name that triggered the alert. The alerting engine uses the alert query to determine the available dimensions. If you do not see the dimension you want in the drop-down list for the "Dimension name", it is because the alert query does not expose that column in the results. You can easily add the dimensions you want by adding a Project line to your query that includes the columns you want to use. You can also use the Summarize line to add additional columns to the query results.
+If you need certain dimensions included in the alert notification email, you can specify a dimension (for example, "Computer"), and the alert notification email will include the name of the computer that triggered the alert. The alerting engine uses the alert query to determine the available dimensions. If you do not see the dimension you want in the drop-down list for the "Dimension name", it is because the alert query does not expose that column in the results. You can easily add the dimensions you want by adding a `project` line to your query that includes the columns you want to use. You can also use `summarize` to add more columns to the query results.
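To make the dimension idea concrete, here's a sketch of how a summarized query and a matching dimension might appear in the underlying scheduled query rule (property names assume the `Microsoft.Insights/scheduledQueryRules` criteria schema; the table, column names, and values are illustrative only):

```json
{
  "criteria": {
    "allOf": [
      {
        "query": "Event | where EventLevelName == 'Error' | summarize ErrorCount = count() by Computer",
        "metricMeasureColumn": "ErrorCount",
        "timeAggregation": "Total",
        "dimensions": [
          {
            "name": "Computer",
            "operator": "Include",
            "values": [ "*" ]
          }
        ],
        "operator": "GreaterThan",
        "threshold": 0
      }
    ]
  }
}
```

Because the query summarizes by **Computer**, that column is available as a dimension, and the resulting notifications can identify which computer triggered the alert.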
:::image type="content" source="media/tutorial-log-alert/alert-rule-condition-2.png" lightbox="media/tutorial-log-alert/alert-rule-condition-2.png" alt-text="Screenshot showing the Alert rule dimensions with a dimension called Computer set."::: ## Configure alert logic+ In the alert logic, configure the **Operator** and **Threshold value** to compare to the value returned from the measurement. An alert is created when this value is true. Select a value for **Frequency of evaluation** which defines how often the log query is run and evaluated. The cost for the alert rule increases with a lower frequency. When you select a frequency, the estimated monthly cost is displayed in addition to a preview of the query results over a time period.
-For example, if the measurement is **Table rows**, the alert logic may be **Greater than 0** indicating that at least one record was returned. If the measurement is a columns value, then the logic may need to be greater than or less than a particular threshold value. In the example below, the log query is looking for anonymous requests to a storage account. If an anonymous request has been made, then we should trigger an alert. In this case, a single row returned would trigger the alert, so the alert logic should be **Greater than 0**.
+For example, if the measurement is **Table rows**, the alert logic may be **Greater than 0**, indicating that at least one record was returned. If the measurement is a column value, then the logic may need to be greater than or less than a particular threshold value. In the following example, the log query is looking for anonymous requests to a storage account. If an anonymous request is made, then we should trigger an alert. In this case, a single row returned would trigger the alert, so the alert logic should be **Greater than 0**.
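For comparison, the same table-rows logic expressed against the underlying rule might look roughly like this sketch (again assuming the scheduled query rules schema; the query is modeled on the anonymous-requests example and the values are placeholders). Here `windowSize` corresponds to the aggregation granularity and `evaluationFrequency` to the frequency of evaluation:

```json
{
  "criteria": {
    "allOf": [
      {
        "query": "StorageBlobLogs | where AuthenticationType == 'Anonymous'",
        "timeAggregation": "Count",
        "operator": "GreaterThan",
        "threshold": 0
      }
    ]
  },
  "windowSize": "PT5M",
  "evaluationFrequency": "PT5M",
  "severity": 3
}
```

A count greater than zero over any five-minute window fires the alert, which matches the **Greater than 0** logic chosen in the portal.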
:::image type="content" source="media/tutorial-log-alert/alert-rule-alert-logic.png" lightbox="media/tutorial-log-alert/alert-rule-alert-logic.png"alt-text="Alert logic"::: -- ## Configure actions [!INCLUDE [Action groups](../../../includes/azure-monitor-tutorial-action-group.md)]
Click **Create alert rule** to create the alert rule.
## View the alert [!INCLUDE [View alert](../../../includes/azure-monitor-tutorial-view-alert.md)] - ## Next steps
-Now that you've learned how to create a log query alert for an Azure resource, have a look at workbooks for creating interactive visualizations of monitoring data.
+
+Now that you've learned how to create a log search alert for an Azure resource, have a look at workbooks for creating interactive visualizations of monitoring data.
> [!div class="nextstepaction"] > [Azure Monitor Workbooks](../visualize/workbooks-overview.md)
azure-monitor Availability Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-alerts.md
A custom alert rule offers higher values for the aggregation period (up to 24 ho
1. The **Configure alerts** option from the menu takes you to the new experience where you can select specific tests or locations on which to set up alert rules. You can also configure the action groups for this alert rule here. -- **Alert on custom analytics queries**: By using the [new unified alerts](../alerts/alerts-overview.md), you can alert on [custom log queries](../alerts/alerts-unified-log.md). With custom queries, you can alert on any arbitrary condition that helps you get the most reliable signal of availability issues. It's also applicable if you're sending custom availability results by using the TrackAvailability SDK.
+- **Alert on custom analytics queries**: By using the [new unified alerts](../alerts/alerts-overview.md), you can alert on [custom log queries](../alerts/alerts-types.md#log-alerts). With custom queries, you can alert on any arbitrary condition that helps you get the most reliable signal of availability issues. It's also applicable if you're sending custom availability results by using the TrackAvailability SDK.
The metrics on availability data include any custom availability results you might be submitting by calling the TrackAvailability SDK. You can use the alerting on metrics support to alert on custom availability results.
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
From within the Application Insights resource pane, select **Properties** > **Ch
This section provides answers to common questions.
+### What will happen if I don't migrate my Application Insights classic resource to a workspace-based resource?
+
+Microsoft will begin an automatic, phased migration of classic resources to workspace-based resources in May 2024, and this migration will span several months. We can't provide approximate dates for when specific resources, subscriptions, or regions will be migrated.
+
+We strongly encourage manual migration to workspace-based resources, which is initiated by selecting the deprecation notice banner in the classic Application Insights resource Overview pane of the Azure portal. This process typically involves a single step of choosing which Log Analytics workspace will be used to store your application data. If you use continuous export, you'll need to additionally migrate to diagnostic settings or disable the feature first.
+
+If you don't wish to have your classic resource automatically migrated to a workspace-based resource, you may delete or manually migrate the resource.
+ ### Is there any implication on the cost from migration? There's usually no difference, with a couple of exceptions:
If you're using Terraform to manage your Azure resources, it's important to use
To avoid this issue, make sure to use the latest version of the Terraform [azurerm provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest), version 3.89 or higher, which performs the proper migration steps by issuing the appropriate ARM call to upgrade the App Insights classic resource to a workspace-based resource while preserving all the old data and connection string/instrumentation key values. ### Can I still use the old API to create Application Insights resources programmatically?
-Yes, calls to the old API for creating Application Insights resources continue to work as before. The old API version doesn't include a reference to the Log Analytics resource. However, when you trigger a legacy API call, it automatically creates a resource and the required association between Application Insights and Log Analytics.
+
+For backwards compatibility, calls to the old API for creating Application Insights resources will continue to work. Each of these calls will eventually create both a workspace-based Application Insights resource and a Log Analytics workspace to store the data.
+
+We strongly encourage updating to the [new API](https://learn.microsoft.com/azure/azure-monitor/app/resource-manager-app-resource) for better control over resource creation.
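For example, an ARM template resource for a workspace-based Application Insights resource created through the new API looks roughly like the following sketch (the resource name, location, and workspace resource ID are placeholders; see the linked template reference for the authoritative schema):

```json
{
  "type": "Microsoft.Insights/components",
  "apiVersion": "2020-02-02",
  "name": "my-application-insights",
  "location": "eastus",
  "kind": "web",
  "properties": {
    "Application_Type": "web",
    "WorkspaceResourceId": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.OperationalInsights/workspaces/{workspace-name}"
  }
}
```

The `WorkspaceResourceId` property is what distinguishes a workspace-based resource from a classic one; classic resources created through the old API omit it.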
### Should I migrate diagnostic settings on classic Application Insights before moving to a workspace-based AI? Yes, we recommend migrating diagnostic settings on classic Application Insights resources before transitioning to a workspace-based Application Insights. It ensures continuity and compatibility of your diagnostic settings.
-### What is the migration process for Application Insights resources?
-The migration of Application Insights resources to the new format isn't instantaneous on the day of deprecation. Instead, it occurs over time. We'll gradually migrate all Application Insights resources, ensuring a smooth transition with minimal disruption to your services.
- ## Troubleshooting This section offers troubleshooting tips for common issues.
azure-monitor Azure Monitor Rest Api Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-rest-api-index.md
Organized by subject area.
| [Metric alerts](/rest/api/monitor/metric-alerts) | Manages and lists [metric alert rules](./alerts/alerts-overview.md). | | [Metric alerts status](/rest/api/monitor/metric-alerts-status) | Lists the status of [metric alert rules](./alerts/alerts-overview.md). | | [Prometheus rule groups](/rest/api/monitor/prometheus-rule-groups) | Manages and lists [Prometheus rule groups](./essentials/prometheus-rule-groups.md) (alert rules and recording rules). |
-| [Scheduled query rules - 2023-03-15 (preview)](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2023-03-15-preview&preserve-view=true) | Manages and lists [log alert rules](./alerts/alerts-types.md#log-alerts). |
-| [Scheduled query rules - 2018-04-16](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2018-04-16&preserve-view=true) | Manages and lists [log alert rules](./alerts/alerts-types.md#log-alerts). |
-| [Scheduled query rules - 2021-08-01](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2021-08-01&preserve-view=true) | Manages and lists [log alert rules](./alerts/alerts-types.md#log-alerts). |
+| [Scheduled query rules - 2023-03-15 (preview)](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2023-03-15-preview&preserve-view=true) | Manages and lists [log search alert rules](./alerts/alerts-types.md#log-alerts). |
+| [Scheduled query rules - 2018-04-16](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2018-04-16&preserve-view=true) | Manages and lists [log search alert rules](./alerts/alerts-types.md#log-alerts). |
+| [Scheduled query rules - 2021-08-01](/rest/api/monitor/scheduled-query-rules?view=rest-monitor-2021-08-01&preserve-view=true) | Manages and lists [log search alert rules](./alerts/alerts-types.md#log-alerts). |
| [Smart Detector alert rules](/rest/api/monitor/smart-detector-alert-rules) | Manages and lists [smart detection alert rules](./alerts/alerts-types.md#smart-detection-alerts). | | ***Application Insights*** | | | [Components](/rest/api/application-insights/components) | Enables you to manage components that contain Application Insights data. |
azure-monitor Best Practices Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md
This table describes Azure Monitor features that provide analysis of collected d
|||--| |Overview page|Most Azure services have an **Overview** page in the Azure portal that includes a **Monitor** section with charts that show recent critical metrics. This information is intended for owners of individual services to quickly assess the performance of the resource. |This page is based on platform metrics that are collected automatically. No configuration is required. | |[Metrics Explorer](essentials/metrics-getting-started.md)|You can use Metrics Explorer to interactively work with metric data and create metric alerts. You need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. |- Once data collection is configured, no other configuration is required.<br>- Platform metrics for Azure resources are automatically available.<br>- Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to the virtual machine.<br>- Application metrics are available after Application Insights is configured. |
-|[Log Analytics](logs/log-analytics-overview.md)|With Log Analytics, you can create log queries to interactively work with log data and create log query alerts.| Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization. |
+|[Log Analytics](logs/log-analytics-overview.md)|With Log Analytics, you can create log queries to interactively work with log data and create log search alerts.| Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization. |
## Built-in visualization tools
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
The following table shows the configuration steps required to collect all availa
### Collect tenant and subscription logs
-The [Microsoft Entra logs](../active-directory/reports-monitoring/overview-reports.md) for your tenant and the [activity log](essentials/platform-logs-overview.md) for your subscription are collected automatically. When you send them to a Log Analytics workspace, you can analyze these events with other log data by using log queries in Log Analytics. You can also create log query alerts, which are the only way to alert on Microsoft Entra logs and provide more complex logic than activity log alerts.
+The [Microsoft Entra logs](../active-directory/reports-monitoring/overview-reports.md) for your tenant and the [activity log](essentials/platform-logs-overview.md) for your subscription are collected automatically. When you send them to a Log Analytics workspace, you can analyze these events with other log data by using log queries in Log Analytics. You can also create log search alerts, which are the only way to alert on Microsoft Entra logs and provide more complex logic than activity log alerts.
There's no cost for sending the activity log to a workspace, but there's a data ingestion and retention charge for Microsoft Entra logs.
azure-monitor Container Insights Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-custom-metrics.md
Title: Custom metrics collected by Container insights
description: Describes the custom metrics collected for a Kubernetes cluster by Container insights in Azure Monitor. Previously updated : 09/28/2022 Last updated : 02/15/2024
azure-monitor Container Insights Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-alerts.md
Title: Log alerts from Container insights | Microsoft Docs
-description: This article describes how to create custom log alerts for memory and CPU utilization from Container insights.
+ Title: Log search alerts from Container insights | Microsoft Docs
+description: This article describes how to create custom log search alerts for memory and CPU utilization from Container insights.
Last updated 08/29/2022
-# Create log alerts from Container insights
+# Create log search alerts from Container insights
Container insights monitors the performance of container workloads that are deployed to managed or self-managed Kubernetes clusters. To alert on what matters, this article describes how to create log-based alerts for the following situations with Azure Kubernetes Service (AKS) clusters:
Container insights monitors the performance of container workloads that are depl
- `Failed`, `Pending`, `Unknown`, `Running`, or `Succeeded` pod-phase counts - When free disk space on cluster nodes exceeds a threshold
-To alert for high CPU or memory utilization, or low free disk space on cluster nodes, use the queries that are provided to create a metric alert or a metric measurement alert. Metric alerts have lower latency than log alerts, but log alerts provide advanced querying and greater sophistication. Log alert queries compare a datetime to the present by using the `now` operator and going back one hour. (Container insights stores all dates in Coordinated Universal Time [UTC] format.)
+To alert for high CPU or memory utilization, or low free disk space on cluster nodes, use the queries that are provided to create a metric alert or a metric measurement alert. Metric alerts have lower latency than log search alerts, but log search alerts provide advanced querying and greater sophistication. Log search alert queries compare a datetime to the present by using the `now` operator and going back one hour. (Container insights stores all dates in Coordinated Universal Time [UTC] format.)
> [!IMPORTANT] > Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Before you create alert rules, see the "Alert rules" section in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-If you aren't familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about alerts that use log queries, see [Log alerts in Azure Monitor](../alerts/alerts-unified-log.md). For more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md).
+If you aren't familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about alerts that use log queries, see [Log search alerts in Azure Monitor](../alerts/alerts-types.md#log-alerts). For more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md).
## Log query measurements
-[Log alerts](../alerts/alerts-unified-log.md) can measure two different things, which can be used to monitor virtual machines in different scenarios:
+[Log search alerts](../alerts/alerts-types.md#log-alerts) can measure two different things, which can be used to monitor virtual machines in different scenarios:
-- [Result count](../alerts/alerts-unified-log.md#result-count): Counts the number of rows returned by the query and can be used to work with events such as Windows event logs, Syslog, and application exceptions.-- [Calculation of a value](../alerts/alerts-unified-log.md#calculation-of-a-value): Makes a calculation based on a numeric column and can be used to include any number of resources. An example is CPU percentage.
+- [Result count](../alerts/alerts-types.md#log-alerts): Counts the number of rows returned by the query and can be used to work with events such as Windows event logs, Syslog, and application exceptions.
+- [Calculation of a value](../alerts/alerts-types.md#log-alerts): Makes a calculation based on a numeric column and can be used to include any number of resources. An example is CPU percentage.
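To illustrate the difference, a calculation-of-a-value condition aggregates a numeric column from the query, while a result-count condition simply counts the returned rows. A sketch of the calculation case, assuming the scheduled query rules schema (the query, column names, and threshold are illustrative and not taken from the article's own alert queries):

```json
{
  "criteria": {
    "allOf": [
      {
        "query": "Perf | where ObjectName == 'K8SNode' and CounterName == 'cpuUsageNanoCores' | summarize AvgCpu = avg(CounterValue) by bin(TimeGenerated, 5m)",
        "metricMeasureColumn": "AvgCpu",
        "timeAggregation": "Average",
        "operator": "GreaterThan",
        "threshold": 800000000
      }
    ]
  }
}
```

A result-count condition would instead omit `metricMeasureColumn` and use a `timeAggregation` of `Count`.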
### Target resources and dimensions
To create resource-centric alerts at scale for a subscription or resource group,
You might also decide not to split when you want a condition on multiple resources in the scope. For example, you might want to create an alert if at least five machines in the resource group scope have CPU usage over 80%. You might want to see a list of the alerts by affected computer. You can use a custom workbook that uses a custom [resource graph](../../governance/resource-graph/overview.md) to provide this view. Use the following query to display alerts, and use the data source **Azure Resource Graph** in the workbook.
-## Create a log query alert rule
-To create a log query alert rule by using the portal, see [this example of a log query alert](../alerts/tutorial-log-alert.md), which provides a complete walkthrough. You can use these same processes to create alert rules for AKS clusters by using queries similar to the ones in this article.
+## Create a log search alert rule
+To create a log search alert rule by using the portal, see [this example of a log search alert](../alerts/tutorial-log-alert.md), which provides a complete walkthrough. You can use these same processes to create alert rules for AKS clusters by using queries similar to the ones in this article.
-To create a query alert rule by using an Azure Resource Manager (ARM) template, see [Resource Manager template samples for log alert rules in Azure Monitor](../alerts/resource-manager-alerts-log.md). You can use these same processes to create ARM templates for the log queries in this article.
+To create a query alert rule by using an Azure Resource Manager (ARM) template, see [Resource Manager template samples for log search alert rules in Azure Monitor](../alerts/resource-manager-alerts-log.md). You can use these same processes to create ARM templates for the log queries in this article.
## Resource utilization
azure-monitor Container Insights Metric Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md
Source code for the recommended alerts can be found in [GitHub](https://aka.ms/a
| Completed job count | Completed job count | Calculates number of jobs completed more than six hours ago. | 0 | > [!NOTE]
-> The recommended alert rules in the Azure portal also include a log alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule isn't included with the Prometheus alert rules.
+> The recommended alert rules in the Azure portal also include a log search alert rule called *Daily Data Cap Breach*. This rule alerts when the total data ingestion to your Log Analytics workspace exceeds the [designated quota](../logs/daily-cap.md). This alert rule isn't included with the Prometheus alert rules.
>
-> You can create this rule on your own by creating a [log alert rule](../alerts/alerts-types.md#log-alerts) that uses the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`.
+> You can create this rule on your own by creating a [log search alert rule](../alerts/alerts-types.md#log-alerts) that uses the query `_LogOperation | where Operation == "Data collection Status" | where Detail contains "OverQuota"`.
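> If you do create it yourself, the condition for such a rule might look roughly like the following sketch, built around the query above (assuming the scheduled query rules schema; frequency, window, and severity are placeholders):
>
> ```json
> {
>   "criteria": {
>     "allOf": [
>       {
>         "query": "_LogOperation | where Operation == 'Data collection Status' | where Detail contains 'OverQuota'",
>         "timeAggregation": "Count",
>         "operator": "GreaterThan",
>         "threshold": 0
>       }
>     ]
>   },
>   "evaluationFrequency": "PT5M",
>   "windowSize": "PT5M",
>   "severity": 1
> }
> ```
>
> Any row returned by the query within the evaluation window indicates the daily cap was reached, so a count greater than zero fires the alert.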
Common properties across all these alert rules include:
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
No. Container insights doesn't support collection of Kubernetes audit logs.
**Does Container Insights support pod sandboxing?** Yes, Container Insights supports pod sandboxing through support for Kata Containers. See [Pod Sandboxing (preview) with Azure Kubernetes Service (AKS)](../../aks/use-pod-sandboxing.md).
+**Is it possible for a single AKS cluster to use multiple Log Analytics workspaces in Container Insights?**
+No. Container insights supports only one Log Analytics workspace for each AKS cluster.
+ ## Next steps - See [Enable monitoring for Kubernetes clusters](kubernetes-monitoring-enable.md) to enable Managed Prometheus and Container insights on your cluster.
azure-monitor Container Insights Prometheus Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-logs.md
# Send Prometheus metrics to Log Analytics workspace with Container insights
-This article describes how to send Prometheus metrics from your Kubernetes cluster monitored by Container insights to a Log Analytics workspace. Before you perform this configuration, you should first ensure that you're [scraping Prometheus metrics from your cluster using Azure Monitor managed service for Prometheus](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration), which is the recommended method for monitoring your clusters. Use the configuration described in this article only if you also want to send this same data to a Log Analytics workspace where you can analyze it using [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log-query.md).
+This article describes how to send Prometheus metrics from your Kubernetes cluster monitored by Container insights to a Log Analytics workspace. Before you perform this configuration, you should first ensure that you're [scraping Prometheus metrics from your cluster using Azure Monitor managed service for Prometheus](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration), which is the recommended method for monitoring your clusters. Use the configuration described in this article only if you also want to send this same data to a Log Analytics workspace where you can analyze it using [log queries](../logs/log-query-overview.md) and [log search alerts](../alerts/alerts-log-query.md).
This configuration requires configuring the *monitoring addon* for the Azure Monitor agent, which is the same one used by Container insights to send data to a Log Analytics workspace. It requires exposing the Prometheus metrics endpoint through your exporters or pods and then configuring the monitoring addon for the Azure Monitor agent used by Container insights, as shown in the following diagram.
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
Title: Troubleshoot Container insights | Microsoft Docs description: This article describes how you can troubleshoot and resolve issues with Container insights. Previously updated : 05/24/2022 Last updated : 02/15/2024
azure-monitor Monitor Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md
The following table describes the different types of custom alert rules that you
| Prometheus alerts | [Prometheus alerts](../alerts/prometheus-alerts.md) are written in Prometheus Query Language (PromQL) and applied to Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). Recommended alerts already include the most common Prometheus alerts, and you can [create additional alert rules](../essentials/prometheus-rule-groups.md) as required. |
-| Log alert rules | Use log alert rules to generate an alert from the results of a log query. For more information, see [How to create log alerts from Container Insights](container-insights-log-alerts.md) and [How to query logs from Container Insights](container-insights-log-query.md). |
+| Log search alert rules | Use log search alert rules to generate an alert from the results of a log query. For more information, see [How to create log search alerts from Container Insights](container-insights-log-alerts.md) and [How to query logs from Container Insights](container-insights-log-query.md). |
#### Recommended alerts Start with a set of recommended Prometheus alerts from [Metric alert rules in Container insights (preview)](container-insights-metric-alerts.md#prometheus-alert-rules) which include the most common alerting conditions for a Kubernetes cluster. You can add more alert rules later as you identify additional alerting conditions.
azure-monitor Cost Estimate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-estimate.md
This section includes charges for alert rules.
| Category | Description | |:|:| | Metric Signals Monitored | Number of [metrics alert rules](alerts/alerts-types.md#metric-alerts) and their time series. |
-| Log Signals Monitored | Number of [log alert rules](alerts/alerts-types.md#log-alerts) and their frequency. |
+| Log Signals Monitored | Number of [log search alert rules](alerts/alerts-types.md#log-alerts) and their frequency. |
## ITSM connector - ticket creation/update This section includes charges for ITSM events, which are sent in response to alerts being triggered.
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
Several other features don't have a direct cost, but you instead pay for the ing
| Platform Logs | Processing of [diagnostic and auditing information](essentials/resource-logs.md) is charged for [certain services](essentials/resource-logs-categories.md#costs) when sent to destinations other than a Log Analytics workspace. There's no direct charge when this data is sent to a Log Analytics workspace, but there is a charge for the workspace data ingestion and collection. | | Metrics | There is no charge for [standard metrics](essentials/metrics-supported.md) collected from Azure resources. There is a cost for collecting [custom metrics](essentials/metrics-custom-overview.md) and for retrieving metrics from the [REST API](essentials/rest-api-walkthrough.md#retrieve-metric-values). | | Prometheus Metrics | Pricing for [Azure Monitor managed service for Prometheus](essentials/prometheus-metrics-overview.md) is based on [data samples ingested](containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana) and [query samples processed](essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace). Data is retained for 18 months at no extra charge. |
-| Alerts | Alerts are charged based on the type and number of [signals](alerts/alerts-overview.md) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log alerts](alerts/alerts-unified-log.md) configured for [at scale monitoring](alerts/alerts-unified-log.md#split-by-alert-dimensions), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
+| Alerts | Alerts are charged based on the type and number of [signals](alerts/alerts-overview.md) used by the alert rule, its frequency, and the type of [notification](alerts/action-groups.md) used in response. For [log search alerts](alerts/alerts-types.md#log-alerts) configured for [at scale monitoring](alerts/alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
| Web tests | There is a cost for [standard web tests](app/availability-standard-tests.md) and [multi-step web tests](app/availability-multistep.md) in Application Insights. Multi-step web tests have been deprecated.
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
Logs in Azure Monitor are stored in a Log Analytics workspace that's based on [A
> >Azure Monitor Logs is a log data platform that collects Activity logs and resource logs along with other monitoring data to provide deep analysis across your entire set of resources.
- You can work with [log queries](logs/log-query-overview.md) interactively with [Log Analytics](logs/log-query-overview.md) in the Azure portal. You can also add the results to an [Azure dashboard](app/overview-dashboard.md#create-custom-kpi-dashboards-using-application-insights) for visualization in combination with other data. You can create [log alerts](alerts/alerts-log.md), which will trigger an alert based on the results of a schedule query.
+ You can work with [log queries](logs/log-query-overview.md) interactively with [Log Analytics](logs/log-query-overview.md) in the Azure portal. You can also add the results to an [Azure dashboard](app/overview-dashboard.md#create-custom-kpi-dashboards-using-application-insights) for visualization in combination with other data. You can create [log search alerts](alerts/alerts-log.md), which will trigger an alert based on the results of a scheduled query.
Read more about Azure Monitor logs including their sources of data in [Logs in Azure Monitor](logs/data-platform-logs.md).
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
You can also access activity log events by using the following methods:
- Correlate activity log data with other monitoring data collected by Azure Monitor. - Consolidate log entries from multiple Azure subscriptions and tenants into one location for analysis together. - Use log queries to perform complex analysis and gain deep insights on activity log entries.-- Use log alerts with Activity entries for more complex alerting logic.
+- Use log search alerts with activity log entries for more complex alerting logic.
- Store activity log entries for longer than the activity log retention period. - Incur no data ingestion or retention charges for activity log data stored in a Log Analytics workspace. - The default retention period in Log Analytics is 90 days
azure-monitor Collect Custom Metrics Linux Telegraf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-linux-telegraf.md
Last updated 08/01/2023
# Collect custom metrics for a Linux VM with the InfluxData Telegraf agent
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article explains how to deploy and configure the [InfluxData](https://www.influxdata.com/) Telegraf agent on a Linux virtual machine to send metrics to Azure Monitor. > [!NOTE]
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
The **Activity log** menu item lets you view entries in the [activity log](../es
The **Alerts** page shows you any recent alerts that were fired for the resource. Alerts proactively notify you when important conditions are found in your monitoring data and can use data from either Metrics or Logs.
-To learn how to create alert rules and view alerts, see [Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md).
+To learn how to create alert rules and view alerts, see [Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md) or [Create a log search alert for an Azure resource](../alerts/tutorial-log-alert.md).
:::image type="content" source="media/monitor-azure-resource/alerts-view.png" lightbox="media/monitor-azure-resource/alerts-view.png" alt-text="Screenshot that shows the Alerts page.":::
azure-monitor Platform Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/platform-logs-overview.md
Resource logs must have a diagnostic setting to be viewed. Create a [diagnostic
| Destination | Description | |:|:|
-| Log Analytics workspace | Analyze the logs of all your Azure resources together and take advantage of all the features available to [Azure Monitor Logs](../logs/data-platform-logs.md) including [log queries](../logs/log-query-overview.md) and [log alerts](../alerts/alerts-log.md). Pin the results of a log query to an Azure dashboard or include it in a workbook as part of an interactive report. |
+| Log Analytics workspace | Analyze the logs of all your Azure resources together and take advantage of all the features available to [Azure Monitor Logs](../logs/data-platform-logs.md) including [log queries](../logs/log-query-overview.md) and [log search alerts](../alerts/alerts-log.md). Pin the results of a log query to an Azure dashboard or include it in a workbook as part of an interactive report. |
| Event hub | Send platform log data outside of Azure, for example, to a third-party SIEM or custom telemetry platform via Event hubs | | Azure Storage | Archive the logs to Azure storage for audit or backup. | | [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Partner integrations are specialized integrations between Azure Monitor and non-Microsoft monitoring platforms. Partner integrations are especially useful when you're already using one of the supported partners. |
azure-monitor Prometheus Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-get-started.md
+
+ Title: Get started with Azure Monitor Managed Service for Prometheus
+description: Get started with Azure Monitor managed service for Prometheus, which provides a Prometheus-compatible interface for storing and retrieving metric data.
++++ Last updated : 02/15/2024++
+# Get Started with Azure Monitor managed service for Prometheus
+
+The only requirement to enable Azure Monitor managed service for Prometheus is to create an [Azure Monitor workspace](azure-monitor-workspace-overview.md), which is where Prometheus metrics are stored. Once this workspace is created, you can onboard services that collect Prometheus metrics.
+
+- To collect Prometheus metrics from your Kubernetes cluster, see [Enable monitoring for Kubernetes clusters](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
+- To configure remote-write to collect data from your self-managed Prometheus server, see [Azure Monitor managed service for Prometheus remote write](./remote-write-prometheus.md).
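+
+If you prefer the command line, the following is a minimal hedged sketch of creating an Azure Monitor workspace with the Azure CLI; the resource group, workspace name, and region are placeholders.
+
+```azurecli
+# Hedged sketch: create an Azure Monitor workspace to store Prometheus metrics.
+# The name, resource group, and location are placeholders.
+az monitor account create \
+  --name my-azure-monitor-workspace \
+  --resource-group my-resource-group \
+  --location eastus
+```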
+
+## Data sources
+
+Azure Monitor managed service for Prometheus can currently collect data from any of the following data sources:
+
+- Azure Kubernetes Service (AKS)
+- Azure Arc-enabled Kubernetes
+- Any server or Kubernetes cluster running self-managed Prometheus using [remote-write](./remote-write-prometheus.md).
+
+## Next steps
+
+- [Learn more about Azure Monitor Workspace](./azure-monitor-workspace-overview.md)
+- [Enable Azure Monitor managed service for Prometheus on your Kubernetes clusters](../containers/kubernetes-monitoring-enable.md).
+- [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).
+- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
Azure Monitor managed service for Prometheus allows you to collect and analyze m
Azure Monitor managed service for Prometheus can currently collect data from any of the following data sources: - Azure Kubernetes service (AKS)-- Any Kubernetes cluster running self-managed Prometheus using [remote-write](https://aka.ms/azureprometheus-promio-prw).-- Azure Arc-enabled Kubernetes
+- Azure Arc-enabled Kubernetes
+- Any server or Kubernetes cluster running self-managed Prometheus using [remote-write](./remote-write-prometheus.md).
## Enable

The only requirement to enable Azure Monitor managed service for Prometheus is to create an [Azure Monitor workspace](azure-monitor-workspace-overview.md), which is where Prometheus metrics are stored. Once this workspace is created, you can onboard services that collect Prometheus metrics.

- To collect Prometheus metrics from your Kubernetes cluster, see [Enable monitoring for Kubernetes clusters](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
-- To configure remote-write to collect data from your self-managed Prometheus server, see [Azure Monitor managed service for Prometheus remote write - managed identity](prometheus-remote-write-managed-identity.md).
+- To configure remote-write to collect data from your self-managed Prometheus server, see [Azure Monitor managed service for Prometheus remote write](./remote-write-prometheus.md).
## Grafana integration

The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace) so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.
azure-monitor Remote Write Prometheus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/remote-write-prometheus.md
+
+ Title: Remote-write Prometheus metrics to Azure Monitor managed service for Prometheus
+description: Describes how customers can configure remote-write to send data from self-managed Prometheus running in any environment to Azure Monitor managed service for Prometheus
++ Last updated : 02/12/2024++
+# Prometheus Remote-Write to Azure Monitor Workspace
+
+Azure Monitor managed service for Prometheus is intended to be a replacement for self-managed Prometheus, so you don't need to manage a Prometheus server in your Kubernetes clusters. You can also use the managed service to centralize data from self-managed Prometheus clusters for long-term retention and to create a centralized view across your clusters.
+If you're using self-managed Prometheus, you can use [remote_write](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) to send data from your self-managed Prometheus server into the Azure managed service.
+
+To send data from self-managed Prometheus running in your environment to an Azure Monitor workspace, follow the steps in this article.
+
+## Choose the right solution for remote-write
+
+Based on where your self-managed Prometheus is running, choose from the following options:
+
+- **Self-managed Prometheus running on Azure Kubernetes Service (AKS) or an Azure VM or virtual machine scale set**: Follow the steps in this article to configure remote-write in Prometheus by using user-assigned managed identity authentication.
+- **Self-managed Prometheus running on non-Azure environments**: Azure Monitor managed service for Prometheus has a managed offering for supported [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). However, if you want to send data from self-managed Prometheus running on non-Azure or on-premises environments, consider the following options:
+ - Onboard supported Kubernetes clusters or VMs and virtual machine scale sets to [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md) or [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), which allows you to manage and configure them in Azure. Then follow the steps in this article to configure remote-write in Prometheus by using user-assigned managed identity authentication.
+ - For all other scenarios, follow the steps in this article to configure remote-write in Prometheus by using a Microsoft Entra application.
+
+> [!NOTE]
+> Currently, user-assigned managed identity and Microsoft Entra application are the authentication methods supported for remote-writing to an Azure Monitor workspace. If you're using other authentication methods and running self-managed Prometheus on **Kubernetes**, Azure Monitor provides a reverse proxy container that abstracts ingestion and authentication for Prometheus remote-write metrics. To use this reverse proxy container, see [Remote-write from Kubernetes to Azure Monitor managed service for Prometheus](../containers/prometheus-remote-write.md).
+
+## Prerequisites
+
+- You must have [self-managed Prometheus](https://prometheus.io/) running in your environment. Supported versions are:
+ - For managed identity, versions greater than v2.45.
+ - For Microsoft Entra, versions greater than v2.48.
+- Azure Monitor managed service for Prometheus stores metrics in [Azure Monitor workspace](./azure-monitor-workspace-overview.md). To proceed, you need to have an Azure Monitor Workspace instance. [Create a new workspace](./azure-monitor-workspace-manage.md#create-an-azure-monitor-workspace) if you don't already have one.
+
+## Configure Remote-Write to send data to Azure Monitor Workspace
+
+You can enable remote-write by configuring one or more remote-write sections in the Prometheus configuration file. For details about the Prometheus remote-write setting, see the [Prometheus remote-write documentation](https://prometheus.io/docs/practices/remote_write/).
+
+The **remote_write** section in the Prometheus configuration file defines one or more remote-write configurations, each of which has a mandatory `url` parameter and several optional parameters. The `url` parameter specifies the HTTP URL of the remote endpoint that implements the Prometheus remote-write protocol. In this case, the URL is the metrics ingestion endpoint for your Azure Monitor workspace. The optional parameters can be used to customize the behavior of the remote-write client, such as authentication, compression, retry, queue, or relabeling settings. For a full list of the available parameters and their meanings, see the [Prometheus remote_write configuration reference](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write).
+
+To send data to your Azure Monitor workspace, you need the following information:
+
+- **Remote-write URL**: This is the metrics ingestion endpoint of the Azure Monitor workspace. To find it, go to the **Overview** page of your Azure Monitor workspace in the Azure portal, and look for the **Metrics ingestion endpoint** property.
+
+ :::image type="content" source="media/azure-monitor-workspace-overview/remote-write-ingestion-endpoint.png" lightbox="media/azure-monitor-workspace-overview/remote-write-ingestion-endpoint.png" alt-text="Screenshot of Azure Monitor workspaces menu and ingestion endpoint.":::
+
+- **Authentication settings**: Currently, **user-assigned managed identity** and **Microsoft Entra application** are the authentication methods supported for remote-writing to an Azure Monitor workspace. For Microsoft Entra applications, client secrets have an expiration date, and it's your responsibility to keep them valid.
+
+### User-assigned managed identity
+
+1. Create a managed identity and then add a role assignment for the managed identity to access your environment. For details, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+1. Assign the **Monitoring Metrics Publisher** role on the workspace data collection rule to the managed identity. The managed identity must have this role on the data collection rule that's associated with your Azure Monitor workspace.
+ 1. On the resource menu for your Azure Monitor workspace, select **Overview**. Select the link for **Data collection rule**:
+
+ :::image type="content" source="media/azure-monitor-workspace-overview/remote-write-dcr.png" lightbox="media/azure-monitor-workspace-overview/remote-write-dcr.png" alt-text="Screenshot of how to navigate to the data collection rule.":::
+
+ 1. On the resource menu for the data collection rule, select **Access control (IAM)**. Select **Add**, and then select **Add role assignment**.
+ 1. Select the **Monitoring Metrics Publisher** role, and then select **Next**.
+ 1. Select **Managed identity**, and then choose **Select members**. Select the subscription that contains the user-assigned identity, and then select **User-assigned managed identity**. Select the user-assigned identity that you want to use, and then choose **Select**.
+ 1. To complete the role assignment, select **Review + assign**.
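+
+You can also script these portal steps. The following is a hedged Azure CLI sketch: the identity name, resource group, and data collection rule resource ID are placeholders.
+
+```azurecli
+# Hedged sketch: create a user-assigned managed identity and grant it the
+# Monitoring Metrics Publisher role on the workspace's data collection rule.
+az identity create --name prom-remote-write-identity --resource-group my-resource-group
+
+PRINCIPAL_ID=$(az identity show --name prom-remote-write-identity --resource-group my-resource-group --query principalId -o tsv)
+
+az role assignment create \
+  --assignee-object-id "$PRINCIPAL_ID" \
+  --assignee-principal-type ServicePrincipal \
+  --role "Monitoring Metrics Publisher" \
+  --scope "<resource ID of the data collection rule>"
+```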
+
+### Microsoft Entra application
+
+The process to set up Prometheus remote write for an application by using Microsoft Entra authentication involves completing the following tasks:
+
+1. Complete the steps to [register an application with Microsoft Entra ID](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) and create a service principal.
+
+1. Get the client ID and secret ID of the Microsoft Entra application. In the Azure portal, go to the **Microsoft Entra ID** menu and select **App registrations**.
+1. In the list of applications, copy the value for **Application (client) ID** for the registered application.
++
+1. Open the **Certificates & secrets** page of the application, and select **+ New client secret** to create a new client secret. Copy the value of the secret and store it securely.
+
+> [!WARNING]
+> Client secrets have an expiration date. It's the responsibility of the user to keep them valid.
+
+1. Assign the **Monitoring Metrics Publisher** role to the application on the data collection rule that's associated with your Azure Monitor workspace.
+1. On the resource menu for your Azure Monitor workspace, select **Overview**. For **Data collection rule**, select the link.
+
+ :::image type="content" source="../containers/media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png" alt-text="Screenshot that shows the data collection rule that's used by Azure Monitor workspace." lightbox="../containers/media/prometheus-remote-write-managed-identity/azure-monitor-account-data-collection-rule.png":::
+
+1. On the resource menu for the data collection rule, select **Access control (IAM)**.
+
+1. Select **Add**, and then select **Add role assignment**.
+
+ :::image type="content" source="../containers/media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png" alt-text="Screenshot that shows adding a role assignment on Access control pages." lightbox="../containers/media/prometheus-remote-write-managed-identity/data-collection-rule-add-role-assignment.png":::
+
+1. Select the **Monitoring Metrics Publisher** role, and then select **Next**.
+
+ :::image type="content" source="../containers/media/prometheus-remote-write-managed-identity/add-role-assignment.png" alt-text="Screenshot that shows a list of role assignments." lightbox="../containers/media/prometheus-remote-write-managed-identity/add-role-assignment.png":::
+
+1. Select **User, group, or service principal**, and then choose **Select members**. Select the application that you created, and then choose **Select**.
+
+ :::image type="content" source="../containers/media/prometheus-remote-write-active-directory/select-application.png" alt-text="Screenshot that shows selecting the application." lightbox="../containers/media/prometheus-remote-write-active-directory/select-application.png":::
+
+1. To complete the role assignment, select **Review + assign**.
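+
+If you'd rather script the application setup, the following hedged Azure CLI sketch creates the app registration, service principal, and role assignment in one step. The display name and data collection rule resource ID are placeholders, and the command prints a client secret that you must store securely.
+
+```azurecli
+# Hedged sketch: create an app registration plus service principal and assign the
+# Monitoring Metrics Publisher role on the data collection rule in one command.
+# The output includes appId (client ID), password (client secret), and tenant ID.
+az ad sp create-for-rbac \
+  --name prom-remote-write-app \
+  --role "Monitoring Metrics Publisher" \
+  --scopes "<resource ID of the data collection rule>"
+```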
+
+## Configure remote-write
+
+Now that you have the required information, configure the following section in the prometheus.yml config file of your self-managed Prometheus instance to send data to your Azure Monitor workspace.
+
+```yaml
+remote_write:
+  # AzureAD configuration.
+  # The Azure Cloud. Options are 'AzurePublic', 'AzureChina', or 'AzureGovernment'.
+  - url: "<<Metrics Ingestion Endpoint for your Azure Monitor Workspace>>"
+    azuread:
+      cloud: 'AzurePublic'
+      managed_identity:
+        client_id: "<<client-id of the managed identity>>"
+      oauth:
+        client_id: "<<client-id of the app>>"
+        client_secret: "<<client secret>>"
+        tenant_id: "<<tenant id of Azure subscription>>"
+```
+
+Replace the values in the YAML with the values that you copied in the previous steps. If you're using managed identity authentication, omit the `oauth` section of the YAML. Similarly, if you're using a Microsoft Entra application for authentication, omit the `managed_identity` section.
+
+After editing the configuration file, you need to reload or restart Prometheus to apply the changes.
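+
+For example, if your Prometheus server was started with the `--web.enable-lifecycle` flag, you can trigger a configuration reload without a restart (this assumes the default port 9090):
+
+```bash
+# Reload the Prometheus configuration over the lifecycle API.
+# Requires Prometheus to be started with --web.enable-lifecycle; otherwise restart the process.
+curl -X POST http://localhost:9090/-/reload
+```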
+
+## Verify that remote-write is set up correctly
+
+Use the following methods to verify that Prometheus data is being sent into your Azure Monitor workspace.
+
+### PromQL queries
+
+Use PromQL queries in Grafana and verify that the results return the expected data. To configure Grafana, see [Get Grafana set up with managed Prometheus](../essentials/prometheus-grafana.md).
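+
+If you want a quick check outside Grafana, the hedged sketch below queries the workspace's Prometheus-compatible query endpoint directly. The query endpoint value (shown on the workspace overview page), the token audience, and the `up` query are assumptions for illustration.
+
+```bash
+# Hedged sketch: run a PromQL query against the Azure Monitor workspace query endpoint.
+# <query-endpoint> is the workspace's query endpoint; the token audience is an assumption.
+TOKEN=$(az account get-access-token --resource https://prometheus.monitor.azure.com --query accessToken -o tsv)
+curl -s -H "Authorization: Bearer $TOKEN" "<query-endpoint>/api/v1/query?query=up"
+```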
+
+### Prometheus explorer in Azure Monitor Workspace
+
+Go to your Azure Monitor workspace in the Azure portal and select **Prometheus explorer** to query the metrics that you're expecting from the self-managed Prometheus environment.
+
+## Troubleshoot remote write
+
+You can look at a few remote-write metrics that can help you understand possible issues. Lists of these metrics can be found [here](https://github.com/prometheus/prometheus/blob/v2.26.0/storage/remote/queue_manager.go#L76-L223) and [here](https://github.com/prometheus/prometheus/blob/v2.26.0/tsdb/wal/watcher.go#L88-L136).
+
+For example, a steady high rate for *prometheus_remote_storage_retried_samples_total* could indicate problems with the remote-write setup. You can contact support if such issues arise.
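+
+As a quick local check, you can query your self-managed Prometheus server's own API for the retry rate (a hedged sketch; the 5-minute window is arbitrary):
+
+```bash
+# Query the local Prometheus server for the rate of retried remote-write samples.
+curl -s 'http://localhost:9090/api/v1/query' \
+  --data-urlencode 'query=rate(prometheus_remote_storage_retried_samples_total[5m])'
+```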
+
+### Hitting your ingestion quota limit
+
+With remote-write, you typically get started by using the remote-write endpoint shown on the Azure Monitor workspace overview page. Behind the scenes, this endpoint uses a system data collection rule (DCR) and a system data collection endpoint (DCE). These resources have an ingestion limit, which is covered in the [Azure Monitor service limits](../service-limits.md#prometheus-metrics) document. You might hit these limits if you set up remote-write for several clusters that all send data into the same endpoint in the same Azure Monitor workspace. If so, you can [create additional DCRs and DCEs](https://aka.ms/prometheus/remotewrite/dcrartifacts) and use them to spread the ingestion load across a few ingestion endpoints.
+
+The INGESTION-URL uses the following format:
+https\://\<**Metrics-Ingestion-URL**>/dataCollectionRules/\<**DCR-Immutable-ID**>/streams/Microsoft-PrometheusMetrics/api/v1/write?api-version=2021-11-01-preview
+
+**Metrics-Ingestion-URL**: Can be obtained by viewing the DCE JSON body with API version 2021-09-01-preview or newer. See the screenshot below for reference.
++
+**DCR-Immutable-ID**: Can be obtained by viewing the DCR JSON body or by running the following command in the Azure CLI:
+
+```azurecli
+az monitor data-collection rule show --name "myCollectionRule" --resource-group "myResourceGroup"
+```
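+
+For example, to pull just the immutable ID, and to read the metrics ingestion endpoint from a custom DCE, you might use something like the following hedged sketch; the `metricsIngestion.endpoint` property path is an assumption and can vary by API version.
+
+```azurecli
+# Hedged sketch: extract the DCR immutable ID and the DCE metrics ingestion endpoint.
+az monitor data-collection rule show --name "myCollectionRule" --resource-group "myResourceGroup" --query immutableId -o tsv
+
+# The property path below is an assumption; inspect the full JSON output if it differs.
+az monitor data-collection endpoint show --name "myCollectionEndpoint" --resource-group "myResourceGroup" --query metricsIngestion.endpoint -o tsv
+```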
+
+## Next steps
+
+- [Learn more about Azure Monitor managed service for Prometheus](./prometheus-metrics-overview.md).
+- [Learn more about Azure Monitor reverse proxy side car for remote-write from self-managed Prometheus running on Kubernetes](../containers/prometheus-remote-write.md)
azure-monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs.md
Azure resource logs are [platform logs](../essentials/platform-logs-overview.md)
- Correlate resource log data with other monitoring data collected by Azure Monitor. - Consolidate log entries from multiple Azure resources, subscriptions, and tenants into one location for analysis together. - Use log queries to perform complex analysis and gain deep insights on log data.-- Use log alerts with complex alerting logic.
+- Use log search alerts with complex alerting logic.
[Create a diagnostic setting](../essentials/diagnostic-settings.md) to send resource logs to a Log Analytics workspace. This data is stored in tables as described in [Structure of Azure Monitor Logs](../logs/data-platform-logs.md). The tables used by resource logs depend on what type of collection the resource is using:
azure-monitor Tutorial Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-resource-logs.md
Browse through the available queries. Identify one to run and select **Run**. Th
:::image type="content" source="media/tutorial-resource-logs/query-results.png" lightbox="media/tutorial-resource-logs/query-results.png"alt-text="Screenshot that shows the results of a sample log query."::: ## Next steps
-Now that you're collecting resource logs, create a log query alert to be proactively notified when interesting data is identified in your log data.
+Now that you're collecting resource logs, create a log search alert to be proactively notified when interesting data is identified in your log data.
> [!div class="nextstepaction"]
-> [Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md)
+> [Create a log search alert for an Azure resource](../alerts/tutorial-log-alert.md)
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
An unexpected increase in any of these factors can result in increased charges f
To avoid unexpected bills, you should be proactively notified anytime you experience excessive usage. Notification allows you to address any potential anomalies before the end of your billing period.
-The following example is a [log alert rule](../alerts/alerts-unified-log.md) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** setting to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this option will result in a higher charge for the alert rule.
+The following example is a [log search alert rule](../alerts/alerts-types.md#log-alerts) that sends an alert if the billable data volume ingested in the last 24 hours was greater than 50 GB. Modify the **Alert Logic** setting to use a different threshold based on expected usage in your environment. You can also increase the frequency to check usage multiple times every day, but this option will result in a higher charge for the alert rule.
| Setting | Value | |:|:|
azure-monitor Cross Workspace Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cross-workspace-query.md
If you manage subscriptions in other Microsoft Entra tenants through [Azure Ligh
* Cross-resource and cross-service queries don't support parameterized functions and functions whose definition includes other cross-workspace or cross-service expressions, including `adx()`, `arg()`, `resource()`, `workspace()`, and `app()`.
* You can include up to 100 Log Analytics workspaces or classic Application Insights resources in a single query.
* Querying across a large number of resources can substantially slow down the query.
-* Cross-resource queries in log alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](../alerts/alerts-log-api-switch.md).
+* Cross-resource queries in log search alerts are only supported in the current [scheduledQueryRules API](/rest/api/monitor/scheduledqueryrule-2018-04-16/scheduled-query-rules). If you're using the legacy Log Analytics Alerts API, you'll need to [switch to the current API](../alerts/alerts-log-api-switch.md).
* References to a cross resource, such as another workspace, should be explicit and can't be parameterized. ## Query across workspaces, applications, and resources using functions
applicationsScoping
```

>[!NOTE]
-> This method can't be used with log alerts because the access validation of the alert rule resources, including workspaces and applications, is performed at alert creation time. Adding new resources to the function after the alert creation isn't supported. If you prefer to use a function for resource scoping in log alerts, you must edit the alert rule in the portal or with an Azure Resource Manager template to update the scoped resources. Alternatively, you can include the list of resources in the log alert query.
+> This method can't be used with log search alerts because the access validation of the alert rule resources, including workspaces and applications, is performed at alert creation time. Adding new resources to the function after the alert creation isn't supported. If you prefer to use a function for resource scoping in log search alerts, you must edit the alert rule in the portal or with an Azure Resource Manager template to update the scoped resources. Alternatively, you can include the list of resources in the log search alert query.
## Query across Log Analytics workspaces using workspace()
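
As a quick illustration of the `workspace()` pattern, the following hedged sketch runs a cross-workspace query ad hoc with the Azure CLI; the primary workspace GUID, the second workspace's name, and the Heartbeat query are placeholders for illustration.

```azurecli
# Hedged sketch: union the Heartbeat table of the current workspace with another
# workspace referenced through the workspace() function.
az monitor log-analytics query \
  --workspace "<primary-workspace-guid>" \
  --analytics-query "union Heartbeat, workspace('<other-workspace-name>').Heartbeat | summarize count() by TenantId"
```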
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
Key rotation has two modes:
All your data remains accessible after the key rotation operation. Data is always encrypted with the Account Encryption Key ("AEK"), which is encrypted with your new Key Encryption Key ("KEK") version in Key Vault.
-## Customer-managed key for saved queries and log alerts
+## Customer-managed key for saved queries and log search alerts
-The query language used in Log Analytics is expressive and can contain sensitive information in comments, or in the query syntax. Some organizations require that such information is kept protected under Customer-managed key policy and you need save your queries encrypted with your key. Azure Monitor enables you to store saved queries and log alerts encrypted with your key in your own Storage Account when linked to your workspace.
+The query language used in Log Analytics is expressive and can contain sensitive information in comments or in the query syntax. Some organizations require that such information is kept protected under a Customer-managed key policy, and you need to save your queries encrypted with your key. Azure Monitor enables you to store saved queries and log search alerts encrypted with your key in your own Storage Account when it's linked to your workspace.
## Customer-managed key for Workbooks
-With the considerations mentioned for [Customer-managed key for saved queries and log alerts](#customer-managed-key-for-saved-queries-and-log-alerts), Azure Monitor enables you to store Workbook queries encrypted with your key in your own Storage Account, when selecting **Save content to an Azure Storage Account** in Workbook 'Save' operation.
+With the considerations mentioned for [Customer-managed key for saved queries and log search alerts](#customer-managed-key-for-saved-queries-and-log-search-alerts), Azure Monitor enables you to store Workbook queries encrypted with your key in your own Storage Account, when selecting **Save content to an Azure Storage Account** in Workbook 'Save' operation.
<!-- convertborder later -->
:::image type="content" source="media/customer-managed-keys/cmk-workbook.png" lightbox="media/customer-managed-keys/cmk-workbook.png" alt-text="Screenshot of Workbook save." border="false":::

> [!NOTE]
> Queries remain encrypted with Microsoft key ("MMK") in the following scenarios regardless of Customer-managed key configuration: Azure dashboards, Azure Logic App, Azure Notebooks, and Automation Runbooks.
-When linking your Storage Account for saved queries, the service stores saved-queries and log alerts queries in your Storage Account. Having control on your Storage Account [encryption-at-rest policy](../../storage/common/customer-managed-keys-overview.md), you can protect saved queries and log alerts with Customer-managed key. You will, however, be responsible for the costs associated with that Storage Account.
+When linking your Storage Account for saved queries, the service stores saved queries and log search alert queries in your Storage Account. Because you control your Storage Account's [encryption-at-rest policy](../../storage/common/customer-managed-keys-overview.md), you can protect saved queries and log search alerts with a Customer-managed key. You will, however, be responsible for the costs associated with that Storage Account.
**Considerations before setting Customer-managed key for queries**

* You need to have "write" permissions on your workspace and Storage Account.
When linking your Storage Account for saved queries, the service stores saved-qu
* Saved queries in storage are considered service artifacts and their format may change.
* Linking a Storage Account for queries removes existing saved queries from your workspace. Copy saved queries that you need before this configuration. You can view your saved queries using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch).
* Query 'history' and 'pin to dashboard' aren't supported when linking Storage Account for queries.
-* You can link a single Storage Account to a workspace for both saved queries and log alerts queries.
-* Log alerts are saved in blob storage and Customer-managed key encryption can be configured at Storage Account creation, or later.
-* Fired log alerts won't contain search results or alert query. You can use [alert dimensions](../alerts/alerts-unified-log.md#split-by-alert-dimensions) to get context in the fired alerts.
+* You can link a single Storage Account to a workspace for both saved queries and log search alert queries.
+* Log search alerts are saved in blob storage and Customer-managed key encryption can be configured at Storage Account creation, or later.
+* Fired log search alerts won't contain search results or alert query. You can use [alert dimensions](../alerts/alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) to get context in the fired alerts.
**Configure BYOS for saved queries**
Content-type: application/json
After the configuration, any new *saved search* query will be saved in your storage.
-**Configure BYOS for log alerts queries**
+**Configure BYOS for log search alert queries**
-Link a Storage Account for *Alerts* to keep *log alerts* queries in your Storage Account.
+Link a Storage Account for *Alerts* to keep *log search alert* queries in your Storage Account.
# [Azure portal](#tab/portal)
azure-monitor Daily Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md
To configure the daily cap with Azure Resource Manager, set the `dailyQuota`, `d
## Alert when daily cap is reached When the daily cap is reached for a Log Analytics workspace, a banner is displayed in the Azure portal, and an event is written to the **Operations** table in the workspace. You should create an alert rule to proactively notify you when this occurs.
-To receive an alert when the daily cap is reached, create a [log alert rule](../alerts/alerts-unified-log.md) with the following details.
+To receive an alert when the daily cap is reached, create a [log search alert rule](../alerts/alerts-types.md#log-alerts) with the following details.
| Setting | Value | |:|:|
To create an alert when the daily cap is reached, create an [Activity log alert
## View the effect of the daily cap
-The following query can be used to track the data volumes that are subject to the daily cap for a Log Analytics workspace between daily cap resets. This accounts for the security data types that aren't included in the daily cap. In this example, the workspace's reset hour is 14:00. Change this value for your workspace.
+The following query can be used to track the data volumes that are subject to the daily cap for a Log Analytics workspace between daily cap resets. In this example, the workspace's reset hour is 14:00. Change `DailyCapResetHour` to match the reset hour of your workspace, which you can see on the Daily Cap configuration page.
```kusto
let DailyCapResetHour=14;
Usage
-| where DataType !in ("SecurityAlert", "SecurityBaseline", "SecurityBaselineSummary", "SecurityDetection", "SecurityEvent", "WindowsFirewall", "MaliciousIPCommunication", "LinuxAuditLog", "SysmonEvent", "ProtectionStatus", "WindowsEvent")
| where TimeGenerated > ago(32d)
| extend StartTime=datetime_add("hour",-1*DailyCapResetHour,StartTime)
| where StartTime > startofday(ago(31d))
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
The following table describes some of the ways that you can use Azure Monitor Lo
| Capability | Description | |:|:| | Analyze | Use [Log Analytics](./log-analytics-tutorial.md) in the Azure portal to write [log queries](./log-query-overview.md) and interactively analyze log data by using a powerful analysis engine. |
-| Alert | Configure a [log alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. |
+| Alert | Configure a [log search alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. |
| Visualize | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](./log-powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources.| | Get insights | Logs support [insights](../insights/insights-overview.md) that provide a customized monitoring experience for particular applications and services. | | Retrieve | Access log query results from:<ul><li>Command line via the [Azure CLI](/cli/azure/monitor/log-analytics) or [Azure PowerShell cmdlets](/powershell/module/az.operationalinsights).</li><li>Custom app via the [REST API](/rest/api/loganalytics/) or client library for [.NET](/dotnet/api/overview/azure/Monitor.Query-readme), [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery), [Java](/java/api/overview/azure/monitor-query-readme), [JavaScript](/javascript/api/overview/azure/monitor-query-readme), or [Python](/python/api/overview/azure/monitor-query-readme).</li></ul> |
This configuration will be different depending on the data source. For example:
Azure Monitor Logs stores the data that it collects in one or more [Log Analytics workspaces](./workspace-design.md). You must create at least one workspace to use Azure Monitor Logs. For a description of Log Analytics workspaces, see [Log Analytics workspace overview](log-analytics-workspace-overview.md). ## Log Analytics
-Log Analytics is a tool in the Azure portal. Use it to edit and run log queries and interactively analyze their results. You can then use those queries to support other features in Azure Monitor, such as log query alerts and workbooks. Access Log Analytics from the **Logs** option on the Azure Monitor menu or from most other services in the Azure portal.
+Log Analytics is a tool in the Azure portal. Use it to edit and run log queries and interactively analyze their results. You can then use those queries to support other features in Azure Monitor, such as log search alerts and workbooks. Access Log Analytics from the **Logs** option on the Azure Monitor menu or from most other services in the Azure portal.
For a description of Log Analytics, see [Overview of Log Analytics in Azure Monitor](./log-analytics-overview.md). To walk through using Log Analytics features to create a simple log query and analyze its results, see [Log Analytics tutorial](./log-analytics-tutorial.md).
azure-monitor Log Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-overview.md
Log Analytics is a tool in the Azure portal that's used to edit and run log quer
You might write a simple query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze them. Or you might write a more advanced query to perform statistical analysis and visualize the results in a chart to identify a particular trend.
-Whether you work with the results of your queries interactively or use them with other Azure Monitor features, such as log query alerts or workbooks, Log Analytics is the tool that you'll use to write and test them.
+Whether you work with the results of your queries interactively or use them with other Azure Monitor features, such as log search alerts or workbooks, Log Analytics is the tool that you'll use to write and test them.
> [!TIP] > This article describes Log Analytics and its features. If you want to jump right into a tutorial, see [Log Analytics tutorial](./log-analytics-tutorial.md).
The top bar has controls for working with a query in the query window.
| Time picker | Select the time range for the data available to the query. This action is overridden if you include a time filter in the query. See [Log query scope and time range in Azure Monitor Log Analytics](./scope.md). | | Save button | Save the query to a [query pack](./query-packs.md). Saved queries are available from: <ul><li> The **Other** section in the **Queries** dialog for the workspace</li><li>The **Other** section in the **Queries** tab in the [left sidebar](#left-sidebar) for the workspace</ul> | Share button | Copy a link to the query, the query text, or the query results to the clipboard. |
-| New alert rule button | Open the Create an alert rule page. Use this page to [create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=log) with an alert type of [log alert](../alerts/alerts-types.md#log-alerts). The page opens with the [Conditions tab](../alerts/alerts-create-new-alert-rule.md?tabs=log#set-the-alert-rule-conditions) selected, and your query is added to the **Search query** field. |
+| New alert rule button | Open the Create an alert rule page. Use this page to [create an alert rule](../alerts/alerts-create-new-alert-rule.md?tabs=log) with an alert type of [log search alert](../alerts/alerts-types.md#log-alerts). The page opens with the [Conditions tab](../alerts/alerts-create-new-alert-rule.md?tabs=log#set-the-alert-rule-conditions) selected, and your query is added to the **Search query** field. |
| Export button | Export the results of the query to a CSV file or the query to Power Query Formula Language format for use with Power BI. | | Pin to button | Pin the results of the query to an Azure dashboard or add them to an Azure workbook. | | Format query button | Arrange the selected text for readability. |
azure-monitor Log Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-query-overview.md
Azure Monitor Logs is based on Azure Data Explorer, and log queries are written
Areas in Azure Monitor where you'll use queries include: - [Log Analytics](../logs/log-analytics-overview.md): Use this primary tool in the Azure portal to edit log queries and interactively analyze their results. Even if you intend to use a log query elsewhere in Azure Monitor, you'll typically write and test it in Log Analytics before you copy it to its final location.-- [Log alert rules](../alerts/alerts-overview.md): Proactively identify issues from data in your workspace. Each alert rule is based on a log query that's automatically run at regular intervals. The results are inspected to determine if an alert should be created.
+- [Log search alert rules](../alerts/alerts-overview.md): Proactively identify issues from data in your workspace. Each alert rule is based on a log query that's automatically run at regular intervals. The results are inspected to determine if an alert should be created.
- [Workbooks](../visualize/workbooks-overview.md): Include the results of log queries by using different visualizations in interactive visual reports in the Azure portal. - [Azure dashboards](../visualize/tutorial-logs-dashboards.md): Pin the results of any query into an Azure dashboard, which allows you to visualize log and metric data together and optionally share with other Azure users. - [Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md): Use the results of a log query in an automated workflow by using a logic app workflow.
azure-monitor Monitor Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/monitor-workspace.md
The list shows the resource IDs where the agent has the wrong configuration. To
## Alert rules
-Use [log query alerts](../alerts/alerts-log-query.md) in Azure Monitor to be proactively notified when an issue is detected in your Log Analytics workspace. Use a strategy that allows you to respond in a timely manner to issues while minimizing your costs. Your subscription will be charged for each alert rule as listed in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs).
+Use [log search alerts](../alerts/alerts-log-query.md) in Azure Monitor to be proactively notified when an issue is detected in your Log Analytics workspace. Use a strategy that allows you to respond in a timely manner to issues while minimizing your costs. Your subscription will be charged for each alert rule as listed in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#platform-logs).
A recommended strategy is to start with two alert rules based on the level of the issue. Use a short frequency such as every 5 minutes for Errors and a longer frequency such as 24 hours for Warnings. Because Errors indicate potential data loss, you want to respond to them quickly to minimize any loss. Warnings typically indicate an issue that doesn't require immediate attention, so you can review them daily.
-Use the process in [Create, view, and manage log alerts by using Azure Monitor](../alerts/alerts-log.md) to create the log alert rules. The following sections describe the details for each rule.
+Use the process in [Create, view, and manage log search alerts by using Azure Monitor](../alerts/alerts-log.md) to create the log search alert rules. The following sections describe the details for each rule.
| Query | Threshold value | Period | Frequency | |:|:|:|:|
The following example creates a Warning alert when the data collection has reach
## Next steps -- Learn more about [log alerts](../alerts/alerts-log.md).
+- Learn more about [log search alerts](../alerts/alerts-log.md).
- [Collect query audit data](./query-audit.md) for your workspace.
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
To configure your Azure Storage account to use CMKs with Key Vault, use the [Azu
> - When linking Storage Account for query, existing saved queries in workspace are deleted permanently for privacy. You can copy existing saved queries before storage link using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch). > - Queries saved in [query pack](./query-packs.md) aren't encrypted with Customer-managed key. Select **Save as Legacy query** when saving queries instead, to protect them with Customer-managed key. > - Saved queries are stored in table storage and encrypted with Customer-managed key when encryption is configured at Storage Account creation.
-> - Log alerts are saved in blob storage where configuration of Customer-managed key encryption can be at Storage Account creation, or later.
+> - Log search alerts are saved in blob storage where configuration of Customer-managed key encryption can be at Storage Account creation, or later.
> - You can use a single Storage Account for all purposes, query, alert, custom log and IIS logs. Linking storage for custom log and IIS logs might require more Storage Accounts for scale, depending on the ingestion rate and storage limits. You can link up to five Storage Accounts to a workspace. ## Link storage accounts to your Log Analytics workspace
azure-monitor Query Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-audit.md
An audit record is created each time a query is run. If you send the data to a L
|AzureAutomation|[Azure Automation.](../../automation/overview.md)| |AzureMonitorLogsConnector|[Azure Monitor Logs Connector](../../connectors/connectors-azure-monitor-logs.md).| |csharpsdk|[Log Analytics Query API.](../logs/api/overview.md)|
-|Draft-Monitor|[Log alert creation in the Azure portal.](../alerts/alerts-create-new-alert-rule.md?tabs=log)|
+|Draft-Monitor|[Log search alert creation in the Azure portal.](../alerts/alerts-create-new-alert-rule.md?tabs=log)|
|Grafana|[Grafana connector.](../visualize/grafana-plugin.md)| |IbizaExtension|Experiences of Log Analytics in the Azure portal.| |infraInsights/container|[Container insights.](../containers/container-insights-overview.md)|
azure-monitor Monitor Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-azure-monitor.md
These are now listed in the [Log Analytics user interface](./logs/queries.md).
## Alerts
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](./alerts/alerts-metric-overview.md), [logs](./alerts/alerts-unified-log.md), and the [activity log](./alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](./alerts/alerts-metric-overview.md), [logs](./alerts/alerts-types.md#log-alerts), and the [activity log](./alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
For an in-depth discussion of using alerts with autoscale, see [Troubleshoot Azure autoscale](./autoscale/autoscale-troubleshoot.md).
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
An effective monitoring solution proactively responds to critical events, withou
**[Azure Monitor Alerts](alerts/alerts-overview.md)** notify you of critical conditions and can take corrective action. Alert rules can be based on metric or log data. - Metric alert rules provide near-real-time alerts based on collected metrics. -- Log alerts rules based on logs allow for complex logic across data from multiple sources.
+- Log search alert rules based on logs allow for complex logic across data from multiple sources.
Alert rules use [action groups](alerts/action-groups.md), which can perform actions such as sending email or SMS notifications. Action groups can send notifications using webhooks to trigger external processes or to integrate with your IT service management tools. Action groups, actions, and sets of recipients can be shared across multiple rules.
azure-monitor Resource Manager Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-manager-samples.md
In the request body, provide a link to your template and parameter file.
- [Agents](agents/resource-manager-agent.md): Deploy and configure the Log Analytics agent and a diagnostic extension. - Alerts:
- - [Log alert rules](alerts/resource-manager-alerts-log.md): Configure alerts from log queries and Azure Activity Log.
+ - [Log search alert rules](alerts/resource-manager-alerts-log.md): Configure alerts from log queries and Azure Activity Log.
- [Metric alert rules](alerts/resource-manager-alerts-metric.md): Configure alerts from metrics that use different kinds of logic. - [Application Insights](app/resource-manager-app-resource.md) - [Diagnostic settings](essentials/resource-manager-diagnostic-settings.md): Create diagnostic settings to forward logs and metrics from different resource types.
azure-monitor Roles Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/roles-permissions-security.md
If the preceding built-in roles don't meet the exact needs of your team, you can
| Microsoft.Insights/MetricDefinitions/Read |Read metric definitions (list of available metric types for a resource). | | Microsoft.Insights/Metrics/Read |Read metrics for a resource. | | Microsoft.Insights/Register/Action |Register the Azure Monitor resource provider. |
-| Microsoft.Insights/ScheduledQueryRules/[Read, Write, Delete] |Read, write, or delete log alerts in Azure Monitor. |
+| Microsoft.Insights/ScheduledQueryRules/[Read, Write, Delete] |Read, write, or delete log search alerts in Azure Monitor. |
> [!NOTE] > Access to alerts, diagnostic settings, and metrics for a resource requires that the user has read access to the resource type and scope of that resource. Creating a diagnostic setting that sends data to a storage account or streams to event hubs requires the user to also have ListKeys permission on the target resource.
azure-monitor Monitor Virtual Machine Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-agent.md
Previously updated : 01/05/2023 Last updated : 02/15/2024
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
Previously updated : 01/11/2023 Last updated : 02/15/2024
Azure Monitor provides a set of [recommended alert rules](tutorial-monitor-vm-al
## Alert types
-The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log query alerts](../alerts/alerts-log-query.md). The type of alert rule that you create for a particular scenario depends on where the data that you're alerting on is located.
+The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log search alerts](../alerts/alerts-log-query.md). The type of alert rule that you create for a particular scenario depends on where the data that you're alerting on is located.
You might have cases where data for a particular alerting scenario is available in both Metrics and Logs. If so, you need to determine which rule type to use. You might also have flexibility in how you collect certain data and let your decision of alert rule type drive your decision for data collection method.
Data sources for metric alerts:
- Host metrics for Azure virtual machines, which are collected automatically - Metrics collected by Azure Monitor Agent from the guest operating system
-### Log alerts
-Common uses for log alerts:
+### Log search alerts
+Common uses for log search alerts:
- Alert when a particular event or pattern of events from Windows event log or Syslog are found. These alert rules typically measure table rows returned from the query. - Alert based on a calculation of numeric data across multiple machines. These alert rules typically measure the calculation of a numeric column in the query results.
-Data sources for log alerts:
+Data sources for log search alerts:
- All data collected in a Log Analytics workspace ## Scaling alert rules
As you identify requirements for more metric alert rules, follow this same strat
- Minimize the number of alert rules you need to manage. - Ensure that they're automatically applied to any new machines.
-### Log alert rules
+### Log search alert rules
-If you set the target resource of a log alert rule to a specific machine, queries are limited to data associated with that machine, which gives you individual alerts for it. This arrangement requires a separate alert rule for each machine.
+If you set the target resource of a log search alert rule to a specific machine, queries are limited to data associated with that machine, which gives you individual alerts for it. This arrangement requires a separate alert rule for each machine.
-If you set the target resource of a log alert rule to a Log Analytics workspace, you have access to all data in that workspace. For this reason, you can alert on data from all machines in the workgroup with a single rule. This arrangement gives you the option of creating a single alert for all machines. You can then use dimensions to create a separate alert for each machine.
+If you set the target resource of a log search alert rule to a Log Analytics workspace, you have access to all data in that workspace. For this reason, you can alert on data from all machines in the workgroup with a single rule. This arrangement gives you the option of creating a single alert for all machines. You can then use dimensions to create a separate alert for each machine.
For example, you might want to alert when an error event is created in the Windows event log by any machine. You first need to create a data collection rule as described in [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) to send these events to the `Event` table in the Log Analytics workspace. Then you create an alert rule that queries this table by using the workspace as the target resource and the condition shown in the following image. The query returns a record for any error messages on any machine. Use the **Split by dimensions** option and specify **_ResourceId** to instruct the rule to create an alert for each machine if multiple machines are returned in the results.

#### Dimensions

Depending on the information you want to include in the alert, you might need to split by using different dimensions. In this case, make sure the necessary dimensions are projected in the query by using the [project](/azure/data-explorer/kusto/query/projectoperator) or [extend](/azure/data-explorer/kusto/query/extendoperator) operator. Set the **Resource ID column** field to **Don't split** and include all the meaningful dimensions in the list. Make sure **Include all future values** is selected so that any value returned from the query is included.

#### Dynamic thresholds
-Another benefit of using log alert rules is the ability to include complex logic in the query for determining the threshold value. You can hardcode the threshold, apply it to all resources, or calculate it dynamically based on some field or calculated value. The threshold is applied to resources only according to specific conditions. For example, you might create an alert based on available memory but only for machines with a particular amount of total memory.
+Another benefit of using log search alert rules is the ability to include complex logic in the query for determining the threshold value. You can hardcode the threshold, apply it to all resources, or calculate it dynamically based on some field or calculated value. The threshold is applied to resources only according to specific conditions. For example, you might create an alert based on available memory but only for machines with a particular amount of total memory.
## Common alert rules
-The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log alerts are provided for each. For guidance on which type of alert to use, see [Alert types](#alert-types). If you're unfamiliar with the process for creating alert rules in Azure Monitor, see the [instructions to create a new alert rule](../alerts/alerts-create-new-alert-rule.md).
+The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log search alerts are provided for each. For guidance on which type of alert to use, see [Alert types](#alert-types). If you're unfamiliar with the process for creating alert rules in Azure Monitor, see the [instructions to create a new alert rule](../alerts/alerts-create-new-alert-rule.md).
> [!NOTE]
-> The details for log alerts provided here are using data collected by using [VM Insights](vminsights-overview.md), which provides a set of common performance counters for the client operating system. This name is independent of the operating system type.
+> The details for log search alerts provided here are using data collected by using [VM Insights](vminsights-overview.md), which provides a set of common performance counters for the client operating system. This name is independent of the operating system type.
### Machine unavailable One of the most common monitoring requirements for a virtual machine is to create an alert if it stops running. The best method is to create a metric alert rule in Azure Monitor by using the VM availability metric, which is currently in public preview. For a walk-through on this metric, see [Create availability alert rule for Azure virtual machine](tutorial-monitor-vm-alert-availability.md).
The agent heartbeat is slightly different than the machine unavailable alert bec
A metric called **Heartbeat** is included in each Log Analytics workspace. Each virtual machine connected to that workspace sends a heartbeat metric value each minute. Because the computer is a dimension on the metric, you can fire an alert when any computer fails to send a heartbeat. Set the **Aggregation type** to **Count** and the **Threshold** value to match the **Evaluation granularity**.
-#### Log alert rules
+#### Log search alert rules
-Log query alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat), which should have a heartbeat record every minute from each machine.
+Log search alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat), which should have a heartbeat record every minute from each machine.
Use a rule with the following query:
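A minimal sketch of such a query is shown below (the exact query in the article may differ); it computes, for each machine, the minutes since its last heartbeat so that the alert rule can fire when the value exceeds your chosen threshold.

```kusto
// Sketch: minutes since the last heartbeat from each machine
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer, _ResourceId
| extend MinutesSinceLastHeartbeat = datetime_diff('minute', now(), LastHeartbeat)
```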
This section describes CPU alerts.
| Windows guest | \Processor Information(_Total)\% Processor Time | | Linux guest | cpu/usage_active |
-#### Log alert rules
+#### Log search alert rules
**CPU utilization**
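The following is a sketch of a CPU utilization log search query over the VM insights `InsightsMetrics` table; the `Processor`/`UtilizationPercentage` namespace and counter names are assumptions based on what VM insights normally emits, so verify them against your workspace before creating the rule.

```kusto
// Average processor utilization per machine in 15-minute bins
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
```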
This section describes memory alerts.
| Windows guest | \Memory\% Committed Bytes in Use<br>\Memory\Available Bytes | | Linux guest | mem/available<br>mem/available_percent |
-#### Log alert rules
+#### Log search alert rules
**Available memory in MB**
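A similar sketch for available memory, again assuming the `Memory`/`AvailableMB` names that VM insights normally emits:

```kusto
// Average available memory (MB) per machine in 15-minute bins
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Memory" and Name == "AvailableMB"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
```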
This section describes disk alerts.
| Windows guest | \Logical Disk\(_Total)\% Free Space<br>\Logical Disk\(_Total)\Free Megabytes | | Linux guest | disk/free<br>disk/free_percent |
-#### Log query alert rules
+#### Log search alert rules
**Logical disk used - all disks on each computer**
InsightsMetrics
| Windows guest | \Network Interface\Bytes Sent/sec<br>\Logical Disk\(_Total)\Free Megabytes | | Linux guest | disk/free<br>disk/free_percent |
-#### Log query alert rules
+#### Log search alert rules
**Network interfaces bytes received - all interfaces**
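A sketch of a network throughput query over `InsightsMetrics`; the `Network`/`ReadBytesPerSecond` names and the `vm.azm.ms/networkDeviceId` tag used to identify the interface are assumptions to verify against your workspace.

```kusto
// Average bytes received per second, per network interface on each machine
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Network" and Name == "ReadBytesPerSecond"
| extend NetworkInterface = tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
```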
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
Previously updated : 01/10/2023 Last updated : 02/15/2024
azure-monitor Monitor Virtual Machine Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-data-collection.md
Previously updated : 01/05/2023 Last updated : 02/15/2024 # Monitor virtual machines with Azure Monitor: Collect data This article is part of the guide [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to configure collection of data after you deploy Azure Monitor Agent to your Azure and hybrid virtual machines in Azure Monitor.
-This article provides guidance on collecting the most common types of telemetry from virtual machines. The exact configuration that you choose depends on the workloads that you run on your machines. Included in each section are sample log query alerts that you can use with that data.
+This article provides guidance on collecting the most common types of telemetry from virtual machines. The exact configuration that you choose depends on the workloads that you run on your machines. Each section includes sample log search alerts that you can use with that data.
- For more information about analyzing telemetry collected from your virtual machines, see [Monitor virtual machines with Azure Monitor: Analyze monitoring data](monitor-virtual-machine-analyze.md). - For more information about using telemetry collected from your virtual machines to create alerts in Azure Monitor, see [Monitor virtual machines with Azure Monitor: Alerts](monitor-virtual-machine-alerts.md).
For guidance on creating a DCR to collect performance counters, see [Collect eve
Destination | Description | |:|:| | Metrics | Host metrics are automatically sent to Azure Monitor Metrics. You can use a DCR to collect client metrics so that they can be analyzed together with [metrics explorer](../essentials/metrics-getting-started.md) or used with [metrics alerts](../alerts/alerts-create-new-alert-rule.md?tabs=metric). This data is stored for 93 days. |
-| Logs | Performance data stored in Azure Monitor Logs can be stored for extended periods. The data can be analyzed along with your event data by using [log queries](../logs/log-query-overview.md) with [Log Analytics](../logs/log-analytics-overview.md) or [log query alerts](../alerts/alerts-create-new-alert-rule.md?tabs=log). You can also correlate data by using complex logic across multiple machines, regions, and subscriptions.<br><br>Performance data is sent to the following tables:<br>- VM insights: [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics)<br>- Other performance data: [Perf](/azure/azure-monitor/reference/tables/perf) |
+| Logs | Performance data stored in Azure Monitor Logs can be stored for extended periods. The data can be analyzed along with your event data by using [log queries](../logs/log-query-overview.md) with [Log Analytics](../logs/log-analytics-overview.md) or [log search alerts](../alerts/alerts-create-new-alert-rule.md?tabs=log). You can also correlate data by using complex logic across multiple machines, regions, and subscriptions.<br><br>Performance data is sent to the following tables:<br>- VM insights: [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics)<br>- Other performance data: [Perf](/azure/azure-monitor/reference/tables/perf) |
### Sample log queries The following samples use the `Perf` table with custom performance data. For information on performance data collected by VM insights, see [How to query logs from VM insights](../vm/vminsights-log-query.md#performance-records).
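For example, a minimal query of this kind might look like the following sketch; the specific object and counter names are assumptions, so substitute whichever counters your data collection rule actually sends to the `Perf` table.

```kusto
// Average of a collected performance counter per computer in 15-minute bins
Perf
| where ObjectName == "LogicalDisk" and CounterName == "% Free Space"
| summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 15m), Computer, _ResourceId
```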
Azure Monitor has no ability on its own to monitor the status of a service or da
For different options to enable the Change Tracking solution on your virtual machines, see [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory). This solution includes methods to configure virtual machines at scale. You have to [create an Azure Automation account](../../automation/quickstarts/create-azure-automation-account-portal.md) to support the solution.
-When you enable Change Tracking and Inventory, two new tables are created in your Log Analytics workspace. Use these tables for logs queries and log query alert rules.
+When you enable Change Tracking and Inventory, two new tables are created in your Log Analytics workspace. Use these tables for log queries and log search alert rules.
| Table | Description | |:|:|
When you enable Change Tracking and Inventory, two new tables are created in you
| sort by Computer, SvcName ``` -- **Alert when a specific service stops.** Use this query in a log alert rule.
+- **Alert when a specific service stops.** Use this query in a log search alert rule.
```kusto ConfigurationData
When you enable Change Tracking and Inventory, two new tables are created in you
| summarize AggregatedValue = count() by Computer, SvcName, SvcDisplayName, SvcState, bin(TimeGenerated, 15m) ``` -- **Alert when one of a set of services stops.** Use this query in a log alert rule.
+- **Alert when one of a set of services stops.** Use this query in a log search alert rule.
```kusto let services = dynamic(["omskd","cshost","schedule","wuauserv","heathservice","efs","wsusservice","SrmSvc","CertSvc","wmsvc","vpxd","winmgmt","netman","smsexec","w3svc","sms_site_vss_writer","ccmexe","spooler","eventsystem","netlogon","kdc","ntds","lsmserv","gpsvc","dns","dfsr","dfs","dhcp","DNSCache","dmserver","messenger","w32time","plugplay","rpcss","lanmanserver","lmhosts","eventlog","lanmanworkstation","wnirm","mpssvc","dhcpserver","VSS","ClusSvc","MSExchangeTransport","MSExchangeIS"]);
When you enable Change Tracking and Inventory, two new tables are created in you
Port monitoring verifies that a machine is listening on a particular port. Two potential strategies for port monitoring are described here. ### Dependency agent tables
-If you're using VM insights with **Processes and dependencies collection** enabled, you can use [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) to analyze connections and ports on the machine. The `VMBoundPort` table is updated every minute with each process running on the computer and the port it's listening on. You can create a log query alert similar to the missing heartbeat alert to find processes that have stopped or to alert when the machine isn't listening on a particular port.
+If you're using VM insights with **Processes and dependencies collection** enabled, you can use [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) to analyze connections and ports on the machine. The `VMBoundPort` table is updated every minute with each process running on the computer and the port it's listening on. You can create a log search alert similar to the missing heartbeat alert to find processes that have stopped or to alert when the machine isn't listening on a particular port.
- **Review the count of ports open on your VMs to assess which VMs have configuration and security vulnerabilities.**
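A sketch of such a review query over the `VMBoundPort` table (column names assumed from the VM insights schema):

```kusto
// Count of distinct ports each machine is listening on, excluding loopback
VMBoundPort
| where Ip != "127.0.0.1"
| summarize by Computer, Machine, Port, Protocol
| summarize OpenPorts = count() by Computer, Machine
| order by OpenPorts desc
```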
There's an extra cost for Connection Manager. For more information, see [Network
## Run a process on a local machine Monitoring of some workloads requires a local process. An example is a PowerShell script that runs on the local machine to connect to an application and collect or process data. You can use [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md), which is part of [Azure Automation](../../automation/automation-intro.md), to run a local PowerShell script. There's no direct charge for Hybrid Runbook Worker, but there's a cost for each runbook that it uses.
-The runbook can access any resources on the local machine to gather required data. It can't send data directly to Azure Monitor or create an alert. To create an alert, have the runbook write an entry to a custom log. Then configure that log to be collected by Azure Monitor. Create a log query alert rule that fires on that log entry.
+The runbook can access any resources on the local machine to gather required data. It can't send data directly to Azure Monitor or create an alert. To create an alert, have the runbook write an entry to a custom log. Then configure that log to be collected by Azure Monitor. Create a log search alert rule that fires on that log entry.
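As a hypothetical sketch, if the runbook wrote its results to a custom table named `AppCheckResults_CL`, the log search alert rule could use a query like the following; both the table name and the `RawData` filter are illustrative assumptions, not names from the article.

```kusto
// Sketch: fire when the runbook has logged a failure in the last 15 minutes
AppCheckResults_CL
| where TimeGenerated > ago(15m)
| where RawData has "FAILURE"
| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
```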
## Next steps
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md
Previously updated : 01/05/2023 Last updated : 02/15/2024
azure-monitor Tutorial Monitor Vm Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-guest.md
In the empty query window, enter either **Event** or **Syslog** depending on whe
:::image type="content" source="media/tutorial-monitor-vm/log-analytics-query.png" lightbox="media/tutorial-monitor-vm/log-analytics-query.png" alt-text="Screenshot that shows Log Analytics with query results.":::
-For a tutorial on using Log Analytics to analyze log data, see [Log Analytics tutorial](../logs/log-analytics-tutorial.md). For a tutorial on creating alert rules from log data, see [Tutorial: Create a log query alert for an Azure resource](../alerts/tutorial-log-alert.md).
+For a tutorial on using Log Analytics to analyze log data, see [Log Analytics tutorial](../logs/log-analytics-tutorial.md). For a tutorial on creating alert rules from log data, see [Tutorial: Create a log search alert for an Azure resource](../alerts/tutorial-log-alert.md).
## View guest metrics You can view metrics for your host virtual machine with metrics explorer without a DCR like [any other Azure resource](../essentials/tutorial-metrics.md). With the DCR, you can use metrics explorer to view guest metrics and host metrics.
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
Last updated 09/28/2023
# Dependency Agent
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ The Dependency Agent collects data about processes running on the virtual machine and external process dependencies. Dependency Agent updates include bug fixes or support of new features or functionality. This article describes Dependency Agent requirements and how to upgrade Dependency Agent manually or through automation. >[!NOTE]
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
Output for this command will look similar to the following and specify whether a
When you enable VM Insights for a machine, the following agents are installed. For the network requirements for these agents, see [Network requirements](../agents/log-analytics-agent.md#network-requirements). > [!IMPORTANT]
-> Azure Monitor Agent has several advantages over the legacy Log Analytics agent, which will be deprecated by August 2024. After this date, Microsoft will no longer provide any support for the Log Analytics agent. [Migrate to Azure Monitor agent](../agents/azure-monitor-agent-migration.md) before August 2024 to continue ingesting data.
+> Azure Monitor Agent has several advantages over the legacy Log Analytics agent, which will be deprecated by August 2024. After this date, Microsoft will no longer provide any support for the Log Analytics agent. [Migrate to Azure Monitor agent](../agents/azure-monitor-agent-migration.md) before August 2024 to continue ingesting data.
- **[Azure Monitor agent](../agents/azure-monitor-agent-overview.md) or [Log Analytics agent](../agents/log-analytics-agent.md):** Collects data from the virtual machine or Virtual Machine Scale Set and delivers it to the Log Analytics workspace.
When you enable VM Insights for a machine, the following agents are installed. F
(If using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)) - The Dependency agent requires a connection from the virtual machine to the address 169.254.169.254. This address identifies the Azure metadata service endpoint. Ensure that firewall settings allow connections to this endpoint.
-## Data collection rule
+## Data collection rule
When you enable VM Insights on a machine with the Azure Monitor agent, you must specify a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) to use. The DCR specifies the data to collect and the workspace to use. VM Insights creates a default DCR if one doesn't already exist. For more information on how to create and edit the VM Insights DCR, see [Enable VM Insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent).
The DCR is defined by the options in the following table.
> [!IMPORTANT] > VM Insights automatically creates a DCR that includes a special data stream required for its operation. Do not modify the VM Insights DCR or create your own DCR to support VM Insights. To collect additional data, such as Windows and Syslog events, create separate DCRs and associate them with your machines.
-If you associate a data collection rule with the Map feature enabled to a machine on which Dependency Agent isn't installed, the Map view won't be available. To enable the Map view, set `enableAMA property = true` in the Dependency Agent extension when you install Dependency Agent. We recommend following the procedure described in [Enable VM Insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent).
+If you associate a data collection rule with the Map feature enabled to a machine on which Dependency Agent isn't installed, the Map view won't be available. To enable the Map view, set `enableAMA property = true` in the Dependency Agent extension when you install Dependency Agent. We recommend following the procedure described in [Enable VM Insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent).
## Migrate from Log Analytics agent to Azure Monitor Agent - You can install both Azure Monitor Agent and Log Analytics agent on the same machine during migration. If a machine has both agents installed, you'll see a warning in the Azure portal that you might be collecting duplicate data.
-
+ :::image type="content" source="media/vminsights-enable-portal/both-agents-installed.png" lightbox="media/vminsights-enable-portal/both-agents-installed.png" alt-text="Screenshot that shows both agents installed."::: > [!WARNING]
If you associate a data collection rule with the Map feature enabled to a machin
> | summarize max(TimeGenerated) by Computer, Category > | sort by Computer > ```
-
+ ## Diagnostic and usage data Microsoft automatically collects usage and performance data through your use of Azure Monitor. Microsoft uses this data to improve the quality, security, and integrity of the service.
azure-monitor Vminsights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md
Last updated 09/28/2023
# How to query logs from VM insights
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ VM insights collects performance and connection metrics, computer and process inventory data, and health state information and forwards it to the Log Analytics workspace in Azure Monitor. This data is available for [query](../logs/log-query-overview.md) in Azure Monitor. You can apply this data to scenarios that include migration planning, capacity analysis, discovery, and on-demand performance troubleshooting. ## Map records
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
Last updated 09/28/2023
# Chart performance with VM insights
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ VM insights includes a set of performance charts that target several key [performance indicators](vminsights-log-query.md#performance-records) to help you determine how well a virtual machine is performing. The charts show resource utilization over a period of time. You can use them to identify bottlenecks and anomalies. You can also switch to a perspective that lists each machine to view resource utilization based on the metric selected. VM insights monitors key operating system performance indicators related to processor, memory, network adapter, and disk utilization. Performance complements the health monitoring feature and helps to:
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Agents|[MMA Discovery and Removal Utility](agents/azure-monitor-agent-mma-remova
Agents|[Send data to Event Hubs and Storage (Preview)](agents/azure-monitor-agent-send-data-to-event-hubs-and-storage.md)|Update azure-monitor-agent-send-data-to-event-hubs-and-storage.md| Alerts|[Resource Manager template samples for metric alert rules in Azure Monitor](alerts/resource-manager-alerts-metric.md)|We've added a clarification about the parameters used when creating metric alert rules programmatically.| Alerts|[Manage your alert instances](alerts/alerts-manage-alert-instances.md)|We've added documentation about the new alerts timeline view.|
-Alerts|[Create or edit a log alert rule](alerts/alerts-create-log-alert-rule.md)|Added limitations to log search alert queries.|
-Alerts|[Create or edit a log alert rule](alerts/alerts-create-log-alert-rule.md)|We've added samples of log search alert rule queries that use Azure Data Explorer and Azure Resource Graph.|
+Alerts|[Create or edit a log search alert rule](alerts/alerts-create-log-alert-rule.md)|Added limitations to log search alert queries.|
+Alerts|[Create or edit a log search alert rule](alerts/alerts-create-log-alert-rule.md)|We've added samples of log search alert rule queries that use Azure Data Explorer and Azure Resource Graph.|
Application-Insights|[Data Collection Basics of Azure Monitor Application Insights](app/opentelemetry-overview.md)|We've provided information on how to get a list of Application Insights SDK versions and their names.| Application-Insights|[Application Insights logging with .NET](app/ilogger.md)|We've clarified steps to view ILogger telemetry.| Application-Insights|[Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)|The script to discover classic resources has been updated.|
Logs|[Query data in Azure Data Explorer and Azure Resource Graph from Azure Moni
|||| Agents|[Azure Monitor Agent Health (Preview)](agents/azure-monitor-agent-health.md)|Introduced a new Azure Monitor Agent Health workbook, which monitors the health of agents deployed across your organization. | Alerts|[Manage your alert instances](alerts/alerts-manage-alert-instances.md)|View alerts as a timeline (preview)|
-Alerts|[Upgrade to the Log Alerts API from the legacy Log Analytics alerts API](alerts/alerts-log-api-switch.md)|Changes to the log alert rule creation experience|
+Alerts|[Upgrade to the Scheduled Query Rules API from the legacy Log Analytics alerts API](alerts/alerts-log-api-switch.md)|Changes to the log alert rule creation experience|
Application-Insights|[Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)|We now support migrating classic components to workspace-based components via PowerShell cmdlet. | Application-Insights|[EventCounters introduction](app/eventcounters.md)|Code samples have been provided for the latest .NET versions.| Application-Insights|[Enable a framework extension for Application Insights JavaScript SDK](app/javascript-framework-extensions.md)|We've added a section for the React Native Manual Device Plugin, and clarified exception tracking and device info collection.|
Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to
|[Convert ITSM actions that send events to ServiceNow to Secure Webhook actions](./alerts/itsm-convert-servicenow-to-webhook.md)|As of September 2022, we're starting the three-year process of deprecating support of using ITSM actions to send events to ServiceNow. Learn how to convert ITSM actions that send events to ServiceNow to Secure Webhook actions.| |[Create a new alert rule](./alerts/alerts-create-new-alert-rule.md)|Added description of all available monitoring services to **Create a new alert rule** and **Alert processing rules** pages. <br><br>Added support for regional processing for metric alert rules that monitor a custom metric with the scope defined as one of the supported regions. <br><br> Clarified that selecting the **Automatically resolve alerts** setting makes log alerts stateful.| |[Types of Azure Monitor alerts](alerts/alerts-types.md)|Azure Database for PostgreSQL - Flexible Servers is supported for monitoring multiple resources.|
-|[Upgrade legacy rules management to the current Log Alerts API from legacy Log Analytics Alert API](./alerts/alerts-log-api-switch.md)|The process of moving legacy log alert rules management from the legacy API to the current API is now supported by the government cloud.|
+|[Upgrade legacy rules management to the current Scheduled Query Rules API from legacy Log Analytics Alert API](./alerts/alerts-log-api-switch.md)|The process of moving legacy log alert rules management from the legacy API to the current API is now supported by the government cloud.|
### Application Insights
Azure Monitor Workbooks documentation previously resided on an external GitHub r
|:|:| | [Configure Azure to connect ITSM tools by using Secure Webhook](alerts/itsm-connector-secure-webhook-connections-azure-configuration.md) | Added the workflow for ITSM management and removed all references to System Center Service Manager. | | [Overview of Azure Monitor Alerts](alerts/alerts-overview.md) | Complete rewrite. |
-| [Resource Manager template samples for log query alerts](alerts/resource-manager-alerts-log.md) | Added Bicep samples for alerting to the Resource Manager template samples articles. |
+| [Resource Manager template samples for log search alerts](alerts/resource-manager-alerts-log.md) | Added Bicep samples for alerting to the Resource Manager template samples articles. |
| [Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md) | Added a newly supported resource type. | ### Application Insights
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
The following table describes resource limits for Azure NetApp Files:
| Minimum size of a single regular volume | 100 GiB | No | | Maximum size of a single regular volume | 100 TiB | No | | Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 102,401 GiB | No |
+| Large volume size increase | 30% of lowest provisioned size | Yes |
| Maximum size of a single large volume | 500 TiB | No | | Maximum size of a single file | 16 TiB | No | | Maximum size of directory metadata in a single directory | 320 MB | No |
azure-netapp-files Azure Netapp Files Service Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-service-levels.md
Service levels are an attribute of a capacity pool. Service levels are defined a
Azure NetApp Files supports three service levels: *Ultra*, *Premium*, and *Standard*.
-* <a name="Ultra"></a>Ultra storage:
- The Ultra service level provides up to 128 MiB/s of throughput per 1 TiB of capacity provisioned.
-
-* <a name="Premium"></a>Premium storage:
- The Premium service level provides up to 64 MiB/s of throughput per 1 TiB of capacity provisioned.
- * <a name="Standard"></a>Standard storage: The Standard service level provides up to 16 MiB/s of throughput per 1 TiB of capacity provisioned. * Standard storage with cool access: The throughput experience for this service level is the same as the Standard service level for data that is in the hot tier. But it may differ when data that resides in the cool tier is accessed. For more information, see [Standard storage with cool access in Azure NetApp Files](cool-access-introduction.md#effects-of-cool-access-on-data).
+* <a name="Premium"></a>Premium storage:
+ The Premium service level provides up to 64 MiB/s of throughput per 1 TiB of capacity provisioned.
+
+* <a name="Ultra"></a>Ultra storage:
+ The Ultra service level provides up to 128 MiB/s of throughput per 1 TiB of capacity provisioned.
+ ## Throughput limits The throughput limit for a volume is determined by the combination of the following factors:
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
The following diagram demonstrates how customer-managed keys work with Azure Net
* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption. * To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volume configured using Basic network features. Follow instructions in to [Set the Network Features option](configure-network-features.md#set-the-network-features-option) in the volume creation page. * For increased security, you can select the **Disable public access** option within the network settings of your key vault. When selecting this option, you must also select **Allow trusted Microsoft services to bypass this firewall** to permit the Azure NetApp Files service to access your encryption key.
-* Automatic Managed System Identity (MSI) certificate renewal isn't currently supported. It's recommended you create an Azure monitor alert to notify you when the MSI certificate is set to expire.
-* The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer be valid and the customer-managed key volumes under the NetApp account will go offline.**
- * To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message communicates the date of eligibility.
- * Version 2.42 or later of the Azure CLI supports running the `renewCredentials` operation with the [az netappfiles account command](/cli/azure/netappfiles/account#az-netappfiles-account-renew-credentials). For example:
-
- `az netappfiles account renew-credentials ΓÇô-account-name myaccount -ΓÇôresource-group myresourcegroup`
-
- * If the account isn't eligible for MSI certificate renewal, an error message communicates the date and time when the account is eligible. It's recommended you run this operation periodically (for example, daily) to prevent the certificate from expiring and from the customer-managed key volume going offline.
+* Customer-managed keys support automatic Managed System Identity (MSI) certificate renewal. If your certificate is valid, you don't need to manually update it.
* Applying Azure network security groups on the private link subnet to Azure Key Vault isn't supported for Azure NetApp Files customer-managed keys. Network security groups don't affect connectivity to Private Link unless `Private endpoint network policy` is enabled on the subnet. It's recommended to keep this option disabled. * If Azure NetApp Files fails to create a customer-managed key volume, error messages are displayed. Refer to the [Error messages and troubleshooting](#error-messages-and-troubleshooting) section for more information. * If Azure Key Vault becomes inaccessible, Azure NetApp Files loses its access to the encryption keys and the ability to read or write data to volumes enabled with customer-managed keys. In this situation, create a support ticket to have access manually restored for the affected volumes.
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Standard storage with cool access is supported for the following regions:
* East US 2 * France Central * Germany West Central
+* Japan East
* North Central US * North Europe * Southeast Asia
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## February 2024
+
+* [Large volumes (Preview) improvement:](large-volumes-requirements-considerations.md#requirements-and-considerations) volume size increase beyond 30% default limit
+
+ For capacity and resource planning purposes, the Azure NetApp Files large volume feature has a [volume size increase limit of up to 30% of the lowest provisioned size](large-volumes-requirements-considerations.md#requirements-and-considerations). This limit is now adjustable beyond the default 30% via a support ticket. For more information, see [Resource limits](azure-netapp-files-resource-limits.md).
+
+ ## January 2024 * [Standard network features - Edit volumes available in US Gov regions](azure-netapp-files-network-topologies.md#regions-edit-network-features) (Preview)
azure-resource-manager Bicep Config Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-modules.md
Title: Module setting for Bicep config
description: Describes how to customize configuration values for modules in Bicep deployments. Previously updated : 01/17/2024 Last updated : 02/16/2024 # Add module settings in the Bicep config file
You can override the public module registry alias definition in the bicepconfig.
## Configure profiles and credentials
-To [publish](bicep-cli.md#publish) modules to a private module registry or to [restore](bicep-cli.md#restore) external modules to the local cache, the account must have the correct permissions to access the registry. You can configure the profile and the credential precedence for authenticating to the registry. By default, Bicep uses the `AzureCloud` profile and the credentials from the user authenticated in Azure CLI or Azure PowerShell. You can customize `currentProfile` and `credentialPrecedence` in the config file.
+To [publish](bicep-cli.md#publish) modules to a private module registry or to [restore](bicep-cli.md#restore) external modules to the local cache, the account must have the correct permissions to access the registry. You can manually configure `currentProfile` and `credentialPrecedence` in the [Bicep config file](./bicep-config.md) for authenticating to the registry.
```json {
The available profiles are:
- AzureChinaCloud - AzureUSGovernment
-You can customize these profiles, or add new profiles for your on-premises environments.
+By default, Bicep uses the `AzureCloud` profile and the credentials of the user authenticated in Azure CLI or Azure PowerShell. You can customize these profiles or include new ones for your on-premises environments. If you want to publish or restore a module to a national cloud environment such as `AzureUSGovernment`, you must set `"currentProfile": "AzureUSGovernment"` even if you've selected that cloud profile in the Azure CLI. Bicep is unable to automatically determine the current cloud profile based on Azure CLI settings.
Bicep uses the [Azure.Identity SDK](/dotnet/api/azure.identity) to do authentication. The available credential types are:
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resource providers for storage services are:
| Resource provider namespace | Azure service | | | - | | Microsoft.ClassicStorage | Classic deployment model storage |
-| Microsoft.ElasticSan | [Elastic SAN Preview](../../storage/elastic-san/index.yml) |
+| Microsoft.ElasticSan | [Elastic SAN](../../storage/elastic-san/index.yml) |
| Microsoft.HybridData | [StorSimple](../../storsimple/index.yml) | | Microsoft.ImportExport | [Azure Import/Export](../../import-export/storage-import-export-service.md) | | Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) |
azure-resource-manager Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/linked-templates.md
Make sure there is no leading "?" in QueryString. The deployment adds one when a
## Template specs
-Instead of maintaining your linked templates at an accessible endpoint, you can create a [template spec](template-specs.md) that packages the main template and its linked templates into a single entity you can deploy. The template spec is a resource in your Azure subscription. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec. This feature is currently in preview.
+Instead of maintaining your linked templates at an accessible endpoint, you can create a [template spec](template-specs.md) that packages the main template and its linked templates into a single entity you can deploy. The template spec is a resource in your Azure subscription. It makes it easy to securely share the template with users in your organization. You use Azure role-based access control (Azure RBAC) to grant access to the template spec.
For more information, see:
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
description: This article provides details about the known issues of Azure VMwar
Previously updated : 2/14/2024 Last updated : 2/15/2024 # Known issues: Azure VMware Solution
Refer to the table to find details about resolution dates or possible workaround
| When adding a cluster to my private cloud, the **Cluster-n: vSAN physical disk alarm 'Operation'** and **Cluster-n: vSAN cluster alarm 'vSAN Cluster Configuration Consistency'** alerts are active in the vSphere Client | 2021 | This alert should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 | | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **Capacity - Maximum Capacity Threshold** alarm is raised | 2023 | Alarm raised because there are more than four clusters in the private cloud with the medium form factor for the NSX-T Data Center Unified Appliance. The form factor needs to be scaled up to large. This issue should get detected through Microsoft, however you can also open a support request. | 2023 | | When I build a VMware HCX Service Mesh with the Enterprise license, the Replication Assisted vMotion Migration option isn't available. | 2023 | The default VMware HCX Compute Profile doesn't have the Replication Assisted vMotion Migration option enabled. From the Azure VMware Solution vSphere Client, select the VMware HCX option and edit the default Compute Profile to enable Replication Assisted vMotion Migration. | 2023 |
-| [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | A risk assessment of CVE-2023-03048 was conducted and it was determined that sufficient controls are in place within Azure VMware Solution to reduce the risk of CVE-2023-03048 from a CVSS Base Score of 9.8 to an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 are not exposed via any interactive network path) and multiple levels of authentication and authorization necassary to gain interactive access to the vCenter network segment. Microsoft is working on a plan to roll out security fixes soon to completely remediate the security vulnerability. | October 2023 |
+| [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | A risk assessment of CVE-2023-03048 was conducted and it was determined that sufficient controls are in place within Azure VMware Solution to reduce the risk of CVE-2023-03048 from a CVSS Base Score of 9.8 to an adjusted Environmental Score of [6.8](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H/MAC:L/MPR:H/MUI:R) or lower. Adjustments from the base score were possible due to the network isolation of the Azure VMware Solution vCenter Server (ports 2012, 2014, and 2020 are not exposed via any interactive network path) and multiple levels of authentication and authorization necessary to gain interactive access to the vCenter Server network segment. Microsoft is working on a plan to roll out security fixes soon to completely remediate the security vulnerability. | October 2023 |
| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | Use AV36, AV36P, or AV52 SKUs when RAID-6 FTT2 or RAID-1 FTT3 storage policies are needed. | N/A |
-| VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://kb.vmware.com/s/article/96352) | Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you are using NE appliances in a HA configuration. | N/A |
+| VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://kb.vmware.com/s/article/96352) | Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you are using NE appliances in a HA configuration. | Feb 2024 - Resolved in [VMware HCX 4.8.2](https://docs.vmware.com/en/VMware-HCX/4.8.2/rn/vmware-hcx-482-release-notes/https://docsupdatetracker.net/index.html) |
In this article, you learned about the current known issues with the Azure VMware Solution.
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
All Azure NetApp Files features available on Azure public cloud are also availab
**Azure Arc-enabled VMware vSphere**
-Customers can start their onboarding with Azure Arc-enabled VMware vSphere, install agents at-scale, and enable Azure management, observability, and security solutions, while benefitting from the existing lifecycle management capabilities. Azure Arc-enabled VMware vSphere VMs now show up alongside other Azure Arc-enabled servers under ΓÇÿMachinesΓÇÖ view in the Azure portal. [Learn more](https://aka.ms/vSphereGAblog)
+The term Azure Arc-enabled VMware vSphere refers to both on-premises vSphere and Azure VMware Solution customers. Customers can start their onboarding with Azure Arc-enabled VMware vSphere, install agents at scale, and enable Azure management, observability, and security solutions, while benefiting from the existing lifecycle management capabilities. Azure Arc-enabled VMware vSphere VMs now show up alongside other Azure Arc-enabled servers under the 'Machines' view in the Azure portal. [Learn more](https://aka.ms/vSphereGAblog)
**Five-year Reserved Instance**
azure-vmware Configure Azure Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-azure-elastic-san.md
Title: Use Azure VMware Solution with Azure Elastic SAN Preview
-description: Learn how to use Elastic SAN Preview with Azure VMware Solution
+ Title: Use Azure VMware Solution with Azure Elastic SAN
+description: Learn how to use Elastic SAN with Azure VMware Solution
# Use Azure VMware Solution with Azure Elastic SAN (Integration in Preview)
-This article explains how to use Azure Elastic SAN Preview as backing storage for Azure VMware Solution. [Azure VMware Solution](introduction.md) supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters.
+This article explains how to use Azure Elastic SAN as backing storage for Azure VMware Solution. [Azure VMware Solution](introduction.md) supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters.
-Azure Elastic storage area network (SAN) addresses the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. For more information on Azure Elastic SAN, see [What is Azure Elastic SAN? Preview](../storage/elastic-san/elastic-san-introduction.md).
+Azure Elastic storage area network (SAN) addresses the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. For more information on Azure Elastic SAN, see [What is Azure Elastic SAN?](../storage/elastic-san/elastic-san-introduction.md).
## Prerequisites
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
Title: Deploy Arc-enabled Azure VMware Solution
+ Title: Deploy Arc-enabled VMware vSphere for Azure VMware Solution private cloud
description: Learn how to set up and enable Arc for your Azure VMware Solution private cloud.
Last updated 12/08/2023
-# Deploy Arc-enabled Azure VMware Solution
+# Deploy Arc-enabled VMware vSphere for Azure VMware Solution private cloud
-In this article, learn how to deploy Arc for Azure VMware Solution. Once you set up the components needed, you're ready to execute operations in Azure VMware Solution vCenter Server from the Azure portal. Arc-enabled Azure VMware Solution allows you to do the actions:
+In this article, learn how to deploy Arc-enabled VMware vSphere for Azure VMware Solution private cloud. Once you set up the components needed, you're ready to execute operations in Azure VMware Solution vCenter Server from the Azure portal. Arc-enabled Azure VMware Solution allows you to do the following actions:
- Identify your VMware vSphere resources (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register them with Arc at scale. - Perform different virtual machine (VM) operations directly from Azure, such as create, resize, delete, and power cycle operations (start/stop/restart) on VMware VMs, consistently with Azure.
In this article, learn how to deploy Arc for Azure VMware Solution. Once you set
## Deployment Considerations
-Running software in Azure VMware Solution, as a private cloud in Azure, offers some benefits not realized by operating your environment outside of Azure. For software running in a VM, such as SQL Server and Windows Server, running in Azure VMware Solution provides additional value such as free Extended Security Updates (ESUs).
+When you run software in Azure VMware Solution, as a private cloud in Azure, you get benefits not realized by operating your environment outside of Azure. For software running in a virtual machine (VM), such as SQL Server and Windows Server, running in Azure VMware Solution provides more value, such as free Extended Security Updates (ESUs).
-To take advantage of these benefits if you are running in an Azure VMware Solution it is important to enable Arc through this document to fully integrate the experience with the AVS private cloud. Alternatively, Arc-enabling VMs through the following mechanisms will not create the necessary attributes to register the VM and software as part of Azure VMware Solution and therefore result in billing for SQL Server ESUs for:
+To take advantage of these benefits when you're running in Azure VMware Solution, use this article to enable Arc and fully integrate the experience with the Azure VMware Solution private cloud. Arc-enabling VMs through the following mechanisms instead won't create the necessary attributes to register the VM and its software as part of Azure VMware Solution, and will result in billing for SQL Server ESUs for:
- Arc-enabled servers- - Arc-enabled VMware vSphere- - SQL Server enabled by Azure Arc ## How to manually integrate an Arc-enabled VM into Azure VMware Solutions
There are two ways to refresh the integration between the Arc-enabled VMs and Az
1. In the Azure VMware Solution private cloud, navigate to the vCenter Server inventory and Virtual Machines section within the portal. Locate the virtual machine that requires updating and follow the process to 'Enable in Azure'. If the option is grayed out, you must first **Remove from Azure** and then proceed to **Enable in Azure**
-2. Run the [az connectedvmware vm create ](/cli/azure/connectedvmware/vm#az-connectedvmware-vm-create)Azure CLI command on the VM in Azure VMware Solution to update the machine type. 
+2. Run the [az connectedvmware vm create](/cli/azure/connectedvmware/vm#az-connectedvmware-vm-create) Azure CLI command on the VM in Azure VMware Solution to update the machine type.
```azurecli
You need the following items to ensure you're set up to begin the onboarding pro
- From the Management VM, verify you have access to [vCenter Server and NSX-T manager portals](/azure/azure-vmware/tutorial-access-private-cloud#connect-to-the-vcenter-server-of-your-private-cloud). - A resource group in the subscription where you have an owner or contributor role. - An unused, isolated [NSX Data Center network segment](/azure/azure-vmware/tutorial-nsx-t-network-segment) that is a static network segment used for deploying the Arc for Azure VMware Solution OVA. If an isolated NSX-T Data Center network segment doesn't exist, one gets created.-- Verify your Azure subscription is enabled and has connectivity to Azure end points.-- The firewall and proxy URLs must be allowlisted in order to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. See the [Azure Arc resource bridge network requirements](/azure/azure-arc/resource-bridge/network-requirements).-- Verify your vCenter Server version is 6.7 or higher.
+- The firewall and proxy URLs must be allowlisted to enable communication from the management machine and Appliance VM to the required Arc resource bridge URLs. See the [Azure Arc resource bridge network requirements](/azure/azure-arc/resource-bridge/network-requirements).
+- Verify your vCenter Server version is 7.0 or higher.
- A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs. - A datastore with a minimum of 100 GB of free disk space is available through the resource pool or cluster. -- On the vCenter Server, allow inbound connections on TCP port 443. This action ensures that the Arc resource bridge and VMware vSphere cluster extension can communicate with the vCenter Server. > [!NOTE] > - Private endpoint is currently not supported. > - DHCP support isn't available to customers at this time, only static IP addresses are currently supported.
+If you want to use a custom DNS, use the following steps:
-## Registration to Arc for Azure VMware Solution feature set
-
-The following **Register features** are for provider registration using Azure CLI.
-
-```azurecli
-az provider register --namespace Microsoft.ConnectedVMwarevSphere
-az provider register --namespace Microsoft.ExtendedLocation
-az provider register --namespace Microsoft.KubernetesConfiguration
-az provider register --namespace Microsoft.ResourceConnector
-az provider register --namespace Microsoft.AVS
-```
-Alternately, users can sign in to their Subscription, navigate to the **Resource providers** tab, and register themselves on the resource providers mentioned previously.
-
+1. In your Azure VMware Solution private cloud, under **Workload networking**, select **DNS** and identify the default forwarder zones under the **DNS zones** tab.
+1. Edit the forwarder zone to add the custom DNS server IP. Adding the custom DNS server as the first IP forwards requests directly to it and decreases the number of retries.
## Onboard process to deploy Azure Arc Use the following steps to guide you through the process to onboard Azure Arc for Azure VMware Solution.
-1. Sign in to the jumpbox VM and extract the contents from the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases/latest). The extracted file contains the scripts to install the preview software.
+1. Sign in to the Management VM and extract the contents from the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases/latest). The extracted file contains the scripts to install the software.
2. Open the 'config_avs.json' file and populate all the variables. **Config JSON**
Use the following steps to guide you through the process to onboard Azure Arc fo
- `GatewayIPAddress` is the gateway for the segment for Arc appliance VM. - `applianceControlPlaneIpAddress` is the IP address for the Kubernetes API server that should be part of the segment IP CIDR provided. It shouldn't be part of the K8s node pool IP range. - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd` are the starting and ending IP of the pool of IPs to assign to the appliance VM. Both need to be within the `networkCIDRForApplianceVM`.
- - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd`, `gatewayIPAddress` ,`applianceControlPlaneIpAddress` are optional. You can choose to skip all the optional fields or provide values for all. If you choose not to provide the optional fields, then you must use /28 address space for `networkCIDRForApplianceVM`
+ - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd`, `gatewayIPAddress`, `applianceControlPlaneIpAddress` are optional. You can choose to skip all the optional fields or provide values for all. If you choose not to provide the optional fields, then you must use /28 address space for `networkCIDRForApplianceVM` with the first IP as the gateway.
+ - If all the parameters are provided, the firewall and proxy URLs must be allowlisted for the IPs between `k8sNodeIPPoolStart` and `k8sNodeIPPoolEnd`.
+ - If you're skipping the optional fields, the firewall and proxy URLs must be allowlisted for the following IPs in the segment. If the `networkCIDRForApplianceVM` is x.y.z.1/28, the IPs to allowlist are between x.y.z.11 and x.y.z.14. See the [Azure Arc resource bridge network requirements](/azure/azure-arc/resource-bridge/network-requirements).
**Json example** ```json
Once you connected your Azure VMware Solution private cloud to Azure, you can br
Repeat the previous steps for one or more virtual machine, network, resource pool, and VM template resources. For virtual machines, there is also a section to configure **VM extensions**, which enables guest management so that more Azure extensions can be installed on the VM. The steps to enable this are:+ 1. Select **Enable guest management**. 2. Choose a __Connectivity Method__ for the Arc agent. 3. Provide an Administrator/Root access username and password for the VM.
-If you choose to enable the guest management as a separate step or have issues with the VM extension install steps please review the prerequisites and steps discussed in the section below.
+If you choose to enable guest management as a separate step, or if you have issues with the VM extension install steps, review the prerequisites and steps discussed in the following section.
## Enable guest management and extension installation
-Before you install an extension, you need to enable guest management on the VMware VM.
+Before you install an extension, you must enable guest management on the VMware VM.
### Prerequisite
You need to enable guest management on the VMware VM before you can install an e
1. Select **Configuration** from the left navigation for a VMware VM. 1. Verify **Enable guest management** is now checked.
-From here additional extensions can be installed. See the [VM extensions](/azure/azure-arc/servers/manage-vm-extensions?branch=main) for a list of current extensions.
-
-### Install extensions
-To add extensions, follow these steps:
-1. Go to **vCenter Server Inventory >** **Virtual Machines** and select the virtual machine to which you need to add an extension.
-2. Locate **Settings >** **Extensions** from the left navigation and select **Add**. Alternatively, in the **Overview** page an **Extensions** click-through is listed under Properties.
-1. Select the extension you want to install. Some extensions require additional information.
-4. When you're done, select **Review + create**.
+From here additional extensions can be installed. See the [VM extensions Overview](/azure/azure-arc/servers/manage-vm-extensions) for a list of current extensions.
### Next Steps To manage Arc-enabled Azure VMware Solution go to: [Manage Arc-enabled Azure VMware private cloud - Azure VMware Solution](/azure/azure-vmware/manage-arc-enabled-azure-vmware-solution)- To remove Arc-enabled Azure VMware Solution resources from Azure go to: [Remove Arc-enabled Azure VMware Solution vSphere resources from Azure - Azure VMware Solution](/azure/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure)
azure-vmware Manage Arc Enabled Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/manage-arc-enabled-azure-vmware-solution.md
Title: Manage Arc-enabled Azure VMware private cloud
description: Learn how to manage your Arc-enabled Azure VMware private cloud. Previously updated : 12/18/2023 Last updated : 2/6/2024
The following command invokes the set credential for the specified appliance res
## Upgrade the Arc resource bridge
+> [!NOTE]
+> Arc resource bridges, on a supported [private cloud provider](/azure/azure-arc/resource-bridge/upgrade#private-cloud-providers) with an appliance version **1.0.15 or higher**, are automatically opted in to [cloud-managed upgrade](/azure/azure-arc/resource-bridge/upgrade#cloud-managed-upgrade). 
+ Azure Arc-enabled Azure VMware Private Cloud requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge are released to include security and feature updates.
+
+ The Arc resource bridge can be manually upgraded from the vCenter server. You must meet all upgrade [prerequisites](/azure/azure-arc/resource-bridge/upgrade#prerequisites) before attempting to upgrade. The vCenter server must have the kubeconfig and appliance configuration files stored locally. If the cloudadmin credentials change after the initial deployment of the resource bridge, [update the Arc appliance credential](/azure/azure-vmware/manage-arc-enabled-azure-vmware-solution#update-arc-appliance-credential) before you attempt a manual upgrade.
+
+ Arc resource bridge can be manually upgraded from the management machine. The [manual upgrade](/azure/azure-arc/resource-bridge/upgrade#manual-upgrade) generally takes 30 to 90 minutes, depending on the network speed. The upgrade command takes your Arc resource bridge to the immediate next version, which might not be the latest available version. Multiple upgrades could be needed to reach a [supported version](/azure/azure-arc/resource-bridge/upgrade#supported-versions). Verify your resource bridge version by checking the Azure resource of your Arc resource bridge.
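A manual upgrade is typically invoked with the Azure CLI `arcappliance` extension from the machine that holds the kubeconfig and appliance configuration files. The following is a minimal sketch; the exact subcommand and flags are an assumption here, so confirm them against the linked upgrade prerequisites before running:

```bash
# Sketch only: assumed command shape for a manual Arc resource bridge upgrade
# for a VMware-based private cloud, run from the machine that stores the
# appliance configuration file. Verify the command in the upgrade article first.
az arcappliance upgrade vmware --config-file ./vmware-appliance.yaml
```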
-Arc resource bridges, on a supported [private cloud provider](/azure/azure-arc/resource-bridge/upgrade#private-cloud-providers) with an appliance version 1.0.15 or higher, are automatically opted in to [cloud-managed upgrade](/azure/azure-arc/resource-bridge/upgrade#cloud-managed-upgrade). 
- ## Collect logs from the Arc resource bridge
azure-vmware Remove Arc Enabled Azure Vmware Solution Vsphere Resources From Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure.md
During onboarding, to create a connection between your VMware vCenter and Azure,
As a last step, run the following command:
-`az rest --method delete`
+`az rest --method delete --url https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.AVS/privateClouds/<private-cloud-name>/addons/arc?api-version=2022-05-01`
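For readability, the same call with hypothetical placeholder values filled in might look like the following; substitute your own subscription ID, resource group, and private cloud name:

```bash
# Hypothetical values shown for illustration only
az rest --method delete \
  --url "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/myResourceGroup/providers/Microsoft.AVS/privateClouds/myPrivateCloud/addons/arc?api-version=2022-05-01"
```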
Once that step is done, Arc no longer works on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it doesn't affect the Azure VMware Solution private cloud for the customer.
backup Azure Kubernetes Service Cluster Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-support-matrix.md
Title: Azure Kubernetes Service (AKS) backup support matrix description: This article provides a summary of support settings and limitations of Azure Kubernetes Service (AKS) backup. Previously updated : 12/25/2023 Last updated : 02/16/2024 - references_regions - ignite-2023
You can use [Azure Backup](./backup-overview.md) to help protect Azure Kubernete
- During restore from Vault Tier, the provided staging location shouldn't have a *Read*/*Delete Lock*; otherwise, hydrated resources aren't cleaned after restore.
+- Don't install the AKS Backup Extension along with Velero or other Velero-based backup services. Doing so could disrupt the backup service during any future Velero upgrades, whether driven by you or by AKS Backup.
+
## Next steps

- [About Azure Kubernetes Service cluster backup](azure-kubernetes-service-cluster-backup-concept.md)
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
# Azure Chaos Studio fault and action library
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
- The faults listed in this article are currently available for use. To understand which resource types are supported, see [Supported resource types and role assignments for Azure Chaos Studio](./chaos-studio-fault-providers.md).

## Time delay
The faults listed in this article are currently available for use. To understand
| Target type | Microsoft-Agent | | Supported OS types | Windows, Linux. | | Description | Adds CPU pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial CPU pressure is removed at the end of the duration or if the experiment is canceled. On Windows, the **% Processor Utility** performance counter is used at fault start to determine current CPU percentage, which is subtracted from the `pressureLevel` defined in the fault so that **% Processor Utility** hits approximately the `pressureLevel` defined in the fault parameters. |
-| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
+| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
| | **Windows**: None. | | Urn | urn:csci:microsoft:agent:cpuPressure/1.0 | | Parameters (key, value) |
Known issues on Linux:
| Target type | Microsoft-Agent | | Supported OS types | Windows, Linux. | | Description | Adds physical memory pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial physical memory pressure is removed at the end of the duration or if the experiment is canceled. |
-| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
+| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
| | **Windows**: None. | | Urn | urn:csci:microsoft:agent:physicalMemoryPressure/1.0 | | Parameters (key, value) | |
Currently, the Windows agent doesn't reduce memory pressure when other applicati
| Target type | Microsoft-Agent | | Supported OS types | Linux | | Description | Uses stress-ng to apply pressure to the disk. One or more worker processes are spawned that perform I/O processes with temporary files. Pressure is added to the primary disk by default, or the disk specified with the targetTempDirectory parameter. For information on how pressure is applied, see the [stress-ng](https://wiki.ubuntu.com/Kernel/Reference/stress-ng) article. |
-| Prerequisites | The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
+| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
| Urn | urn:csci:microsoft:agent:linuxDiskIOPressure/1.1 | | Parameters (key, value) | | | workerCount | Number of worker processes to run. Setting `workerCount` to 0 generated as many worker processes as there are number of processors. |
Currently, the Windows agent doesn't reduce memory pressure when other applicati
| Target type | Microsoft-Agent | | Supported OS types | Linux | | Description | Runs any stress-ng command by passing arguments directly to stress-ng. Useful when one of the predefined faults for stress-ng doesn't meet your needs. |
-| Prerequisites | The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on Debian-based systems (including Ubuntu), Red Hat Enterprise Linux, CentOS, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
+| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. This happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, you must install **stress-ng** manually. |
| Urn | urn:csci:microsoft:agent:stressNg/1.0 | | Parameters (key, value) | | | stressNgArguments | One or more arguments to pass to the stress-ng process. For information on possible stress-ng arguments, see the [stress-ng](https://wiki.ubuntu.com/Kernel/Reference/stress-ng) article. |
Currently, only virtual machine scale sets configured with the **Uniform** orche
### Limitations

* A maximum of 1000 topic entities can be passed to this fault.
+
+## Change Event Hub State
+
+| Property | Value |
+| - | |
+| Capability name | ChangeEventHubState-1.0 |
+| Target type | Microsoft-EventHub |
+| Description | Sets individual event hubs to the desired state within an Azure Event Hubs namespace. You can affect specific event hub names or use "*" to affect all within the namespace. This can help test your messaging infrastructure for maintenance or failure scenarios. This is a discrete fault, so the entity will not be returned to the starting state automatically. |
+| Prerequisites | An Azure Event Hubs namespace with at least one [event hub entity](../event-hubs/event-hubs-create.md). |
+| Urn | urn:csci:microsoft:eventHub:changeEventHubState/1.0 |
+| Fault type | Discrete. |
+| Parameters (key, value) | |
+| desiredState | The desired state for the targeted event hubs. The possible states are Active, Disabled, and SendDisabled. |
+| eventHubs | A comma-separated list of the event hub names within the targeted namespace. Use "*" to affect all entities within the namespace. |
+
+### Sample JSON
+
+```json
+{
+ "name": "Branch1",
+ "actions": [
+ {
+ "selectorId": "Selector1",
+ "type": "discrete",
+ "parameters": [
+ {
+ "key": "eventhubs",
+ "value": "[\"*\"]"
+ },
+ {
+ "key": "desiredState",
+ "value": "Disabled"
+ }
+ ],
+ "name": "urn:csci:microsoft:eventHub:changeEventHubState/1.0"
+ }
+ ]
+}
+```
chaos-studio Chaos Studio Fault Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-providers.md
The following table lists the supported resource types for faults, the target types, and suggested roles to use when you give an experiment permission to a resource of that type.
+More information about role assignments can be found on the [Azure built-in roles page](../role-based-access-control/built-in-roles.md).
+
| Resource type | Target name/type | Suggested role assignment |
|-|--|-|
-| Microsoft.Cache/Redis (service-direct) | Microsoft-AzureCacheForRedis | Redis Cache Contributor |
-| Microsoft.ClassicCompute/domainNames (service-direct) | Microsoft-DomainNames | Classic Virtual Machine Contributor |
-| Microsoft.Compute/virtualMachines (agent-based) | Microsoft-Agent | Reader |
-| Microsoft.Compute/virtualMachineScaleSets (agent-based) | Microsoft-Agent | Reader |
-| Microsoft.Compute/virtualMachines (service-direct) | Microsoft-VirtualMachine | Virtual Machine Contributor |
-| Microsoft.Compute/virtualMachineScaleSets (service-direct) | Microsoft-VirtualMachineScaleSet | Virtual Machine Contributor |
-| Microsoft.ContainerService/managedClusters (service-direct) | Microsoft-AzureKubernetesServiceChaosMesh | Azure Kubernetes Service Cluster Admin Role |
-| Microsoft.DocumentDb/databaseAccounts (CosmosDB, service-direct) | Microsoft-CosmosDB | Azure Cosmos DB Operator |
-| Microsoft.Insights/autoscalesettings (service-direct) | Microsoft-AutoScaleSettings | Web Plan Contributor |
-| Microsoft.KeyVault/vaults (service-direct) | Microsoft-KeyVault | Azure Key Vault Contributor |
-| Microsoft.Network/networkSecurityGroups (service-direct) | Microsoft-NetworkSecurityGroup | Network Contributor |
-| Microsoft.Web/sites (service-direct) | Microsoft-AppService | Website Contributor |
+| Microsoft.Cache/Redis (service-direct) | Microsoft-AzureCacheForRedis | [Redis Cache Contributor](../role-based-access-control/built-in-roles.md#redis-cache-contributor) |
+| Microsoft.ClassicCompute/domainNames (service-direct) | Microsoft-DomainNames | [Classic Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#classic-virtual-machine-contributor) |
+| Microsoft.Compute/virtualMachines (agent-based) | Microsoft-Agent | [Reader](../role-based-access-control/built-in-roles.md#reader) |
+| Microsoft.Compute/virtualMachineScaleSets (agent-based) | Microsoft-Agent | [Reader](../role-based-access-control/built-in-roles.md#reader) |
+| Microsoft.Compute/virtualMachines (service-direct) | Microsoft-VirtualMachine | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
+| Microsoft.Compute/virtualMachineScaleSets (service-direct) | Microsoft-VirtualMachineScaleSet | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
+| Microsoft.ContainerService/managedClusters (service-direct) | Microsoft-AzureKubernetesServiceChaosMesh | [Azure Kubernetes Service Cluster Admin Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) |
+| Microsoft.DocumentDb/databaseAccounts (Cosmos DB, service-direct) | Microsoft-CosmosDB | [Azure Cosmos DB Operator](../role-based-access-control/built-in-roles.md#cosmos-db-operator) |
+| Microsoft.Insights/autoscalesettings (service-direct) | Microsoft-AutoScaleSettings | [Web Plan Contributor](../role-based-access-control/built-in-roles.md#web-plan-contributor) |
+| Microsoft.KeyVault/vaults (service-direct) | Microsoft-KeyVault | [Azure Key Vault Contributor](../role-based-access-control/built-in-roles.md#key-vault-contributor) |
+| Microsoft.Network/networkSecurityGroups (service-direct) | Microsoft-NetworkSecurityGroup | [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor) |
+| Microsoft.Web/sites (service-direct) | Microsoft-AppService | [Website Contributor](../role-based-access-control/built-in-roles.md#website-contributor) |
+| Microsoft.ServiceBus/namespaces (service-direct) | Microsoft-ServiceBus | [Azure Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner) |
+| Microsoft.EventHub/namespaces (service-direct) | Microsoft-EventHub | [Azure Event Hubs Data Owner](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-owner) |
chaos-studio Chaos Studio Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-versions.md
Chaos Studio currently tests with the following version combinations.
| Chaos Studio fault version | Kubernetes version | Chaos Mesh version | Notes |
|::|::|::|::|
+| 2.1 | 1.27 | 2.6.3 | |
| 2.1 | 1.25.11 | 2.5.1 | |

The *Chaos Studio fault version* column refers to the individual fault version for each AKS Chaos Mesh fault used in the experiment JSON, for example `urn:csci:microsoft:azureKubernetesServiceChaosMesh:podChaos/2.1`. If a past version of the corresponding Chaos Studio fault remains available from the Chaos Studio API (for example, `...podChaos/1.0`), it is within support.
cloud-shell Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md
description: Overview of features in Azure Cloud Shell ms.contributor: jahelmic Previously updated : 12/06/2023 Last updated : 02/15/2024 tags: azure-resource-manager Title: Azure Cloud Shell features # Features & tools for Azure Cloud Shell
-Azure Cloud Shell is a browser-based shell experience to manage and develop Azure resources.
-
-Cloud Shell offers a browser-accessible, preconfigured shell experience for managing Azure
-resources without the overhead of installing, versioning, and maintaining a machine yourself.
-
-Cloud Shell allocates machines on a per-request basis and as a result machine state doesn't
-persist across sessions. Since Cloud Shell is built for interactive sessions, shells automatically
-terminate after 20 minutes of shell inactivity.
+Azure Cloud Shell is a browser-based terminal that provides an authenticated, preconfigured shell
+experience for managing Azure resources without the overhead of installing and maintaining a machine
+yourself.
Azure Cloud Shell runs on **Azure Linux**, Microsoft's Linux distribution for cloud infrastructure
-edge products and services. Microsoft internally compiles all the packages included in the **Azure
-Linux** repository to help guard against supply chain attacks.
+edge products and services. You can choose Bash or PowerShell as your default shell.
## Features
-### Secure automatic authentication
+### Secure environment
+
+Microsoft internally compiles all the packages included in the **Azure Linux** repository to help
+guard against supply chain attacks. For more information or to request changes to the **Azure
+Linux** image, see the [Cloud Shell GitHub repository][24].
-Cloud Shell securely and automatically authenticates account access for the Azure CLI and Azure
-PowerShell.
+Cloud Shell automatically authenticates your Azure account to allow secure access for Azure CLI,
+Azure PowerShell, and other cloud management tools.
### $HOME persistence across sessions
-To persist files across sessions, Cloud Shell walks you through attaching an Azure file share on
-first launch. Once completed, Cloud Shell will automatically attach your storage (mounted as
-`$HOME\clouddrive`) for all future sessions. Additionally, your `$HOME` directory is persisted as an
-.img in your Azure File share. Files outside of `$HOME` and machine state aren't persisted across
-sessions. Learn more about [Persisting files in Cloud Shell][09].
+When you start Cloud Shell for the first time, you have the option of using Cloud Shell with or
+without an attached storage account. Choosing to continue without storage is the fastest way to
+start using Cloud Shell. In Cloud Shell, this is known as an _ephemeral session_. When you close the
+Cloud Shell window, all files you saved are deleted and don't persist across sessions.
+
+To persist files across sessions, you can choose to mount a storage account. Cloud Shell
+automatically attaches your storage (mounted as `$HOME\clouddrive`) for all future sessions.
+Additionally, your `$HOME` directory is persisted as an `.img` file in your Azure File share. The
+machine state and files outside of `$HOME` aren't persisted across sessions. Learn more about
+[Persisting files in Cloud Shell][32].
Use best practices when storing secrets such as SSH keys. You can use Azure Key Vault to securely
-store and retrieve your keys. For more information, see [Manage Key Vault using the Azure CLI][02].
+store and retrieve your keys. For more information, see [Manage Key Vault using the Azure CLI][05].
### Azure drive (Azure:)

PowerShell in Cloud Shell provides the Azure drive (`Azure:`). You can switch to the Azure drive with `cd Azure:` and back to your home directory with `cd ~`. The Azure drive enables easy discovery and navigation of Azure resources such as Compute, Network, Storage etc. similar to
-filesystem navigation. You can continue to use the familiar [Azure PowerShell cmdlets][14] to manage
-these resources regardless of the drive you are in. Any changes made to the Azure resources, either
-made directly in Azure portal or through Azure PowerShell cmdlets, are reflected in the Azure drive.
-You can run `dir -Force` to refresh your resources.
-
-![Screenshot of an Azure Cloud Shell being initialized and a list of directory resources.][06]
-
-### Manage Exchange Online
+filesystem navigation. You can continue to use the familiar [Azure PowerShell cmdlets][09] to manage
+these resources regardless of the drive you are in.
-PowerShell in Cloud Shell contains the ExchangeOnlineManagement module. Run the following command to
-get a list of Exchange cmdlets.
-
-```powershell
-Get-Command -Module ExchangeOnlineManagement
-```
-
-For more information about using the ExchangeOnlineManagement module, see
-[Exchange Online PowerShell][15].
+> [!NOTE]
+> Any changes made to the Azure resources, either made directly in Azure portal or through Azure
+> PowerShell cmdlets, are reflected in the `Azure:` drive. However, you must run `dir -Force` to
+> refresh the view of your resources in the `Azure:`.
### Deep integration with open source tooling

Cloud Shell includes preconfigured authentication for open source tools such as Terraform, Ansible, and Chef InSpec. For more information, see the following articles:

-- [Run Ansible playbook][11]
-- [Manage your Azure dynamic inventories][10]
-- [Install and configure Terraform][12]
+- [Run Ansible playbook][03]
+- [Manage your Azure dynamic inventories][02]
+- [Install and configure Terraform][04]
-### Preinstalled tools
+## Preinstalled tools
-The most commonly used tools are preinstalled in Cloud Shell.
+The most commonly used tools are preinstalled in Cloud Shell. If you're using PowerShell, use the
+`Get-PackageVersion` command to see a more complete list of tools and versions. If you're using
+Bash, use the `tdnf list` command.
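For example, from a Bash session you might list what the image provides and confirm a couple of individual tool versions; the exact output varies with the current Cloud Shell image:

```bash
# List packages in the Cloud Shell (Azure Linux) image; pipe to head to keep output short
tdnf list | head

# Check versions of a couple of preinstalled tools
az --version
terraform version
```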
-#### Azure tools
+### Azure tools
Cloud Shell comes with the following Azure command-line tools preinstalled:
-| Tool | Version | Command |
-| - | -- | |
-| [Azure CLI][13] | 2.55.0 | `az --version` |
-| [Azure PowerShell][14] | 11.1.0 | `Get-Module Az -ListAvailable` |
-| [AzCopy][04] | 10.15.0 | `azcopy --version` |
-| [Azure Functions CLI][01] | 4.0.5390 | `func --version` |
-| [Service Fabric CLI][03] | 11.2.0 | `sfctl --version` |
-| [Batch Shipyard][18] | 3.9.1 | `shipyard --version` |
-| [blobxfer][19] | 1.11.0 | `blobxfer --version` |
-
-You can verify the version of the language using the command listed in the table.
-Use the `Get-PackageVersion` to see a more complete list of tools and versions.
-
-#### Linux tools
--- bash-- zsh-- sh-- tmux-- dig-
-#### Text editors
+- [Azure CLI][08]
+- [Azure PowerShell][09]
+- [Az.Tools.Predictor][10]
+- [AzCopy][07]
+- [Azure Functions CLI][01]
+- [Service Fabric CLI][06]
+- [Batch Shipyard][17]
+- [blobxfer][18]
+
+### Other Microsoft services
+
+- [Office 365 CLI][28]
+- [Exchange Online PowerShell][11]
+- A basic set of [Microsoft Graph PowerShell][12] modules
+ - Microsoft.Graph.Applications
+ - Microsoft.Graph.Authentication
+ - Microsoft.Graph.Groups
+ - Microsoft.Graph.Identity.DirectoryManagement
+ - Microsoft.Graph.Identity.Governance
+ - Microsoft.Graph.Identity.SignIns
+ - Microsoft.Graph.Users.Actions
+ - Microsoft.Graph.Users.Functions
+- [MicrosoftPowerBIMgmt][13] PowerShell modules
+- [SqlServer][14] PowerShell modules
+
+### Productivity tools
+
+Linux tools
+
+- `bash`
+- `zsh`
+- `sh`
+- `tmux`
+- `dig`
+
+Text editors
- Cloud Shell editor (code)
- vim
- nano
- emacs
-#### Source control
+### Cloud management tools
-- Git-- GitHub CLI
+- [Docker Desktop][23]
+- [Kubectl][27]
+- [Helm][26]
+- [D2iQ Kubernetes Platform CLI][22]
+- [Cloud Foundry CLI][21]
+- [Terraform][31]
+- [Ansible][30]
+- [Chef InSpec][20]
+- [Puppet Bolt][29]
+- [HashiCorp Packer][19]
-#### Build tools
+## Developer tools
-- make-- maven-- npm-- pip
+Build tools
-#### Containers
+- `make`
+- `maven`
+- `npm`
+- `pip`
-- [Docker Desktop][24]-- [Kubectl][29]-- [Helm][28]-- [D2iQ Kubernetes Platform CLI][23]
+Source control
+
+- Git
+- GitHub CLI
-#### Databases
+Database tools
- MySQL client - PostgreSql client-- [sqlcmd Utility][17]-- [mssql-scripter][27]-
-#### Other
--- iPython Client-- [Cloud Foundry CLI][22]-- [Terraform][33]-- [Ansible][32]-- [Chef InSpec][21]-- [Puppet Bolt][31]-- [HashiCorp Packer][20]-- [Office 365 CLI][30]-
-### Preinstalled developer languages
-
-Cloud Shell comes with the following languages preinstalled:
+- [sqlcmd Utility][15]
+- [mssql-scripter][25]
-| Language | Version | Command |
-| - | - | |
-| .NET Core | [7.0.400][25] | `dotnet --version` |
-| Go | 1.19.11 | `go version` |
-| Java | 17.0.8 | `java --version` |
-| Node.js | 16.20.1 | `node --version` |
-| PowerShell | [7.4.0][16] | `pwsh -Version` |
-| Python | 3.9.14 | `python --version` |
-| Ruby | 3.1.4p223 | `ruby --version` |
+Programming languages
-You can verify the version of the language using the command listed in the table.
+- .NET Core 7.0
+- PowerShell 7.4
+- Node.js
+- Java
+- Python 3.9
+- Ruby
+- Go
## Next steps

-- [Cloud Shell Quickstart][05]
-- [Learn about Azure CLI][13]
-- [Learn about Azure PowerShell][14]
+- [Cloud Shell Quickstart][16]
+- [Learn about Azure CLI][08]
+- [Learn about Azure PowerShell][09]
<!-- link references -->
-[01]: ../azure-functions/functions-run-local.md
-[02]: ../key-vault/general/manage-with-cli2.md#prerequisites
-[03]: ../service-fabric/service-fabric-cli.md
-[04]: ../storage/common/storage-use-azcopy-v10.md
-[05]: ./get-started.md
-[06]: ./media/features/azure-drive.png
-[09]: ./persisting-shell-storage.md
-[10]: /azure/developer/ansible/dynamic-inventory-configure
-[11]: /azure/developer/ansible/getting-started-cloud-shell
-[12]: /azure/developer/terraform/quickstart-configure
-[13]: /cli/azure/
-[14]: /powershell/azure
-[15]: /powershell/exchange/exchange-online-powershell
-[16]: /powershell/scripting/whats-new/what-s-new-in-powershell-74
-[17]: /sql/tools/sqlcmd-utility
-[18]: https://batch-shipyard.readthedocs.io/en/latest/
-[19]: https://blobxfer.readthedocs.io/en/latest/
-[20]: https://developer.hashicorp.com/packer/docs
-[21]: https://docs.chef.io/
-[22]: https://docs.cloudfoundry.org/cf-cli/
-[23]: https://docs.d2iq.com/dkp/2.6/azure-infrastructure
-[24]: https://docs.docker.com/desktop/
-[25]: https://dotnet.microsoft.com/download/dotnet/7.0
-[27]: https://github.com/microsoft/mssql-scripter/blob/dev/doc/usage_guide.md
-[28]: https://helm.sh/docs/
-[29]: https://kubernetes.io/docs/reference/kubectl/
-[30]: https://pnp.github.io/office365-cli/
-[31]: https://puppet.com/docs/bolt/latest/bolt.html
-[32]: https://www.ansible.com/microsoft-azure
-[33]: https://www.terraform.io/docs/providers/azurerm/
+[01]: /azure/azure-functions/functions-run-local
+[02]: /azure/developer/ansible/dynamic-inventory-configure
+[03]: /azure/developer/ansible/getting-started-cloud-shell
+[04]: /azure/developer/terraform/quickstart-configure
+[05]: /azure/key-vault/general/manage-with-cli2#prerequisites
+[06]: /azure/service-fabric/service-fabric-cli
+[07]: /azure/storage/common/storage-use-azcopy-v10
+[08]: /cli/azure/
+[09]: /powershell/azure
+[10]: /powershell/azure/predictor-overview
+[11]: /powershell/exchange/exchange-online-powershell
+[12]: /powershell/module/?term=Microsoft.Graph
+[13]: /powershell/module/?term=MicrosoftPowerBIMgmt
+[14]: /powershell/module/sqlserver
+[15]: /sql/tools/sqlcmd-utility
+[16]: get-started.md
+[17]: https://batch-shipyard.readthedocs.io/en/latest/
+[18]: https://blobxfer.readthedocs.io/en/latest/
+[19]: https://developer.hashicorp.com/packer/docs
+[20]: https://docs.chef.io/
+[21]: https://docs.cloudfoundry.org/cf-cli/
+[22]: https://docs.d2iq.com/dkp/2.6/azure-infrastructure
+[23]: https://docs.docker.com/desktop/
+[24]: https://github.com/Azure/CloudShell
+[25]: https://github.com/microsoft/mssql-scripter/blob/dev/doc/usage_guide.md
+[26]: https://helm.sh/docs/
+[27]: https://kubernetes.io/docs/reference/kubectl/
+[28]: https://pnp.github.io/office365-cli/
+[29]: https://puppet.com/docs/bolt/latest/bolt.html
+[30]: https://www.ansible.com/microsoft-azure
+[31]: https://www.terraform.io/docs/providers/azurerm/
+[32]: persisting-shell-storage.md
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
The following list presents the set of features that are currently available in
| | Mute participant | ✔️ | ✔️ | ✔️ | ✔️ | | | Remove one or more endpoints from an existing call| ✔️ | ✔️ | ✔️ | ✔️ | | | Blind Transfer* a 1:1 call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Blind Transfer* a participant from group call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Blind Transfer* a participant from group call to another endpoint| ✔️ | ✔️ | ✔️ | ✔️ |
| | Hang up a call (remove the call leg) | ✔️ | ✔️ | ✔️ | ✔️ | | | Terminate a call (remove all participants and end call)| ✔️ | ✔️ | ✔️ | ✔️ | | | Cancel media operations | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Share [custom info](../../how-tos/call-automation/custom-context.md) (via VoIP or SIP headers) with endpoints when adding them to a call or transferring a call to them | ✔️ | ✔️ | ✔️ | ✔️ |
| Query scenarios | Get the call state | ✔️ | ✔️ | ✔️ | ✔️ | | | Get a participant in a call | ✔️ | ✔️ | ✔️ | ✔️ | | | List all participants in a call | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Video Constraints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/video-constraints.md
Previously updated : 2/20/2023 Last updated : 2/15/2024
# Video constraints
+The Video Constraints API is a powerful tool that enables developers to control the video quality from within their video calls. With this API, developers can set maximum video resolutions, frame rate, and bitrate used so that the call is optimized for the user's device and network conditions. The Azure Communication Services video engine is optimized to allow the video quality to change dynamically based on a device's capability and network quality. But there might be certain scenarios where you would want to have tighter control of the video quality that end users experience. For instance, there may be situations where the highest video quality isn't a priority, or you may want to limit the video bandwidth usage in the application. To support those use cases, you can use the Video Constraints API to have tighter control over video quality.
-The Video Constraints API is a powerful tool that enables developers to control the video quality from within their video calls. With this API, developers can set maximum video resolutions, frame rate, and bitrate used so that the call is optimized for the user's device and network conditions. The Azure Communication Services video engine is optimized to allow the video quality to change dynamically based on devices ability and network quality. But there might be certain scenarios where you would want to have tighter control of the video quality that end users experience. For instance, there may be situations where the highest video quality is not a priority, or you may want to limit the video bandwidth usage in the application. To support those use cases, you can use the Video Constraints API to have tighter control over video quality.
-
-Another benefit of the Video Constraints API is that it enables developers to optimize the video call for different devices. For example, if a user is using an older device with limited processing power, developers can set constraints on the video resolution to ensure that the video call runs smoothly on that device
-
-Azure Communication Services Web Calling SDK supports setting the maximum video resolution, framerate, or bitrate that a client sends. The sender video constraints are supported on Desktop browsers (Chrome, Edge, Firefox) and when using iOS Safari mobile browser or Android Chrome mobile browser.
-
-The native Calling SDK (Android, iOS, Windows) supports setting the maximum values of video resolution and framerate for outgoing video streams and setting the maximum resolution for incoming video streams. These constraints can be set at the start of the call and during the call.
+Another benefit of the Video Constraints API is that it enables developers to optimize the video call for different devices. For example, if a user is using an older device with limited processing power, developers can set constraints on the video resolution to ensure that the video call runs smoothly on that device.
## Supported constraints

| Platform | Supported Constraints |
| -- | -- |
-| Web | Outgoing video: resolution, framerate, bitrate |
-| Android | Incoming video: resolution<br />Outgoing video: resolution, framerate |
-| iOS | Incoming video: resolution<br />Outgoing video: resolution, framerate |
-| Windows | Incoming video: resolution<br />Outgoing video: resolution, framerate |
+| **Web** | **Incoming video**: resolution<br />**Outgoing video**: resolution, framerate, bitrate |
+| **Android** | **Incoming video**: resolution<br />**Outgoing video**: resolution, framerate |
+| **iOS** | **Incoming video**: resolution<br />**Outgoing video**: resolution, framerate |
+| **Windows** | **Incoming video**: resolution<br />**Outgoing video**: resolution, framerate |
## Next steps

For more information, see the following articles:
communication-services Record Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/record-calls.md
zone_pivot_groups: acs-plat-web-ios-android-windows
[!INCLUDE [Public Preview Disclaimer](../../includes/public-preview-include-document.md)]
-[Call recording](../../concepts/voice-video-calling/call-recording.md), lets your users record their calls made with Azure Communication Services. Here we learn how to manage recording on the client side. Before this can work, you'll need to set up [server side](../../quickstarts/voice-video-calling/call-recording-sample.md) recording.
+[Call recording](../../concepts/voice-video-calling/call-recording.md) lets your users record calls that they make with Azure Communication Services. In this article, you learn how to manage recording on the client side. Before you start, you need to set up recording on the [server side](../../quickstarts/voice-video-calling/call-recording-sample.md).
## Prerequisites

-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). - A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md).-- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
+- Optional: Completion of the [quickstart to add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md).
::: zone pivot="platform-web" ::: zone-end ::: zone pivot="platform-android" ::: zone-end ::: zone pivot="platform-ios" ::: zone-end ::: zone pivot="platform-windows" ::: zone-end
-### Compliance Recording
-Compliance recording is Teams policy based recording that could be enabled using this tutorial: [Introduction to Teams policy-based recording for callings](/microsoftteams/teams-recording-policy).<br>
-Policy based recording will be started automatically when user with this policy will join a call. To get notification from Azure Communication Service about recording - we can use Cloud Recording section from this article.
+### Compliance recording
+
+Compliance recording is recording that's based on Microsoft Teams policy. You can enable it by using this tutorial: [Introduction to Teams policy-based recording for callings](/microsoftteams/teams-recording-policy).
+
+Policy-based recording starts automatically when a user who has the policy joins a call. To get a notification from Azure Communication Services about recording, use the following code:
```js
const callRecordingApi = call.feature(Features.Recording);
const isComplianceRecordingActiveChangedHandler = () => {
callRecordingApi.on('isRecordingActiveChanged', isComplianceRecordingActiveChangedHandler);
```
-Compliance recording could be implemented by using custom recording bot [GitHub Example](https://github.com/microsoftgraph/microsoft-graph-comms-samples/tree/a3943bafd73ce0df780c0e1ac3428e3de13a101f/Samples/BetaSamples/LocalMediaSamples/ComplianceRecordingBot).<br>
+You can also implement compliance recording by using a custom recording bot. See the [GitHub example](https://github.com/microsoftgraph/microsoft-graph-comms-samples/tree/a3943bafd73ce0df780c0e1ac3428e3de13a101f/Samples/BetaSamples/LocalMediaSamples/ComplianceRecordingBot).
## Next steps
+
- [Learn how to manage calls](./manage-calls.md)
- [Learn how to manage video](./manage-video.md)
- [Learn how to transcribe calls](./call-transcription.md)
communication-services Get Started Teams Auto Attendant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant.md
# Quickstart: Join your calling app to a Teams Auto Attendant
-
In this quickstart, you learn how to start a call from an Azure Communication Services user to a Teams Auto Attendant. You achieve it with the following steps:

1. Enable federation of Azure Communication Services resource with Teams Tenant.
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles:

-- Check out our [calling hero sample](../../samples/calling-hero-sample.md)
-- Get started with the [UI Library](../ui-library/get-started-composites.md)
+- Get started with [UI Calling to Teams Voice Apps](../../tutorials/calling-widget/calling-widget-tutorial.md)
- Learn about [Calling SDK capabilities](./getting-started-with-calling.md) - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Get Started Teams Call Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue.md
# Quickstart: Join your calling app to a Teams call queue
-
In this quickstart, you learn how to start a call from an Azure Communication Services user to a Teams Call Queue. You achieve it with the following steps:

1. Enable federation of Azure Communication Services resource with Teams Tenant.
If you want to clean up and remove a Communication Services subscription, you ca
For more information, see the following articles:

-- Check out our [calling hero sample](../../samples/calling-hero-sample.md)
-- Get started with the [UI Library](../ui-library/get-started-composites.md)
+- Get started with [UI Calling to Teams Voice Apps](../../tutorials/calling-widget/calling-widget-tutorial.md)
- Learn about [Calling SDK capabilities](./getting-started-with-calling.md) - Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
communication-services Calling Widget Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-tutorial.md
# Get started with Azure Communication Services UI library calling to Teams Voice Apps
-
![Home page of Calling Widget sample app](../media/calling-widget/sample-app-splash-widget-open.png)

This project aims to guide developers to initiate a call from the Azure Communication Services Calling Web SDK to Teams Call Queue and Auto Attendant using the Azure Communication UI Library.
If you wish to try it out, you can download the code from [GitHub](https://githu
Following this tutorial will:

- Allow you to control your customers' audio and video experience depending on your customer scenario
-- Teach you how to build a simple widget for starting calls on your webapp using the UI library.
+- Teach you how to build a widget for starting calls on your webapp using the UI library.
## Prerequisites
Following this tutorial will:
### Set up the project
-Only use this step if you are creating a new application.
+Only use this step if you're creating a new application.
To set up the react App, we use the `create-react-app` command line tool. This tool
-creates an easy to run TypeScript application powered by React. This command creates a simple react application using TypeScript.
+creates an easy to run TypeScript application powered by React. This command creates a react application using TypeScript.
```bash # Create an Azure Communication Services App powered by React.
cd ui-library-calling-widget-app
### Get your dependencies
-Then you need to update the dependency array in the `package.json` to include some packages from Azure Communication Services for the widget experience we are going to build to work:
+Then you need to update the dependency array in the `package.json` to include some packages from Azure Communication Services for the widget experience we're going to build to work:
```json
-"@azure/communication-calling": "1.19.1-beta.2",
-"@azure/communication-chat": "1.4.0-beta.1",
-"@azure/communication-react": "1.10.0-beta.1",
+"@azure/communication-calling": "1.22.1",
+"@azure/communication-chat": "1.4.0",
+"@azure/communication-react": "1.13.0",
"@azure/communication-calling-effects": "1.0.1", "@azure/communication-common": "2.3.0", "@fluentui/react-icons": "~2.0.203",
Your `App.tsx` file should look like this:
`src/App.tsx` ```ts
+import "./App.css";
+import {
+ CommunicationIdentifier,
+ MicrosoftTeamsAppIdentifier,
+} from "@azure/communication-common";
+import {
+ Spinner,
+ Stack,
+ initializeIcons,
+ registerIcons,
+ Text,
+} from "@fluentui/react";
+import { CallAdd20Regular, Dismiss20Regular } from "@fluentui/react-icons";
+import logo from "./logo.svg";
-import './App.css';
-import { CommunicationIdentifier, MicrosoftTeamsAppIdentifier } from '@azure/communication-common';
-import { Spinner, Stack, initializeIcons, registerIcons, Text } from '@fluentui/react';
-import { CallAdd20Regular, Dismiss20Regular } from '@fluentui/react-icons';
-import logo from './logo.svg';
-
-import { CallingWidgetComponent } from './components/CallingWidgetComponent';
+import { CallingWidgetComponent } from "./components/CallingWidgetComponent";
registerIcons({ icons: { dismiss: <Dismiss20Regular />, callAdd: <CallAdd20Regular /> },
function App() {
/** * Token for local user. */
- const token = "<Enter your Azure Communication Services token here>";
+ const token = "<Enter your ACS Token here>";
/** * User identifier for local user. */ const userId: CommunicationIdentifier = {
- communicationUserId: "<Enter your Azure Communication Services ID here>",
+ communicationUserId: "Enter your ACS Id here",
}; /** * Enter your Teams voice app identifier from the Teams admin center here */ const teamsAppIdentifier: MicrosoftTeamsAppIdentifier = {
- teamsAppId: '<Enter your teams voice app ID here>', cloud: 'public'
- }
+ teamsAppId: "<Enter your Teams Voice app id here>",
+ cloud: "public",
+ };
const widgetParams = { userId,
function App() {
teamsAppIdentifier, }; - if (!token || !userId || !teamsAppIdentifier) { return (
- <Stack verticalAlign='center' style={{ height: '100%', width: '100%' }}>
- <Spinner label={'Getting user credentials from server'} ariaLive="assertive" labelPosition="top" />;
+ <Stack verticalAlign="center" style={{ height: "100%", width: "100%" }}>
+ <Spinner
+ label={"Getting user credentials from server"}
+ ariaLive="assertive"
+ labelPosition="top"
+ />
+ ;
</Stack>
- )
-
+ );
} - return ( <Stack style={{ height: "100%", width: "100%", padding: "3rem" }} tokens={{ childrenGap: "1.5rem" }} >
- <Stack tokens={{ childrenGap: '1rem' }} style={{ margin: "auto" }}>
+ <Stack tokens={{ childrenGap: "1rem" }} style={{ margin: "auto" }}>
<Stack style={{ padding: "3rem" }} horizontal
function App() {
</Stack> <Text>
- Welcome to a Calling Widget sample for the Azure Communication Services UI
- Library. Sample has the ability to connect you through Teams voice apps to a agent to help you.
+ Welcome to a Calling Widget sample for the Azure Communication
+ Services UI Library. Sample has the ability to connect you through
+ Teams voice apps to a agent to help you.
</Text> <Text> As a user all you need to do is click the widget below, enter your
function App() {
action the <b>start call</b> button. </Text> </Stack>
- <Stack horizontal tokens={{ childrenGap: '1.5rem' }} style={{ overflow: 'hidden', margin: 'auto' }}>
+ <Stack
+ horizontal
+ tokens={{ childrenGap: "1.5rem" }}
+ style={{ overflow: "hidden", margin: "auto" }}
+ >
<CallingWidgetComponent widgetAdapterArgs={widgetParams} onRenderLogo={() => { return ( <img
- style={{ height: '4rem', width: '4rem', margin: 'auto' }}
+ style={{ height: "4rem", width: "4rem", margin: "auto" }}
src={logo} alt="logo" />
export default App;
```
-In this snippet we register two new icons `<Dismiss20Regular/>` and `<CallAdd20Regular>`. These new icons are used inside the widget component that we are creating in the next section.
+In this snippet, we register two new icons `<Dismiss20Regular/>` and `<CallAdd20Regular>`. These new icons are used inside the widget component that we're creating in the next section.
### Create the widget

Now we need to make a widget that can show in three different modes:

- Waiting: This widget state is how the component will be in before and after a call is made
- Setup: This state is when the widget asks for information from the user like their name.
- In a call: The widget is replaced here with the UI library Call Composite. This is the mode when the user is calling the Voice app or talking with an agent.
+- In a call: The widget is replaced here with the UI library Call Composite. This widget mode is when the user is calling the Voice app or talking with an agent.
-Lets create a folder called `src/components`. In this folder make a new file called `CallingWidgetComponent.tsx`. This file should look like the following snippet:
+Lets create a folder called `src/components`. In this folder, make a new file called `CallingWidgetComponent.tsx`. This file should look like the following snippet:
`CallingWidgetComponent.tsx` ```ts
-import { IconButton, PrimaryButton, Stack, TextField, useTheme, Checkbox, Icon } from '@fluentui/react';
-import React, { useState } from 'react';
import {
- callingWidgetSetupContainerStyles,
- checkboxStyles,
- startCallButtonStyles,
- callingWidgetContainerStyles,
- callIconStyles,
- logoContainerStyles,
- collapseButtonStyles,
- callingWidgetInCallContainerStyles
-} from '../styles/CallingWidgetComponent.styles';
-import { AzureCommunicationTokenCredential, CommunicationIdentifier, MicrosoftTeamsAppIdentifier } from '@azure/communication-common';
+ IconButton,
+ PrimaryButton,
+ Stack,
+ TextField,
+ useTheme,
+ Checkbox,
+ Icon,
+ Spinner,
+} from "@fluentui/react";
+import React, { useEffect, useRef, useState } from "react";
+import {
+ callingWidgetSetupContainerStyles,
+ checkboxStyles,
+ startCallButtonStyles,
+ callingWidgetContainerStyles,
+ callIconStyles,
+ logoContainerStyles,
+ collapseButtonStyles,
+} from "../styles/CallingWidgetComponent.styles";
+
+import {
+ AzureCommunicationTokenCredential,
+ CommunicationUserIdentifier,
+ MicrosoftTeamsAppIdentifier,
+} from "@azure/communication-common";
import {
- CallAdapter,
- CallComposite,
- useAzureCommunicationCallAdapter,
- AzureCommunicationCallAdapterArgs
-} from '@azure/communication-react';
-import { useCallback, useMemo } from 'react';
+ CallAdapter,
+ CallAdapterState,
+ CallComposite,
+ CommonCallAdapterOptions,
+ StartCallIdentifier,
+ createAzureCommunicationCallAdapter,
+} from "@azure/communication-react";
+// lets add to our react imports as well
+import { useMemo } from "react";
+
+import { callingWidgetInCallContainerStyles } from "../styles/CallingWidgetComponent.styles";
/** * Properties needed for our widget to start a call. */ export type WidgetAdapterArgs = {
- token: string;
- userId: CommunicationIdentifier;
- teamsAppIdentifier: MicrosoftTeamsAppIdentifier;
+ token: string;
+ userId: CommunicationUserIdentifier;
+ teamsAppIdentifier: MicrosoftTeamsAppIdentifier;
}; export interface CallingWidgetComponentProps {
- /**
- * arguments for creating an AzureCommunicationCallAdapter for your Calling experience
- */
- widgetAdapterArgs: WidgetAdapterArgs;
- /**
- * Custom render function for displaying logo.
- * @returns
- */
- onRenderLogo?: () => JSX.Element;
+ /**
+ * arguments for creating an AzureCommunicationCallAdapter for your Calling experience
+ */
+ widgetAdapterArgs: WidgetAdapterArgs;
+ /**
+ * Custom render function for displaying logo.
+ * @returns
+ */
+ onRenderLogo?: () => JSX.Element;
} /**
export interface CallingWidgetComponentProps {
* @param props */ export const CallingWidgetComponent = (
- props: CallingWidgetComponentProps
+ props: CallingWidgetComponentProps
): JSX.Element => {
- const { onRenderLogo, widgetAdapterArgs } = props;
-
- const [widgetState, setWidgetState] = useState<'new' | 'setup' | 'inCall'>('new');
- const [displayName, setDisplayName] = useState<string>();
- const [consentToData, setConsentToData] = useState<boolean>(false);
- const [useLocalVideo, setUseLocalVideo] = useState<boolean>(false);
+ const { onRenderLogo, widgetAdapterArgs } = props;
- const theme = useTheme();
+ const [widgetState, setWidgetState] = useState<"new" | "setup" | "inCall">(
+ "new"
+ );
+ const [displayName, setDisplayName] = useState<string>();
+ const [consentToData, setConsentToData] = useState<boolean>(false);
+ const [useLocalVideo, setUseLocalVideo] = useState<boolean>(false);
+ const [adapter, setAdapter] = useState<CallAdapter>();
+
+ const callIdRef = useRef<string>();
+
+ const theme = useTheme();
+
+ // add this before the React template
+ const credential = useMemo(() => {
+ try {
+ return new AzureCommunicationTokenCredential(widgetAdapterArgs.token);
+ } catch {
+ console.error("Failed to construct token credential");
+ return undefined;
+ }
+ }, [widgetAdapterArgs.token]);
+
+ const adapterOptions: CommonCallAdapterOptions = useMemo(
+ () => ({
+ callingSounds: {
+ callEnded: { url: "/sounds/callEnded.mp3" },
+ callRinging: { url: "/sounds/callRinging.mp3" },
+ callBusy: { url: "/sounds/callBusy.mp3" },
+ },
+ }),
+ []
+ );
- const credential = useMemo(() => {
- try {
- return new AzureCommunicationTokenCredential(widgetAdapterArgs.token);
- } catch {
- console.error('Failed to construct token credential');
- return undefined;
+ const callAdapterArgs = useMemo(() => {
+ return {
+ userId: widgetAdapterArgs.userId,
+ credential: credential,
+ targetCallees: [
+ widgetAdapterArgs.teamsAppIdentifier,
+ ] as StartCallIdentifier[],
+ displayName: displayName,
+ options: adapterOptions,
+ };
+ }, [
+ widgetAdapterArgs.userId,
+ widgetAdapterArgs.teamsAppIdentifier.teamsAppId,
+ credential,
+ displayName,
+ ]);
+
+ useEffect(() => {
+ if (adapter) {
+ adapter.on("callEnded", () => {
+ /**
+ * We only want to reset the widget state if the call that ended is the same as the current call.
+ */
+ if (
+ adapter.getState().acceptedTransferCallState &&
+ adapter.getState().acceptedTransferCallState?.id !== callIdRef.current
+ ) {
+ return;
}
- }, [widgetAdapterArgs.token]);
-
- const callAdapterArgs = useMemo(() => {
- return {
- userId: widgetAdapterArgs.userId,
- credential: credential,
- locator: {participantIds: [`28:orgid:${widgetAdapterArgs.teamsAppIdentifier.teamsAppId}`]},
- displayName: displayName
+ setDisplayName(undefined);
+ setWidgetState("new");
+ setConsentToData(false);
+ setAdapter(undefined);
+ adapter.dispose();
+ });
+
+ adapter.on("transferAccepted", (e) => {
+ console.log("transferAccepted", e);
+ });
+
+ adapter.onStateChange((state: CallAdapterState) => {
+ if (state?.call?.id && callIdRef.current !== state?.call?.id) {
+ callIdRef.current = state?.call?.id;
+ console.log(`Call Id: ${callIdRef.current}`);
}
- }, [widgetAdapterArgs.userId, widgetAdapterArgs.teamsAppIdentifier.teamsAppId, credential, displayName]);
-
- const afterCreate = useCallback(async (adapter: CallAdapter): Promise<CallAdapter> => {
- adapter.on('callEnded', () => {
- setDisplayName(undefined);
- setWidgetState('new');
- });
- return adapter;
- }, [])
-
- const adapter = useAzureCommunicationCallAdapter(callAdapterArgs as AzureCommunicationCallAdapterArgs, afterCreate);
-
- // Widget template for when widget is open, put any fields here for user information desired
- if (widgetState === 'setup' ) {
- return (
- <Stack styles={callingWidgetSetupContainerStyles(theme)} tokens={{ childrenGap: '1rem' }}>
- <IconButton
- styles={collapseButtonStyles}
- iconProps={{ iconName: 'Dismiss' }}
- onClick={() => setWidgetState('new')} />
- <Stack tokens={{ childrenGap: '1rem' }} styles={logoContainerStyles}>
- <Stack style={{ transform: 'scale(1.8)' }}>{onRenderLogo && onRenderLogo()}</Stack>
- </Stack>
- <TextField
- label={'Name'}
- required={true}
- placeholder={'Enter your name'}
- onChange={(_, newValue) => {
- setDisplayName(newValue);
- }} />
- <Checkbox
- styles={checkboxStyles(theme)}
- label={'Use video - Checking this box will enable camera controls and screen sharing'}
- onChange={(_, checked?: boolean | undefined) => {
- setUseLocalVideo(true);
- }}
- ></Checkbox>
- <Checkbox
- required={true}
- styles={checkboxStyles(theme)}
- label={'By checking this box, you are consenting that we will collect data from the call for customer support reasons'}
- onChange={(_, checked?: boolean | undefined) => {
- setConsentToData(!!checked);
- }}
- ></Checkbox>
- <PrimaryButton
- styles={startCallButtonStyles(theme)}
- onClick={() => {
- if (displayName && consentToData && adapter && widgetAdapterArgs.teamsAppIdentifier) {
- setWidgetState('inCall');
- adapter.startCall([widgetAdapterArgs.teamsAppIdentifier]);
- }
- }}
- >
- StartCall
- </PrimaryButton>
- </Stack>
- );
- }
-
- if (widgetState === 'inCall' && adapter) {
- return (
- <Stack styles={callingWidgetInCallContainerStyles(theme)}>
- <CallComposite
- adapter={adapter}
- options={{
- callControls: {
- cameraButton: useLocalVideo,
- screenShareButton: useLocalVideo,
- moreButton: false,
- peopleButton: false,
- displayType: 'compact'
- },
- localVideoTile: !useLocalVideo ? false : { position: 'floating' }
- }}/>
- </Stack>
- )
+ });
}
+ }, [adapter]);
+ /** widget template for when widget is open, put any fields here for user information desired */
+ if (widgetState === "setup") {
return (
- <Stack
- horizontalAlign="center"
- verticalAlign="center"
- styles={callingWidgetContainerStyles(theme)}
+ <Stack
+ styles={callingWidgetSetupContainerStyles(theme)}
+ tokens={{ childrenGap: "1rem" }}
+ >
+ <IconButton
+ styles={collapseButtonStyles}
+ iconProps={{ iconName: "Dismiss" }}
onClick={() => {
- setWidgetState('setup');
+ setDisplayName(undefined);
+ setConsentToData(false);
+ setUseLocalVideo(false);
+ setWidgetState("new");
}}
- >
- <Stack
- horizontalAlign="center"
- verticalAlign="center"
- style={{ height: '4rem', width: '4rem', borderRadius: '50%', background: theme.palette.themePrimary }}
- >
- <Icon iconName="callAdd" styles={callIconStyles(theme)} />
- </Stack>
+ />
+ <Stack tokens={{ childrenGap: "1rem" }} styles={logoContainerStyles}>
+ <Stack style={{ transform: "scale(1.8)" }}>
+ {onRenderLogo && onRenderLogo()}
+ </Stack>
</Stack>
+ <TextField
+ label={"Name"}
+ required={true}
+ placeholder={"Enter your name"}
+ onChange={(_, newValue) => {
+ setDisplayName(newValue);
+ }}
+ />
+ <Checkbox
+ styles={checkboxStyles(theme)}
+ label={
+ "Use video - Checking this box will enable camera controls and screen sharing"
+ }
+ onChange={(_, checked?: boolean | undefined) => {
+ setUseLocalVideo(!!checked);
+ }}
+ ></Checkbox>
+ <Checkbox
+ required={true}
+ styles={checkboxStyles(theme)}
+ disabled={displayName === undefined}
+ label={
+ "By checking this box, you are consenting that we will collect data from the call for customer support reasons"
+ }
+ onChange={async (_, checked?: boolean | undefined) => {
+ setConsentToData(!!checked);
+ if (callAdapterArgs && callAdapterArgs.credential) {
+ setAdapter(
+ await createAzureCommunicationCallAdapter({
+ displayName: displayName ?? "",
+ userId: callAdapterArgs.userId,
+ credential: callAdapterArgs.credential,
+ targetCallees: callAdapterArgs.targetCallees,
+ options: callAdapterArgs.options,
+ })
+ );
+ }
+ }}
+ ></Checkbox>
+ <PrimaryButton
+ styles={startCallButtonStyles(theme)}
+ onClick={() => {
+ if (displayName && consentToData && adapter) {
+ setWidgetState("inCall");
+ adapter?.startCall(callAdapterArgs.targetCallees, {
+ audioOptions: { muted: false },
+ });
+ }
+ }}
+ >
+ {!consentToData && `Enter your name`}
+ {consentToData && !adapter && (
+ <Spinner ariaLive="assertive" labelPosition="top" />
+ )}
+ {consentToData && adapter && `StartCall`}
+ </PrimaryButton>
+ </Stack>
);
+ }
+
+ if (widgetState === "inCall" && adapter) {
+ return (
+ <Stack styles={callingWidgetInCallContainerStyles(theme)}>
+ <CallComposite
+ adapter={adapter}
+ options={{
+ callControls: {
+ cameraButton: useLocalVideo,
+ screenShareButton: useLocalVideo,
+ moreButton: false,
+ peopleButton: false,
+ displayType: "compact",
+ },
+ localVideoTile: !useLocalVideo ? false : { position: "floating" },
+ }}
+ />
+ </Stack>
+ );
+ }
+
+ return (
+ <Stack
+ horizontalAlign="center"
+ verticalAlign="center"
+ styles={callingWidgetContainerStyles(theme)}
+ onClick={() => {
+ setWidgetState("setup");
+ }}
+ >
+ <Stack
+ horizontalAlign="center"
+ verticalAlign="center"
+ style={{
+ height: "4rem",
+ width: "4rem",
+ borderRadius: "50%",
+ background: theme.palette.themePrimary,
+ }}
+ >
+ <Icon iconName="callAdd" styles={callIconStyles(theme)} />
+ </Stack>
+ </Stack>
+ );
}; ``` #### Style the widget
-We need to write some styles to make sure the widget looks appropriate and can hold our call composite. These styles should already be used in the widget if copying the snippet above.
+We need to write some styles to make sure the widget looks appropriate and can hold our call composite. These styles are already referenced by the widget if you copied the snippet that we added to the file `CallingWidgetComponent.tsx`.
-lets make a new folder called `src/styles` in this folder create a file called `CallingWidgetComponent.styles.ts`. The file should look like the following snippet:
+Let's make a new folder called `src/styles`. In this folder, create a file called `CallingWidgetComponent.styles.ts`. The file should look like the following snippet:
```ts
-import { IButtonStyles, ICheckboxStyles, IIconStyles, IStackStyles, Theme } from '@fluentui/react';
+import {
+ IButtonStyles,
+ ICheckboxStyles,
+ IIconStyles,
+ IStackStyles,
+ Theme,
+} from "@fluentui/react";
export const checkboxStyles = (theme: Theme): ICheckboxStyles => { return {
export const callingWidgetContainerStyles = (theme: Theme): IStackStyles => {
}; };
-export const callingWidgetSetupContainerStyles = (theme: Theme): IStackStyles => {
+export const callingWidgetSetupContainerStyles = (
+ theme: Theme
+): IStackStyles => {
return { root: { width: "18rem",
export const callingWidgetSetupContainerStyles = (theme: Theme): IStackStyles =>
position: "absolute", overflow: "hidden", cursor: "pointer",
- background: theme.palette.white
+ background: theme.palette.white,
}, }; };
export const collapseButtonStyles: IButtonStyles = {
}, };
-export const callingWidgetInCallContainerStyles = (theme: Theme): IStackStyles => {
+export const callingWidgetInCallContainerStyles = (
+ theme: Theme
+): IStackStyles => {
return { root: {
- width: '35rem',
- height: '25rem',
- padding: '0.5rem',
+ width: "35rem",
+ height: "25rem",
+ padding: "0.5rem",
boxShadow: theme.effects.elevation16, borderRadius: theme.effects.roundedCorner6, bottom: 0,
- right: '1rem',
- position: 'absolute',
- overflow: 'hidden',
- cursor: 'pointer',
- background: theme.semanticColors.bodyBackground
- }
- }
-}
+ right: "1rem",
+ position: "absolute",
+ overflow: "hidden",
+ cursor: "pointer",
+ background: theme.semanticColors.bodyBackground,
+ },
+ };
+};
``` ### Run the app
Then, when you select the widget button, you should see a little menu:
![Screenshot of calling widget sample app home page widget open.](../media/calling-widget/sample-app-splash-widget-open.png)
-after you fill out your name click start call and the call should begin. The widget should look like so after starting a call:
+After you fill out your name, select **Start Call**, and the call should begin. The widget should look like the following after you start a call:
![Screenshot of click to call sample app home page with calling experience embedded in widget.](../media/calling-widget/calling-widget-embedded-start.png) ## Next steps
-If you haven't had the chance, check out our documentation on Teams auto attendants and Teams call queues.
+For more information about Teams voice applications, check out our documentation on Teams auto attendants and Teams call queues.
> [!div class="nextstepaction"]
confidential-computing Virtual Machine Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions.md
Confidential VMs run on specialized hardware, so you can only [resize confidenti
It's not possible to resize a non-confidential VM to a confidential VM.
-### Disk encryption
+### Guest operating system support
OS images for confidential VMs have to meet certain security and compatibility requirements. Qualified images support the secure mounting, attestation, optional [confidential OS disk encryption](confidential-vm-overview.md#confidential-os-disk-encryption), and isolation from underlying cloud infrastructure. These images include: - Ubuntu 20.04 LTS (AMD SEV-SNP supported only) - Ubuntu 22.04 LTS
+- Red Hat Enterprise Linux 9.3 (AMD SEV-SNP supported only)
- Windows Server 2019 Datacenter - x64 Gen 2 (AMD SEV-SNP supported only) - Windows Server 2019 Datacenter Server Core - x64 Gen 2 (AMD SEV-SNP supported only) - Windows Server 2022 Datacenter - x64 Gen 2
OS images for confidential VMs have to meet certain security and compatibility r
- Windows 11 Enterprise, version 22H2 -x64 Gen 2 - Windows 11 Enterprise multi-session, version 22H2 -x64 Gen 2
+As we work to onboard more OS images with confidential OS disk encryption, there are various images available in early preview that can be tested. You can sign up below:
+
+- [Red Hat Enterprise Linux 9.3 (Support for Intel TDX)](https://aka.ms/tdx-rhel-93-preview)
+- [SUSE Enterprise Linux 15 SP5 (Support for Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview)
+- [SUSE Enterprise Linux 15 SAP SP5 (Support for Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview)
+ For more information about supported and unsupported VM scenarios, see [support for generation 2 VMs on Azure](../virtual-machines/generation-2.md). ### High availability and disaster recovery
Azure Resource Manager is the deployment and management service for Azure. You c
Make sure to specify the following properties for your VM in the parameters section (`parameters`): - VM size (`vmSize`). Choose from the different [confidential VM families and sizes](#sizes).-- OS image name (`osImageName`). Choose from the [qualified OS images](#disk-encryption).
+- OS image name (`osImageName`). Choose from the qualified OS images.
- Disk encryption type (`securityType`). Choose from VMGS-only encryption (`VMGuestStateOnly`) or full OS disk pre-encryption (`DiskWithVMGuestState`), which might result in longer provisioning times. For Intel TDX instances only, we also support another security type (`NonPersistedTPM`), which has no VMGS or OS disk encryption. ## Next steps
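A minimal sketch of passing these parameters to a template deployment with the Azure CLI follows; the template file name and the parameter values shown are placeholders and assumptions, not values from this article:

```azurecli
# Hypothetical deployment of a confidential VM template.
# Parameter names (vmSize, osImageName, securityType) come from the list above;
# the values and the template file name are placeholders.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file confidential-vm-template.json \
  --parameters vmSize=Standard_DC4as_v5 \
               osImageName="Ubuntu 22.04 LTS" \
               securityType=DiskWithVMGuestState
```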
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
ms.suite: integration
Previously updated : 02/12/2024 Last updated : 02/15/2024 # Built-in connectors in Azure Logic Apps
Built-in connectors provide ways for you to control your workflow's schedule and
For a smaller number of services, systems, and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multitenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app workflow type and not the other.
-For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob Storage, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server. A Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard workflow doesn't have these built-in connectors.
+For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob Storage, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, FTP, IBM DB2, IBM MQ, SFTP, and SQL Server. A Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management and Azure App Services, while a Standard workflow doesn't have these built-in connectors.
Also, in Standard workflows, some [built-in connectors with specific attributes are informally known as *service providers*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Microsoft Entra ID, or a managed identity. All built-in connectors run in the same process as the Azure Logic Apps runtime. For more information, review [Single-tenant versus multitenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
The following table lists the current and expanding galleries of built-in connec
| Consumption | Standard | |-|-|
-| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | AS2 (v2) <br>Azure Automation* <br>Azure Blob Storage* <br>Azure Cosmos DB* <br>Azure File Storage* <br>Azure Functions <br>Azure Queue Storage* <br>Azure Table Storage* <br>Control <br>Data Operations <br>Date Time <br>DB2* <br>Event Grid Publisher* <br>Event Hubs* <br>File System* <br>Flat File <br>FTP* <br>HTTP <br>IBM Host File* <br>Inline Code <br>JDBC* <br>Key Vault* <br>Liquid operations <br>MQ* <br>Request <br>SAP* <br>Schedule <br>Service Bus* <br>SFTP* <br>SMTP* <br>SQL Server* <br>Variables <br>Workflow operations <br>XML operations |
+| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | AS2 (v2) <br>Azure AI Search* <br>Azure Automation* <br>Azure Blob Storage* <br>Azure Cosmos DB* <br>Azure Event Grid Publisher* <br>Azure Event Hubs* <br>Azure File Storage* <br>Azure Functions <br>Azure Key Vault* <br>Azure OpenAI* <br>Azure Queue Storage* <br>Azure Service Bus* <br>Azure Table Storage* <br>Batch Operations <br>Control <br>Data Mapper Operations <br>Data Operations <br>Date Time <br>EDIFACT <br>File System* <br>Flat File <br>FTP* <br>HTTP <br>IBM 3270* <br>IBM CICS* <br>IBM DB2* <br>IBM Host File* <br>IBM IMS* <br>IBM MQ* <br>Inline Code <br>Integration Account <br>JDBC* <br>Liquid Operations <br>Request <br>RosettaNet <br>SAP* <br>Schedule <br>SFTP* <br>SMTP* <br>SQL Server* <br>SWIFT <br>Variables <br>Workflow Operations <br>X12 <br>XML Operations |
<a name="service-provider-interface-implementation"></a>
You can use the following built-in connectors to perform general tasks, for exam
[![Batch icon][batch-icon]][batch-doc] \ \
- [**Batch**][batch-doc]<br>(*Consumption workflow only*)
+ [**Batch**][batch-doc]
\ \ [**Batch messages**][batch-doc]: Trigger a workflow that processes messages in batches.
You can use the following built-in connectors to perform general tasks, for exam
You can use the following built-in connectors to access specific services and systems. In Standard workflows, some of these built-in connectors are also informally known as *service providers*, which can differ from their managed connector counterparts in some ways. :::row:::
+ :::column:::
+ [![Azure AI Search icon][azure-ai-search-icon]][azure-ai-search-doc]
+ \
+ \
+ [**Azure AI Search**][azure-ai-search-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Connect to AI Search so that you can perform document indexing and search operations in your workflow.
+ :::column-end:::
:::column::: [![Azure API Management icon][azure-api-management-icon]][azure-api-management-doc] \
You can use the following built-in connectors to access specific services and sy
\ Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents. :::column-end:::
+ :::column:::
+ [![Azure Event Grid Publisher icon][azure-event-grid-publisher-icon]][azure-event-grid-publisher-doc]
+ \
+ \
+ [**Azure Event Grid Publisher**][azure-event-grid-publisher-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Connect to Azure Event Grid for event-based programming using pub-sub semantics.
+ :::column-end:::
:::column::: [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc] \
You can use the following built-in connectors to access specific services and sy
[![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc] \ \
- [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption workflow*) <br><br>-or-<br><br>**Workflow operations**<br>(*Standard workflow*)
+ [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption workflow*) <br><br>-or-<br><br>**Workflow Operations**<br>(*Standard workflow*)
\ \ Call other workflows that start with the Request trigger named **When a HTTP request is received**. :::column-end:::
+ :::column:::
+ [![Azure OpenAI icon][azure-openai-icon]][azure-openai-doc]
+ \
+ \
+ [**Azure OpenAI**][azure-openai-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Connect to Azure OpenAI to perform operations on large language models.
+ :::column-end:::
:::column::: [![Azure Service Bus icon][azure-service-bus-icon]][azure-service-bus-doc] \
You can use the following built-in connectors to access specific services and sy
\ Connect to your Azure Storage account so that you can create, update, and manage queues. :::column-end:::
+ :::column:::
+ [![IBM 3270 icon][ibm-3270-icon]][ibm-3270-doc]
+ \
+ \
+ [**IBM 3270**][ibm-3270-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Call 3270 screen-driven apps on IBM mainframes from your workflow.
+ :::column-end:::
+ :::column:::
+ [![IBM CICS icon][ibm-cics-icon]][ibm-cics-doc]
+ \
+ \
+ [**IBM CICS**][ibm-cics-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Call CICS programs on IBM mainframes from your workflow.
+ :::column-end:::
:::column::: [![IBM DB2 icon][ibm-db2-icon]][ibm-db2-doc] \
You can use the following built-in connectors to access specific services and sy
\ Connect to IBM Host File and generate or parse contents. :::column-end:::
+ :::column:::
+ [![IBM IMS icon][ibm-ims-icon]][ibm-ims-doc]
+ \
+ \
+ [**IBM IMS**][ibm-ims-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Call IMS programs on IBM mainframes from your workflow.
+ :::column-end:::
:::column::: [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc] \
You can use the following built-in connectors to access specific services and sy
\ Connect to IBM MQ on-premises or in Azure to send and receive messages. :::column-end::: :::column::: [![JDBC icon][jdbc-icon]][jdbc-doc] \
You can use the following built-in connectors to access specific services and sy
\ Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries. :::column-end:::
- :::column:::
- :::column-end:::
:::row-end::: ## Run code from workflows
Azure Logic Apps provides the following built-in actions for running your own co
[**Inline Code**][inline-code-doc] \ \
- [**Execute JavaScript Code**][inline-code-doc]: Add and run your own inline JavaScript *code snippets* within your workflow.
+ [Add and run inline JavaScript code snippets](../logic-apps/logic-apps-add-run-inline-code.md) from your workflow.
:::column-end::: :::column:::
+ [![Local Function Operations icon][local-function-icon]][local-function-doc]
+ \
+ \
+ [**Local Function Operations**][local-function-doc]<br>(*Standard workflow only*)
+ \
+ \
+ [Create and run .NET Framework code](../logic-apps/create-run-custom-code-functions.md) from your workflow.
:::column-end::: :::column::: :::column-end:::
Azure Logic Apps provides the following built-in actions for structuring and con
[![Scope action icon][scope-icon]][scope-doc] \ \
- [**Name**][scope-doc]
+ [**Scope**][scope-doc]
\ \ Group actions into *scopes*, which get their own status after the actions in the scope finish running.
Azure Logic Apps provides the following built-in actions for working with data o
:::column-end::: :::row-end:::
-<a name="integration-account-built-in"></a>
+<a name="b2b-built-in-operations"></a>
-## Integration account built-in connectors
+## Business-to-business (B2B) built-in operations
-Integration account operations support business-to-business (B2B) communication scenarios in Azure Logic Apps. After you create an integration account and define your B2B artifacts, such as trading partners, agreements, and others, you can use integration account built-in actions to encode and decode messages, transform content, and more.
+Azure Logic Apps supports business-to-business (B2B) communication scenarios through various B2B built-in operations. Based on whether you have a Consumption or Standard workflow and the B2B operations that you want to use, [you might have to create and link an integration account to your logic app resource](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md). You then use this integration account to define your B2B artifacts, such as trading partners, agreements, maps, schemas, certificates, and so on.
* Consumption workflows
- Before you use any integration account operations in a workflow, [link your logic app resource to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
+ Before you can use any B2B operations in a workflow, [you must create and link an integration account to your logic app resource](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md). After you create your integration account, you must then define your B2B artifacts, such as trading partners, agreements, maps, schemas, certificates, and so on. You can then use the B2B operations to encode and decode messages, transform content, and more.
* Standard workflows
- While most integration account operations don't require that you link your logic app resource to your integration account, linking lets you share artifacts across multiple Standard workflows and their child workflows. Based on the integration account operation that you want to use, complete one of the following steps before you use the operation:
+ Some B2B operations require that you [create and link an integration account to your logic app resource](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md). Linking lets you share artifacts across multiple Standard workflows and their child workflows. Based on the B2B operation that you want to use, complete one of the following steps before you use the operation:
* For operations that require maps or schemas, you can either:
For more information, review the following documentation:
:::row::: :::column:::
- [![AS2 Decode v2 icon][as2-v2-icon]][as2-doc]
+ [![AS2 v2 icon][as2-v2-icon]][as2-doc]
+ \
+ \
+ [**AS2 (v2)**][as2-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Encode and decode messages that use the AS2 protocol.
+ :::column-end:::
+ :::column:::
+ [![EDIFACT icon][edifact-icon]][edifact-doc]
\ \
- [**AS2 Decode (v2)**][as2-doc]<br>(*Standard workflow only*)
+ [**EDIFACT**][edifact-doc]
\ \
- Decode messages received using the AS2 protocol.
+ Encode and decode messages that use the EDIFACT protocol.
:::column-end::: :::column:::
- [![AS2 Encode (v2) icon][as2-v2-icon]][as2-doc]
+ [![Flat File icon][flat-file-icon]][flat-file-doc]
\ \
- [**AS2 Encode (v2)**][as2-doc]<br>(*Standard workflow only*)
+ [**Flat File**][flat-file-doc]
\ \
- Encode messages sent using the AS2 protocol.
+ Encode and decode XML messages between trading partners.
:::column-end::: :::column:::
- [![Flat file decoding icon][flat-file-decode-icon]][flat-file-decode-doc]
+ [![Integration account icon][integration-account-icon]][integration-account-doc]
\ \
- [**Flat file decoding**][flat-file-decode-doc]
+ [**Integration Account Artifact Lookup**][integration-account-doc]
\ \
- Encode XML before sending the content to a trading partner.
+ Get custom metadata for artifacts, such as trading partners, agreements, schemas, and so on, in your integration account.
:::column-end::: :::column:::
- [![Flat file encoding icon][flat-file-encode-icon]][flat-file-encode-doc]
+ [![Liquid Operations icon][liquid-icon]][liquid-transform-doc]
\ \
- [**Flat file encoding**][flat-file-encode-doc]
+ [**Liquid Operations**][liquid-transform-doc]
\ \
- Decode XML after receiving the content from a trading partner.
+ Convert the following formats by using Liquid templates: <br><br>- JSON to JSON <br>- JSON to TEXT <br>- XML to JSON <br>- XML to TEXT
:::column-end::: :::row-end::: :::row::: :::column:::
- [![Integration account icon][integration-account-icon]][integration-account-doc]
+ [![RosettaNet icon][rosettanet-icon]][rosettanet-doc]
\ \
- [**Integration Account Artifact Lookup**][integration-account-doc]<br>(*Consumption workflow only*)
+ [**RosettaNet**][rosettanet-doc]
\ \
- Get custom metadata for artifacts, such as trading partners, agreements, schemas, and so on, in your integration account.
+ Encode and decode messages that use the RosettaNet protocol.
:::column-end::: :::column:::
- [![Liquid operations icon][liquid-icon]][json-liquid-transform-doc]
+ [![SWIFT icon][swift-icon]][swift-doc]
\ \
- [**Liquid operations**][json-liquid-transform-doc]
+ [**SWIFT**][swift-doc]<br>(*Standard workflow only*)
\ \
- Convert the following formats by using Liquid templates: <br><br>- JSON to JSON <br>- JSON to TEXT <br>- XML to JSON <br>- XML to TEXT
+ Encode and decode Society for Worldwide Interbank Financial Telecommunication (SWIFT) transactions in flat-file XML message format.
:::column-end::: :::column::: [![Transform XML icon][xml-transform-icon]][xml-transform-doc]
For more information, review the following documentation:
\ Convert the source XML format to another XML format. :::column-end:::
+ :::column:::
+ [![X12 icon][x12-icon]][x12-doc]
+ \
+ \
+ [**X12**][x12-doc]
+ \
+ \
+ Encode and decode messages that use the X12 protocol.
+ :::column-end:::
:::column::: [![XML validation icon][xml-validate-icon]][xml-validate-doc] \ \
- [**XML validation**][xml-validate-doc]
+ [**XML Validation**][xml-validate-doc]
\ \ Validate XML documents against the specified schema.
For more information, review the following documentation:
> [Create custom APIs that you can call from Azure Logic Apps](../logic-apps/logic-apps-create-api-app.md) <!-- Built-in icons -->
+[azure-ai-search-icon]: ./media/apis-list/azure-ai-search.png
[azure-api-management-icon]: ./media/apis-list/azure-api-management.png [azure-app-services-icon]: ./media/apis-list/azure-app-services.png [azure-automation-icon]: ./media/apis-list/azure-automation.png [azure-blob-storage-icon]: ./media/apis-list/azure-blob-storage.png [azure-cosmos-db-icon]: ./media/apis-list/azure-cosmos-db.png
+[azure-event-grid-publisher-icon]: ./media/apis-list/azure-event-grid-publisher.png
[azure-event-hubs-icon]: ./media/apis-list/azure-event-hubs.png [azure-file-storage-icon]: ./media/apis-list/azure-file-storage.png [azure-functions-icon]: ./media/apis-list/azure-functions.png [azure-key-vault-icon]: ./media/apis-list/azure-key-vault.png [azure-logic-apps-icon]: ./media/apis-list/azure-logic-apps.png
-[azure-queue-storage-icon]: ./media/apis-list/azure-queues.png
+[azure-openai-icon]: ./media/apis-list/azure-openai.png
+[azure-queue-storage-icon]: ./media/apis-list/azure-queue-storage.png
[azure-service-bus-icon]: ./media/apis-list/azure-service-bus.png [azure-table-storage-icon]: ./media/apis-list/azure-table-storage.png [batch-icon]: ./media/apis-list/batch.png
For more information, review the following documentation:
[http-response-icon]: ./media/apis-list/response.png [http-swagger-icon]: ./media/apis-list/http-swagger.png [http-webhook-icon]: ./media/apis-list/http-webhook.png
+[ibm-3270-icon]: ./media/apis-list/ibm-3270.png
+[ibm-cics-icon]: ./media/apis-list/ibm-cics.png
[ibm-db2-icon]: ./media/apis-list/ibm-db2.png [ibm-host-file-icon]: ./media/apis-list/ibm-host-file.png
+[ibm-ims-icon]: ./media/apis-list/ibm-ims.png
[ibm-mq-icon]: ./media/apis-list/ibm-mq.png [inline-code-icon]: ./media/apis-list/inline-code.png [jdbc-icon]: ./media/apis-list/jdbc.png
+[local-function-icon]: ./media/apis-list/local-function.png
[sap-icon]: ./media/apis-list/sap.png [schedule-icon]: ./media/apis-list/recurrence.png [scope-icon]: ./media/apis-list/scope.png [sftp-ssh-icon]: ./media/apis-list/sftp.png [smtp-icon]: ./media/apis-list/smtp.png [sql-server-icon]: ./media/apis-list/sql.png
+[swift-icon]: ./media/apis-list/swift.png
[switch-icon]: ./media/apis-list/switch.png [terminate-icon]: ./media/apis-list/terminate.png [until-icon]: ./media/apis-list/until.png [variables-icon]: ./media/apis-list/variables.png
-<!--Built-in integration account connector icons -->
+<!--B2B built-in operation icons -->
[as2-v2-icon]: ./media/apis-list/as2-v2.png
-[flat-file-encode-icon]: ./media/apis-list/flat-file-encoding.png
-[flat-file-decode-icon]: ./media/apis-list/flat-file-decoding.png
+[edifact-icon]: ./media/apis-list/edifact.png
+[flat-file-icon]: ./media/apis-list/flat-file-decoding.png
[integration-account-icon]: ./media/apis-list/integration-account.png [liquid-icon]: ./media/apis-list/liquid-transform.png
+[rosettanet-icon]: ./media/apis-list/rosettanet.png
+[x12-icon]: ./media/apis-list/x12.png
[xml-transform-icon]: ./media/apis-list/xml-transform.png [xml-validate-icon]: ./media/apis-list/xml-validation.png <!--Built-in doc links-->
+[azure-ai-search-doc]: https://techcommunity.microsoft.com/t5/azure-integration-services-blog/public-preview-of-azure-openai-and-ai-search-in-app-connectors/ba-p/4049584 "Connect to AI Search so that you can perform document indexing and search operations in your workflow"
[azure-api-management-doc]: ../api-management/get-started-create-service-instance.md "Create an Azure API Management service instance for managing and publishing your APIs" [azure-app-services-doc]: ../logic-apps/logic-apps-custom-api-host-deploy-call.md "Integrate logic app workflows with App Service API Apps" [azure-automation-doc]: /azure/logic-apps/connectors/built-in/reference/azureautomation/ "Connect to your Azure Automation accounts so you can create and manage Azure Automation jobs" [azure-blob-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azureblob/ "Manage files in your blob container with Azure Blob storage" [azure-cosmos-db-doc]: /azure/logic-apps/connectors/built-in/reference/azurecosmosdb/ "Connect to Azure Cosmos DB so you can access and manage Azure Cosmos DB documents"
+[azure-event-grid-publisher-doc]: /azure/logic-apps/connectors/built-in/reference/eventgridpublisher/ "Connect to Azure Event Grid for event-based programming using pub-sub semantics"
[azure-event-hubs-doc]: /azure/logic-apps/connectors/built-in/reference/eventhub/ "Connect to Azure Event Hubs so that you can receive and send events between logic app workflows and Event Hubs" [azure-file-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azurefile/ "Connect to Azure File Storage so you can create and manage files in your Azure storage account" [azure-functions-doc]: ../logic-apps/logic-apps-azure-functions.md "Integrate logic app workflows with Azure Functions" [azure-key-vault-doc]: /azure/logic-apps/connectors/built-in/reference/keyvault/ "Connect to Azure Key Vault to securely store, access, and manage secrets"
+[azure-openai-doc]: https://techcommunity.microsoft.com/t5/azure-integration-services-blog/public-preview-of-azure-openai-and-ai-search-in-app-connectors/ba-p/4049584 "Connect to Azure OpenAI to perform operations on large language models"
[azure-queue-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azurequeues/ "Connect to Azure Storage so you can create and manage queue entries and queues" [azure-service-bus-doc]: /azure/logic-apps/connectors/built-in/reference/servicebus/ "Manage messages from Service Bus queues, topics, and topic subscriptions" [azure-table-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azuretables/ "Connect to Azure Storage so you can create, update, and query tables and more" [batch-doc]: ../logic-apps/logic-apps-batch-process-send-receive-messages.md "Process messages in groups, or as batches" [condition-doc]: ../logic-apps/logic-apps-control-flow-conditional-statement.md "Evaluate a condition and run different actions based on whether the condition is true or false" [data-operations-doc]: ../logic-apps/logic-apps-perform-data-operations.md "Perform data operations such as filtering arrays or creating CSV and HTML tables"
-[event-grid-publisher-doc]: /azure/logic-apps/connectors/built-in/reference/eventgridpublisher/ "Connect to Azure Event Grid for event-based programming using pub-sub semantics"
[file-system-doc]: /azure/logic-apps/connectors/built-in/reference/filesystem/ "Connect to a file system on your network machine to create and manage files" [for-each-doc]: ../logic-apps/logic-apps-control-flow-loops.md#foreach-loop "Perform the same actions on every item in an array" [ftp-doc]: /azure/logic-apps/connectors/built-in/reference/ftp/ "Connect to an FTP or FTPS server for FTP tasks, like uploading, getting, deleting files, and more"
For more information, review the following documentation:
[http-response-doc]: ./connectors-native-reqres.md "Respond to HTTP requests from your logic app workflows" [http-swagger-doc]: ./connectors-native-http-swagger.md "Call REST endpoints from your logic app workflows" [http-webhook-doc]: ./connectors-native-webhook.md "Wait for specific events from HTTP or HTTPS endpoints"
+[ibm-3270-doc]: /azure/connectors/integrate-3270-apps-ibm-mainframe?tabs=standard "Integrate IBM 3270 screen-driven apps with Azure"
+[ibm-cics-doc]: /azure/connectors/integrate-cics-apps-ibm-mainframe "Integrate CICS programs on IBM mainframes with Azure"
[ibm-db2-doc]: /azure/logic-apps/connectors/built-in/reference/db2/ "Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more" [ibm-host-file-doc]: /azure/logic-apps/connectors/built-in/reference/hostfile/ "Connect to your IBM host to work with offline files"
+[ibm-ims-doc]: /azure/connectors/integrate-ims-apps-ibm-mainframe "Integrate IMS programs on IBM mainframes with Azure"
[ibm-mq-doc]: /azure/logic-apps/connectors/built-in/reference/mq/ "Connect to IBM MQ on-premises or in Azure to send and receive messages" [inline-code-doc]: ../logic-apps/logic-apps-add-run-inline-code.md "Add and run JavaScript code snippets from your logic app workflows" [jdbc-doc]: /azure/logic-apps/connectors/built-in/reference/jdbc/ "Connect to a relational database using JDBC drivers"
+[local-function-doc]: ../logic-apps/create-run-custom-code-functions.md "Create and run .NET Framework code from your workflow"
[nested-logic-app-doc]: ../logic-apps/logic-apps-http-endpoint.md "Integrate logic app workflows with nested workflows" [query-doc]: ../logic-apps/logic-apps-perform-data-operations.md#filter-array-action "Select and filter arrays with the Query action" [sap-doc]: /azure/logic-apps/connectors/built-in/reference/sap/ "Connect to SAP so you can send or receive messages and invoke actions"
For more information, review the following documentation:
[smtp-doc]: /azure/logic-apps/connectors/built-in/reference/smtp/ "Connect to your SMTP server so you can send email" [sql-server-doc]: /azure/logic-apps/connectors/built-in/reference/sql/ "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in an SQL database table" [switch-doc]: ../logic-apps/logic-apps-control-flow-switch-statement.md "Organize actions into cases, which are assigned unique values. Run only the case whose value matches the result from an expression, object, or token. If no matches exist, run the default case"
+[swift-doc]: https://techcommunity.microsoft.com/t5/azure-integration-services-blog/announcement-public-preview-of-swift-message-processing-using/ba-p/3670014 "Encode and decode SWIFT transactions in flat-file XML format"
[terminate-doc]: ../logic-apps/logic-apps-workflow-actions-triggers.md#terminate-action "Stop or cancel an actively running workflow for your logic app workflow" [until-doc]: ../logic-apps/logic-apps-control-flow-loops.md#until-loop "Repeat actions until the specified condition is true or some state has changed" [variables-doc]: ../logic-apps/logic-apps-create-variables-store-values.md "Perform operations with variables, such as initialize, set, increment, decrement, and append to string or array variable"
-<!--Built-in integration account doc links-->
+<!--B2B built-in operation doc links-->
[as2-doc]: ../logic-apps/logic-apps-enterprise-integration-as2.md "Encode and decode messages that use the AS2 protocol"
-[flat-file-decode-doc]:../logic-apps/logic-apps-enterprise-integration-flatfile.md "Decode XML content with a flat file schema"
-[flat-file-encode-doc]:../logic-apps/logic-apps-enterprise-integration-flatfile.md "Encode XML content with a flat file schema"
+[edifact-doc]: ../logic-apps/logic-apps-enterprise-integration-edifact.md "Encode and decode messages that use the EDIFACT protocol"
+[flat-file-doc]:../logic-apps/logic-apps-enterprise-integration-flatfile.md "Encode and decode XML content with a flat file schema"
[integration-account-doc]: ../logic-apps/logic-apps-enterprise-integration-metadata.md "Manage metadata for integration account artifacts"
-[json-liquid-transform-doc]: ../logic-apps/logic-apps-enterprise-integration-liquid-transform.md "Transform JSON or XML content with Liquid templates"
+[liquid-transform-doc]: ../logic-apps/logic-apps-enterprise-integration-liquid-transform.md "Transform JSON or XML content with Liquid templates"
+[rosettanet-doc]: ../logic-apps/logic-apps-enterprise-integration-rosettanet.md "Exchange RosettaNet messages in your workflow"
+[x12-doc]: ../logic-apps/logic-apps-enterprise-integration-x12.md "Encode and decode messages that use the X12 protocol"
[xml-transform-doc]: ../logic-apps/logic-apps-enterprise-integration-transform.md "Transform XML content" [xml-validate-doc]: ../logic-apps/logic-apps-enterprise-integration-xml-validation.md "Validate XML content"
connectors Connect Common Data Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connect-common-data-service.md
Last updated 12/14/2023
-tags: connectors
# Connect to Microsoft Dataverse (previously Common Data Service) from workflows in Azure Logic Apps
connectors Connectors Azure Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-application-insights.md
ms.suite: integration
Last updated 01/10/2024
-tags: connectors
# Customer intent: As a developer, I want to get telemetry from an Application Insights resource to use with my workflow in Azure Logic Apps.
connectors Connectors Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-monitor-logs.md
ms.suite: integration
Last updated 02/08/2024
-tags: connectors
# Customer intent: As a developer, I want to get log data from my Log Analytics workspace or telemetry from my Application Insights resource to use with my workflow in Azure Logic Apps.
connectors Connectors Create Api Azure Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azure-event-hubs.md
ms.suite: integration
Last updated 01/04/2024
-tags: connectors
# Connect to an event hub from workflows in Azure Logic Apps
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
ms.suite: integration
Last updated 01/18/2024
-tags: connectors
# Connect to Azure Blob Storage from workflows in Azure Logic Apps
connectors Connectors Create Api Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-container-instances.md
ms.
-tags: connectors
Last updated 01/04/2024
connectors Connectors Create Api Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-cosmos-db.md
Last updated 01/04/2024
-tags: connectors
# Process and create Azure Cosmos DB documents using Azure Logic Apps
connectors Connectors Create Api Crmonline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-crmonline.md
ms.suite: integration
Last updated 01/04/2024
-tags: connectors
# Connect to Dynamics 365 from workflows in Azure Logic Apps (Deprecated)
connectors Connectors Create Api Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-db2.md
ms.suite: integration
Last updated 01/04/2024
-tags: connectors
# Access and manage IBM DB2 resources by using Azure Logic Apps
connectors Connectors Create Api Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-ftp.md
ms.suite: integration
Last updated 01/04/2024
-tags: connectors
# Connect to an FTP server from workflows in Azure Logic Apps
connectors Connectors Create Api Informix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-informix.md
Last updated 01/04/2024
-tags: connectors
# Manage IBM Informix database resources by using Azure Logic Apps
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md
Last updated 01/10/2024
-tags: connectors
# Connect to an IBM MQ server from a workflow in Azure Logic Apps
connectors Connectors Create Api Office365 Outlook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-office365-outlook.md
ms.suite: integration
Last updated 01/10/2024
-tags: connectors
# Connect to Office 365 Outlook from Azure Logic Apps
connectors Connectors Create Api Oracledatabase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-oracledatabase.md
ms.suite: integration
Last updated 01/04/2024
-tags: connectors
# Connect to Oracle Database from Azure Logic Apps
connectors Connectors Create Api Servicebus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md
Last updated 12/12/2023
-tags: connectors
# Connect to Azure Service Bus from workflows in Azure Logic Apps
connectors Connectors Create Api Smtp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-smtp.md
ms.suite: integration
Last updated 01/04/2024
-tags: connectors
# Connect to your SMTP account from Azure Logic Apps
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
ms.suite: integration
Last updated 01/10/2024
-tags: connectors
## As a developer, I want to access my SQL database from my logic app workflow.
connectors Connectors Integrate Security Operations Create Api Microsoft Graph Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-integrate-security-operations-create-api-microsoft-graph-security.md
Last updated 01/04/2024
-tags: connectors
# Improve threat protection by integrating security operations with Microsoft Graph Security & Azure Logic Apps
connectors Connectors Native Delay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-delay.md
ms.suite: integration
Last updated 01/04/2024
-tags: connectors
# Delay running the next action in Azure Logic Apps
connectors Connectors Native Http Swagger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http-swagger.md
ms.suite: integration
Last updated 12/13/2023
-tags: connectors
# Connect or call REST API endpoints from workflows in Azure Logic Apps
connectors Connectors Native Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-http.md
ms.suite: integration
Last updated 01/22/2024
-tags: connectors
# Call external HTTP or HTTPS endpoints from workflows in Azure Logic Apps
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
ms.suite: integration
ms.reviewers: estfan, azla Last updated 01/10/2024
-tags: connectors
# Receive and respond to inbound HTTPS calls to workflows in Azure Logic Apps
connectors Connectors Native Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-webhook.md
ms.suite: integration
Last updated 02/09/2024
-tags: connectors
# Subscribe and wait for events to run workflows using HTTP webhooks in Azure Logic Apps
connectors Integrate 3270 Apps Ibm Mainframe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/integrate-3270-apps-ibm-mainframe.md
Last updated 11/02/2023
-tags: connectors
# Integrate 3270 screen-driven apps on IBM mainframes with Azure using Azure Logic Apps and IBM 3270 connector
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md
Last updated 01/04/2024
Managed connectors provide ways for you to access other services and systems where built-in connectors aren't available. You can use these triggers and actions to create workflows that integrate data, apps, cloud-based services, and on-premises systems. Different from built-in connectors, managed connectors are usually tied to a specific service or system such as Office 365, SharePoint, Azure Key Vault, Salesforce, Azure Automation, and so on. Managed by Microsoft and hosted in Azure, managed connectors usually require that you first create a connection from your workflow and authenticate your identity.
-For a smaller number of services, systems and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multi-tenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app workflow type, and not the other.
+For a smaller number of services, systems, and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multitenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app workflow type, and not the other.
-For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server, while a Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard workflow doesn't have these built-in connectors. For more information, review [Built-in connectors in Azure Logic Apps](built-in.md) and [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
+For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server, while a Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard workflow doesn't have these built-in connectors. For more information, review [Built-in connectors in Azure Logic Apps](built-in.md) and [Single-tenant versus multitenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
This article provides a general overview about managed connectors and the way they're organized in the Consumption workflow designer versus the Standard workflow designer with examples. For technical reference information about each managed connector in Azure Logic Apps, review [Connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
For more information, review the following documentation:
## ISE connectors
-In an integration service environment (ISE), these managed connectors also have [ISE versions](introduction.md#ise-and-connectors), which have different capabilities than their multi-tenant versions:
+In an integration service environment (ISE), these managed connectors also have [ISE versions](introduction.md#ise-and-connectors), which have different capabilities than their multitenant versions:
> [!NOTE] >
For more information, see these topics:
[azure-key-vault-icon]: ./media/apis-list/azure-key-vault.png [azure-ml-icon]: ./media/apis-list/azure-ml.png [azure-monitor-logs-icon]: ./media/apis-list/azure-monitor-logs.png
-[azure-queues-icon]: ./media/apis-list/azure-queues.png
+[azure-queues-icon]: ./media/apis-list/azure-queue-storage.png
[azure-resource-manager-icon]: ./media/apis-list/azure-resource-manager.png [azure-service-bus-icon]: ./media/apis-list/azure-service-bus.png [azure-sql-data-warehouse-icon]: ./media/apis-list/azure-sql-data-warehouse.png
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
If you define more than one scale rule, the container app begins to scale once t
## HTTP
-With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales. [Container Apps jobs](jobs.md) don't support HTTP scaling rules.
+With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales. Every 15 seconds, the number of concurrent requests is calculated as the number of requests in the past 15 seconds divided by 15. [Container Apps jobs](jobs.md) don't support HTTP scaling rules.
In the following example, the revision scales out up to five replicas and can scale in to zero. The scaling property is set to 100 concurrent requests per second.
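For instance, 1,500 requests received in the trailing 15 seconds count as 100 concurrent requests, which meets this threshold. A minimal sketch of such a rule with the Azure CLI follows; the scale-rule flag names are assumptions based on the `az containerapp` extension, and the app, environment, and image values are placeholders:

```azurecli
# Sketch: HTTP scale rule at 100 concurrent requests, scaling between 0 and 5 replicas.
# Flag names are assumed from the az containerapp extension; resource names are placeholders.
az containerapp create \
  --name my-container-app \
  --resource-group my-resource-group \
  --environment my-environment \
  --image mcr.microsoft.com/k8se/quickstart:latest \
  --min-replicas 0 \
  --max-replicas 5 \
  --scale-rule-name http-rule \
  --scale-rule-type http \
  --scale-rule-http-concurrency 100
```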
az containerapp create \
## TCP
-With a TCP scaling rule, you have control over the threshold of concurrent TCP connections that determines how your app scales. [Container Apps jobs](jobs.md) don't support TCP scaling rules.
+With a TCP scaling rule, you have control over the threshold of concurrent TCP connections that determines how your app scales. Every 15 seconds, the number of concurrent connections is calculated as the number of connections in the past 15 seconds divided by 15. [Container Apps jobs](jobs.md) don't support TCP scaling rules.
In the following example, the container app revision scales out up to five replicas and can scale in to zero. The scaling threshold is set to 100 concurrent connections per second.
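A minimal sketch of an equivalent TCP rule follows; as before, the flag names (particularly `--scale-rule-tcp-concurrency`) are assumptions, and the resource names are placeholders:

```azurecli
# Sketch: TCP scale rule at 100 concurrent connections, scaling between 0 and 5 replicas.
# Flag names are assumed; resource names are placeholders.
az containerapp create \
  --name my-tcp-app \
  --resource-group my-resource-group \
  --environment my-environment \
  --image mcr.microsoft.com/k8se/quickstart:latest \
  --min-replicas 0 \
  --max-replicas 5 \
  --scale-rule-name tcp-rule \
  --scale-rule-type tcp \
  --scale-rule-tcp-concurrency 100
```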
container-apps Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/services.md
For more information on the service commands and arguments, see the
- Add-ons are in public preview. - Any container app created before May 23, 2023 isn't eligible to use add-ons. - Add-ons come with minimal guarantees. For instance, they're automatically restarted if they crash, however there's no formal quality of service or high-availability guarantees associated with them. For production workloads, use Azure-managed services.
+- If you use your own virtual network, you must use a workload profiles environment. Add-ons aren't supported in Consumption-only environments that use custom virtual networks.
## Next steps
cosmos-db Get Started Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-started-change-data-capture.md
First, create a straightforward [Azure Blob Storage](../storage/blobs/index.yml)
:::image type="content" source="media/get-started-change-data-capture/sink-container-name.png" alt-text="Screenshot of the blob container named output set as the sink target.":::
-1. Locate the **Update method** section and change the selections to only allow **delete** and **update** operations. Also, specify the **Key columns** as a **List of columns** using the field `_{rid}` as the unique identifier.
+1. Locate the **Update method** section and change the selections to only allow **delete** and **update** operations. Also, specify the **Key columns** as a **List of columns** using the field `{_rid}` as the unique identifier.
:::image type="content" source="media/get-started-change-data-capture/sink-methods-columns.png" alt-text="Screenshot of update methods and key columns being specified for the sink.":::
cosmos-db How To Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-private-link.md
After you create the private endpoint, you can integrate it with a private DNS z
```azurecli-interactive #Zone name differs based on the API type and group ID you are using.
-zoneName="privatelink.mongocluster.azure.com"
+zoneName="privatelink.mongocluster.cosmos.azure.com"
az network private-dns zone create \ --resource-group $ResourceGroupName \
cost-management-billing Migrate Ea Reporting Arm Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/migrate-ea-reporting-arm-apis-overview.md
The following information describes the differences between the older Azure Ente
| Use | Azure Enterprise Reporting APIs | Microsoft Cost Management APIs | | | | |
-| Authentication | API key provisioned in the Enterprise Agreement (EA) portal | Microsoft Entra authentication using user tokens or service principals. Service principals take the place of API keys. |
+| Authentication | API key provisioned in the [Azure portal](../manage/enterprise-rest-apis.md#api-key-generation) | Microsoft Entra authentication using user tokens or service principals. Service principals take the place of API keys. |
| Scopes and permissions | All requests are at the enrollment scope. API Key permission assignments will determine whether data for the entire enrollment, a department, or a specific account is returned. No user authentication. | Users or service principals are assigned access to the enrollment, department, or account scope. | | URI Endpoint | `https://consumption.azure.com` | `https://management.azure.com` | | Development status | In maintenance mode. On the path to deprecation. | In active development |
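As a hedged sketch of the newer model, a service principal signs in with Microsoft Entra ID and calls the `https://management.azure.com` endpoint; the dataset path and API version shown here are placeholders, not values taken from this article:

```azurecli
# Sign in as a service principal (takes the place of the old API key).
az login --service-principal \
  --username <app-id> \
  --password <client-secret> \
  --tenant <tenant-id>

# Example call to a consumption dataset at subscription scope.
# The exact resource path and api-version depend on the data you need.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Consumption/usageDetails?api-version=<api-version>"
```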
cost-management-billing Analyze Cost Data Azure Cost Management Power Bi Template App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md
The following reports are available in the app.
- Azure Marketplace charges - Overages and total charges
-The Billing account overview page might show costs that differ from costs shown in the EA portal.
>[!NOTE] >The **Select date range** selector doesn't affect or change overview tiles. Instead, the overview tiles show the costs for the current billing month. This behavior is intentional.
Here's how values in the overview tiles are calculated.
- The value shown in the **New purchase amount** tile is calculated as the sum of `newPurchases`. - The value shown in the **Total charges** tile is calculated as the sum of (`adjustments` + `ServiceOverage` + `chargesBilledseparately` + `azureMarketplaceServiceCharges`).
-The EA portal doesn't show the Total charges column. The Power BI template app includes Adjustments, Service Overage, Charges billed separately, and Azure Marketplace service charges as Total charges.
-
-The Prepayment Usage shown in the EA portal isn't available in the Template app as part of the total charges.
+The Power BI template app includes Adjustments, Service Overage, Charges billed separately, and Azure Marketplace service charges as Total charges.
**Usage by Subscriptions and Resource Groups** - Provides a cost over time view and charts showing cost by subscription and resource group.
cost-management-billing Assign Access Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/assign-access-acm-data.md
Title: Assign access to Cost Management data
-description: This article walks you though assigning permission to Cost Management data for various access scopes.
+description: This article walks you through assigning permission to Cost Management data for various access scopes.
Previously updated : 05/12/2023 Last updated : 02/13/2024
# Assign access to Cost Management data
-For users with Azure Enterprise agreements, a combination of permissions granted in the Azure portal and the Enterprise (EA) portal define a user's level of access to Cost Management data. For users with other Azure account types, defining a user's level of access to Cost Management data is simpler by using Azure role-based access control (Azure RBAC). This article walks you through assigning access to Cost Management data. After the combination of permissions is assigned, the user views data in Cost Management based on their access scope and on the scope that they select in the Azure portal.
+For users with Azure Enterprise agreements, a combination of permissions granted in the Azure portal and the Enterprise (EA) portal defines a user's level of access to Cost Management data. For users with other Azure account types, defining a user's level of access to Cost Management data is simpler by using Azure role-based access control (RBAC). This article walks you through assigning access to Cost Management data. After the combination of permissions is assigned, the user views data in Cost Management based on their access scope and on the scope that they select in the Azure portal.
-The scope that a user selects is used throughout Cost Management to provide data consolidation and to control access to cost information. When using scopes, users don't multi-select them. Instead, they select a larger scope that child scopes roll up to and then they filter-down to what they want to view. Data consolidation is important to understand because some people shouldn't access a parent scope that child scopes roll up to.
+The scope that a user selects is used throughout Cost Management to provide data consolidation and to control access to cost information. When scopes are used, users don't multi-select them. Instead, they select a larger scope that child scopes roll up to, and then they filter down to what they want to view. Data consolidation is important to understand because some people shouldn't access a parent scope that child scopes roll up to.
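As a minimal sketch, assuming the built-in **Cost Management Reader** role and placeholder identifiers, read access at the subscription scope could be granted with the Azure CLI:

```azurecli
# Sketch: grant read access to cost data at the subscription scope.
# The assignee and subscription ID are placeholders.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Cost Management Reader" \
  --scope "/subscriptions/<subscription-id>"
```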
Watch the [Cost Management controlling access](https://www.youtube.com/watch?v=_uQzQ9puPyM) video to learn about assigning access to view costs and charges with Azure role-based access control (Azure RBAC). To watch other videos, visit the [Cost Management YouTube channel](https://www.youtube.com/c/AzureCostManagement).
Watch the [Cost Management controlling access](https://www.youtube.com/watch?v=_
## Cost Management scopes
-Cost management supports a variety of Azure account types. To view the full list of supported account types, see [Understand Cost Management data](understand-cost-mgt-data.md). The type of account determines available scopes.
+Cost management supports various Azure account types. To view the full list of supported account types, see [Understand Cost Management data](understand-cost-mgt-data.md). The type of account determines available scopes.
### Azure EA subscription scopes
To view cost data for Azure EA subscriptions, a user must have at least read acc
| **Scope** | **Defined at** | **Required access to view data** | **Prerequisite EA setting** | **Consolidates data to** |
| | | | | |
-| Billing account¹ | [https://ea.azure.com](https://ea.azure.com/) | • Enterprise Admin<br> • Enrollment reader (Enterprise admin read-only) | None | All subscriptions from the enterprise agreement |
-| Department | [https://ea.azure.com](https://ea.azure.com/) | Department Admin | **DA view charges** enabled | All subscriptions belonging to an enrollment account that is linked to the department |
-| Enrollment account² | [https://ea.azure.com](https://ea.azure.com/) | Account Owner | **AO view charges** enabled | All subscriptions from the enrollment account |
+| Billing account¹ | [https://portal.azure.com](https://portal.azure.com/) | • Enterprise Admin<br> • Enrollment reader (Enterprise admin read-only) | None | All subscriptions from the enterprise agreement |
+| Department | [https://portal.azure.com](https://portal.azure.com/) | Department Admin | **DA view charges** enabled | All subscriptions belonging to an enrollment account that is linked to the department |
+| Enrollment account² | [https://portal.azure.com](https://portal.azure.com/) | Account Owner | **AO view charges** enabled | All subscriptions from the enrollment account |
| Management group | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All subscriptions below the management group |
| Subscription | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All resources/resource groups in the subscription |
| Resource group | [https://portal.azure.com](https://portal.azure.com/) | Cost Management Reader (or Contributor) | **AO view charges** enabled | All resources in the resource group |
To view cost data for Azure EA subscriptions, a user must have at least read acc
² The enrollment account is also referred to as the account owner.
-Direct enterprise administrators can assign the billing account, department, and enrollment account scope the in the [Azure portal](https://portal.azure.com/). For more information, see [Azure portal administration for direct Enterprise Agreements](../manage/direct-ea-administration.md).
+Enterprise administrators can assign the billing account, department, and enrollment account scope in the [Azure portal](https://portal.azure.com/). For more information, see [Azure portal administration for direct Enterprise Agreements](../manage/direct-ea-administration.md).
## Other Azure account scopes
To view cost data for other Azure subscriptions, a user must have at least read
- Subscription - Resource group
-Various scopes are available after partners onboard customers to a Microsoft Customer Agreement. CSP customers can then use Cost Management features when enabled by their CSP partner. For more information, see [Get started with Cost Management for partners](get-started-partners.md).
+Various scopes are available after partners onboard customers to a Microsoft Customer Agreement. Cloud solution providers (CSP) customers can then use Cost Management features when enabled by their CSP partner. For more information, see [Get started with Cost Management for partners](get-started-partners.md).
## Enable access to costs in the Azure portal
-The department scope requires the **Department admins can view charges** (DA view charges) option set to **On**. Configure the option in either the Azure portal or the EA portal. All other scopes require the **Account owners can view charges** (AO view charges) option set to **On**.
+The department scope requires the **Department admins can view charges** (DA view charges) option set to **On**. Configure the option in the Azure portal. All other scopes require the **Account owners can view charges** (Account owner (AO) view charges) option set to **On**.
To enable an option in the Azure portal:
To enable an option in the Azure portal:
After the view charge options are enabled, most scopes also require Azure role-based access control (Azure RBAC) permission configuration in the Azure portal.
-## Enable access to costs in the EA portal
-
-> [!NOTE]
-> The information in the section applies only to users that have an Enterprise Agreement with a Microsoft partner (indirect EA).
-
-The department scope requires the **DA view charges** option **Enabled** in the EA portal. Configure the option in either the Azure portal or the EA portal. All other scopes require the **AO view charges** option **Enabled** in the EA portal.
-
-To enable an option in the EA portal:
-
-1. Sign in to the EA portal at [https://ea.azure.com](https://ea.azure.com) with an enterprise administrator account.
-2. Select **Manage** in the left pane.
-3. For the cost management scopes that you want to provide access to, enable the charge option to **DA view charges** and/or **AO view charges**.
- ![Enrollment tab showing DA and AO view charges options](./media/assign-access-acm-data/ea-portal-enrollment-tab.png)
-
-After the view charge options are enabled, most scopes also require Azure role-based access control (Azure RBAC) permission configuration in the Azure portal.
-
## Enterprise administrator role
By default, an enterprise administrator can access the billing account (Enterprise Agreement/enrollment) and all other scopes, which are child scopes. The enterprise administrator assigns access to scopes for other users. As a best practice for business continuity, you should always have two users with enterprise administrator access. The following sections are walk-through examples of the enterprise administrator assigning access to scopes for other users.
## Assign billing account scope access
-Access to the billing account scope requires enterprise administrator permission in the EA portal. The enterprise administrator can view costs across the entire EA enrollment or multiple enrollments. No action is required in the Azure portal for the billing account scope.
-
-1. Sign in to the EA portal at [https://ea.azure.com](https://ea.azure.com) with an enterprise administrator account.
-2. Select **Manage** in the left pane.
-3. On the **Enrollment** tab, select the enrollment that you want to manage.
- ![select your enrollment in the EA portal](./media/assign-access-acm-data/ea-portal.png)
-4. Select **+ Add Administrator**.
-5. In the Add Administrator box, select the authentication type and type the user's email address.
-6. If the user should have read-only access to cost and usage data, under **Read-only**, select **Yes**. Otherwise, select **No**.
-7. Select **Add** to create the account.
- ![example information shown in the Add administrator box](./media/assign-access-acm-data/add-admin.png)
+Access to the billing account scope requires enterprise administrator permission. The enterprise administrator can view costs across the entire EA enrollment or multiple enrollments. The enterprise administrator can assign access to the billing account scope to another user with read only access. For more information, see [Add another enterprise administrator](../manage/direct-ea-administration.md#add-another-enterprise-administrator).
-It may take up to 30 minutes before the new user can access data in Cost Management.
+It might take up to 30 minutes before the user can access data in Cost Management.
### Assign department scope access
-Access to the department scope requires department administrator (DA view charges) access in the EA portal. The department administrator can view costs and usage data associated with a department or to multiple departments. Data for the department includes all subscriptions belonging to an enrollment account that are linked to the department. No action is required in the Azure portal.
+Access to the department scope requires department administrator (DA view charges) access. The department administrator can view costs and usage data associated with a department or to multiple departments. Data for the department includes all subscriptions belonging to an enrollment account that are linked to the department.
-1. Sign in to the EA portal at [https://ea.azure.com](https://ea.azure.com) with an enterprise administrator account.
-2. Select **Manage** in the left pane.
-3. On the **Enrollment** tab, select the enrollment that you want to manage.
-4. Select the **Department** tab and then select **Add Administrator**.
-5. In the Add Department Administrator box, select the authentication type and then type the user's email address.
-6. If the user should have read-only access to cost and usage data, under **Read-only**, select **Yes**. Otherwise, select **No**.
-7. Select the departments that you want to grant department administrative permission to.
-8. Select **Add** to create the account.
- ![enter required information in the Add department administrator box](./media/assign-access-acm-data/add-depart-admin.png)
-
-Direct enterprise administrators can assign department administrator access in the Azure portal. For more information, see [Add a department administrator in the Azure portal](../manage/direct-ea-administration.md#add-a-department-administrator).
+Enterprise administrators can assign department administrator access. For more information, see [Add a department administrator](../manage/direct-ea-administration.md#add-a-department-administrator).
## Assign enrollment account scope access
-Access to the enrollment account scope requires account owner (AO view charges) access in the EA portal. The account owner can view costs and usage data associated with the subscriptions created from that enrollment account. No action is required in the Azure portal.
-
-1. Sign in to the EA portal at [https://ea.azure.com](https://ea.azure.com) with an enterprise administrator account.
-2. Select **Manage** in the left pane.
-3. On the **Enrollment** tab, select the enrollment that you want to manage.
-4. Select the **Account** tab and then select **Add Account**.
-5. In the Add Account box, select the **Department** to associate the account to, or leave it as unassigned.
-6. Select the authentication type and type the account name.
-7. Type the user's email address and then optionally type the cost center.
-8. Select on **Add** to create the account.
- ![enter required information in the Add account box for an enrollment account](./media/assign-access-acm-data/add-account.png)
-
-After completing the steps above, the user account becomes an enrollment account in the Enterprise portal and can create subscriptions. The user can access cost and usage data for subscriptions that they create.
-
-Direct enterprise administrators can assign account owner access in the Azure portal. For more information, see [Add an account owner in the Azure portal](../manage/direct-ea-administration.md#add-an-account-and-account-owner).
+Access to the enrollment account scope requires account owner (AO view charges) access. The account owner can view costs and usage data associated with the subscriptions created from that enrollment account. Enterprise administrators can assign account owner access. For more information, see [Add an account owner in the Azure portal](../manage/direct-ea-administration.md#add-an-account-and-account-owner).
## Assign management group scope access
-Access to view the management group scope requires at least the Cost Management Reader (or Reader) permission. You can configure permissions for a management group in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the management group to enable access for others. And for Azure EA accounts, you must also have enabled the **AO view charges** setting in the EA portal.
+Access to view the management group scope requires at least the Cost Management Reader (or Reader) permission. You can configure permissions for a management group in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the management group to enable access for others. And for Azure EA accounts, you must also enable the **AO view charges** setting.
-- Assign the Cost Management Reader (or reader) role to a user at the management group scope.
- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+You can assign the Cost Management Reader (or reader) role to a user at the management group scope. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
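If you prefer to script the assignment instead of using the portal, here's a hedged sketch with the Az PowerShell module. The management group ID, subscription ID, resource group name, and sign-in name are placeholders; the same `New-AzRoleAssignment` pattern applies to the subscription and resource group scopes described in the next sections.

```powershell
# Hedged sketch: grant the Cost Management Reader role at a management group scope
# with the Az PowerShell module. IDs, names, and the user are placeholders.
Connect-AzAccount

New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Cost Management Reader" `
    -Scope "/providers/Microsoft.Management/managementGroups/contoso-mg"

# The same cmdlet works for the narrower scopes covered in the following sections:
#   Subscription:   -Scope "/subscriptions/00000000-0000-0000-0000-000000000000"
#   Resource group: -Scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg"
```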
## Assign subscription scope access
-Access to a subscription requires at least the Cost Management Reader (or Reader) permission. You can configure permissions to a subscription in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the subscription to enable access for others. And for Azure EA accounts, you must also have enabled the **AO view charges** setting in the EA portal.
+Access to a subscription requires at least the Cost Management Reader (or Reader) permission. You can configure permissions to a subscription in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the subscription to enable access for others. And for Azure EA accounts, you must also enable the **AO view charges** setting.
-- Assign the Cost Management Reader (or reader) role to a user at the subscription scope.
- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+You can assign the Cost Management Reader (or reader) role to a user at the subscription scope. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Assign resource group scope access
-Access to a resource group requires at least the Cost Management Reader (or Reader) permission. You can configure permissions to a resource group in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the resource group to enable access for others. And for Azure EA accounts, you must also have enabled the **AO view charges** setting in the EA portal.
-
+Access to a resource group requires at least the Cost Management Reader (or Reader) permission. You can configure permissions to a resource group in the Azure portal. You must have at least the User Access Administrator (or Owner) permission for the resource group to enable access for others. And for Azure EA accounts, you must also enable the **AO view charges** setting.
-- Assign the Cost Management Reader (or reader) role to a user at the resource group scope.
- For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+You can assign the Cost Management Reader (or reader) role to a user at the resource group scope. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Cross-tenant authentication issues
-Currently, Cost Management has limited support for cross-tenant authentication. In some circumstances when you try to authenticate across tenants, you may receive an **Access denied** error in cost analysis. This issue might occur if you configure Azure role-based access control (Azure RBAC) to another tenant's subscription and then try to view cost data.
+Currently, Cost Management provides limited support for cross-tenant authentication. In some circumstances when you try to authenticate across tenants, you may receive an **Access denied** error in cost analysis. This issue might occur if you configure Azure role-based access control (Azure RBAC) to another tenant's subscription and then try to view cost data.
-*To work around the problem*: After you configure cross-tenant Azure RBAC, wait an hour. Then, try to view costs in cost analysis or grant Cost Management access to users in both tenants.
+*To work around the problem*: After you configure cross-tenant Azure RBAC, wait an hour. Then, try to view costs in cost analysis or grant Cost Management access to users in both tenants.
## Next steps
-- If you haven't already completed the first quickstart for Cost Management, read it at [Start analyzing costs](quick-acm-cost-analysis.md).
+- If you haven't read the first quickstart for Cost Management, read it at [Start analyzing costs](quick-acm-cost-analysis.md).
cost-management-billing Cost Mgt Alerts Monitor Usage Spending https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md
Credit alerts notify you when your Azure Prepayment (previously called monetary
## Department spending quota alerts
-Department spending quota alerts notify you when department spending reaches a fixed threshold of the quota. Spending quotas are configured in the EA portal. Whenever a threshold is met it generates an email to department owners and is shown in cost alerts. For example, 50% or 75% of the quota.
+Department spending quota alerts notify you when department spending reaches a fixed threshold of the quota. Spending quotas are configured in the Azure portal. Whenever a threshold is met it generates an email to department owners and is shown in cost alerts. For example, 50% or 75% of the quota.
## Supported alert features by offer categories
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
description: This article helps you better understand data included in Cost Management. It also explains how frequently data is processed, collected, shown, and closed. Previously updated : 01/24/2024 Last updated : 02/16/2024
The following information shows the currently supported [Microsoft Azure offers]
| **Category** | **Offer name** | **Quota ID** | **Offer number** | **Data available from** |
| | | | | |
-| **Azure Government** | Azure Government Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-USGOV-0017P | May 2014¹ |
+| **Azure Government** | Azure Government Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-USGOV-0017P | May 2014 |
| **Azure Government** | Azure Government pay-as-you-go | Pay-as-you-go_2014-09-01 | MS-AZR-USGOV-0003P | October 2, 2018 |
-| **Enterprise Agreement (EA)** | Enterprise Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0148P | May 2014¹ |
-| **Enterprise Agreement (EA)** | Microsoft Azure Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-0017P | May 2014¹ |
-| **Microsoft Customer Agreement** | Microsoft Azure Plan | EnterpriseAgreement_2014-09-01 | N/A | March 2019² |
-| **Microsoft Customer Agreement** | Microsoft Azure Plan for Dev/Test | MSDNDevTest_2014-09-01 | N/A | March 2019² |
-| **Microsoft Customer Agreement supported by partners** | Microsoft Azure Plan | CSP_2015-05-01, CSP_MG_2017-12-01, and CSPDEVTEST_2018-05-01⁴ | N/A | October 2019 |
-| **Microsoft Developer Network (MSDN)** | MSDN Platforms³ | MSDN_2014-09-01 | MS-AZR-0062P | October 2, 2018 |
+| **Enterprise Agreement (EA)** | Enterprise Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0148P | May 2014 |
+| **Enterprise Agreement (EA)** | Microsoft Azure Enterprise | EnterpriseAgreement_2014-09-01 | MS-AZR-0017P | May 2014 |
+| **Microsoft Customer Agreement** | Microsoft Azure Plan | EnterpriseAgreement_2014-09-01 | N/A | March 2019¹ |
+| **Microsoft Customer Agreement** | Microsoft Azure Plan for Dev/Test | MSDNDevTest_2014-09-01 | N/A | March 2019¹ |
+| **Microsoft Customer Agreement supported by partners** | Microsoft Azure Plan | CSP_2015-05-01, CSP_MG_2017-12-01, and CSPDEVTEST_2018-05-01³ | N/A | October 2019 |
+| **Microsoft Developer Network (MSDN)** | MSDN Platforms² | MSDN_2014-09-01 | MS-AZR-0062P | October 2, 2018 |
| **Pay-as-you-go** | Pay-as-you-go | Pay-as-you-go_2014-09-01 | MS-AZR-0003P | October 2, 2018 |
| **Pay-as-you-go** | Pay-as-you-go Dev/Test | MSDNDevTest_2014-09-01 | MS-AZR-0023P | October 2, 2018 |
-| **Pay-as-you-go** | Microsoft Cloud Partner Program | MPN_2014-09-01 | MS-AZR-0025P | October 2, 2018 |
-| **Pay-as-you-go** | Free Trial³ | FreeTrial_2014-09-01 | MS-AZR-0044P | October 2, 2018 |
-| **Pay-as-you-go** | Azure in Open³ | AzureInOpen_2014-09-01 | MS-AZR-0111P | October 2, 2018 |
-| **Pay-as-you-go** | Azure Pass³ | AzurePass_2014-09-01 | MS-AZR-0120P, MS-AZR-0122P - MS-AZR-0125P, MS-AZR-0128P - MS-AZR-0130P | October 2, 2018 |
-| **Visual Studio** | Visual Studio Enterprise – MPN³ | MPN_2014-09-01 | MS-AZR-0029P | October 2, 2018 |
-| **Visual Studio** | Visual Studio Professional³ | MSDN_2014-09-01 | MS-AZR-0059P | October 2, 2018 |
-| **Visual Studio** | Visual Studio Test Professional³ | MSDNDevTest_2014-09-01 | MS-AZR-0060P | October 2, 2018 |
-| **Visual Studio** | Visual Studio Enterprise³ | MSDN_2014-09-01 | MS-AZR-0063P | October 2, 2018 |
+| **Pay-as-you-go** | Microsoft Cloud Partner Program (MPN) | MPN_2014-09-01 | MS-AZR-0025P | October 2, 2018 |
+| **Pay-as-you-go** | Free Trial² | FreeTrial_2014-09-01 | MS-AZR-0044P | October 2, 2018 |
+| **Pay-as-you-go** | Azure in Open² | AzureInOpen_2014-09-01 | MS-AZR-0111P | October 2, 2018 |
+| **Pay-as-you-go** | Azure Pass² | AzurePass_2014-09-01 | MS-AZR-0120P, MS-AZR-0122P - MS-AZR-0125P, MS-AZR-0128P - MS-AZR-0130P | October 2, 2018 |
+| **Visual Studio** | Visual Studio Enterprise – MPN² | MPN_2014-09-01 | MS-AZR-0029P | October 2, 2018 |
+| **Visual Studio** | Visual Studio Professional² | MSDN_2014-09-01 | MS-AZR-0059P | October 2, 2018 |
+| **Visual Studio** | Visual Studio Test Professional² | MSDNDevTest_2014-09-01 | MS-AZR-0060P | October 2, 2018 |
+| **Visual Studio** | Visual Studio Enterprise² | MSDN_2014-09-01 | MS-AZR-0063P | October 2, 2018 |
-_¹ For data before May 2014, visit the [Azure Enterprise portal](https://ea.azure.com)._
-_² Microsoft Customer Agreements started in March 2019 and don't have any historical data before this point._
+_¹ Microsoft Customer Agreements started in March 2019 and don't have any historical data before this point._
-_³ Historical data for credit-based and pay-in-advance subscriptions might not match your invoice. See the following [Historical data might not match invoice](#historical-data-might-not-match-invoice) section._
+_² Historical data for credit-based and pay-in-advance subscriptions might not match your invoice. See the following [Historical data might not match invoice](#historical-data-might-not-match-invoice) section._
-_⁴ Quota IDs are the same across Microsoft Customer Agreement and classic subscription offers. Classic CSP subscriptions aren't supported._
+_³ Quota IDs are the same across Microsoft Customer Agreement and classic subscription offers. Classic Cloud Solution Provider (CSP) subscriptions aren't supported._
The following offers aren't supported yet:
The following offers aren't supported yet:
| **Cloud Solution Provider (CSP)** | Azure Government CSP | CSP_2015-05-01 | MS-AZR-USGOV-0145P |
| **Cloud Solution Provider (CSP)** | Azure Germany in CSP for Microsoft Cloud Germany | CSP_2015-05-01 | MS-AZR-DE-0145P |
| **Pay-as-you-go** | Azure for Students Starter | DreamSpark_2015-02-01 | MS-AZR-0144P |
-| **Pay-as-you-go** | Azure for Students³ | AzureForStudents_2018-01-01 | MS-AZR-0170P |
+| **Pay-as-you-go** | Azure for Students² | AzureForStudents_2018-01-01 | MS-AZR-0170P |
| **Pay-as-you-go** | Microsoft Azure Sponsorship | Sponsored_2016-01-01 | MS-AZR-0036P |
| **Support Plans** | Standard support | Default_2014-09-01 | MS-AZR-0041P |
| **Support Plans** | Professional Direct support | Default_2014-09-01 | MS-AZR-0042P |
The following table shows included and not included data in Cost Management. All
| **Included** | **Not included** |
| | |
-| Azure service usage (including deleted resources)⁵ | Unbilled services (for example, free tier resources) |
-| Marketplace offering usage⁶ | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Marketplace purchases⁶ | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Reservation purchases⁷ | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
-| Amortization of reservation purchases⁷ | |
-| New Commerce non-Azure products (Microsoft 365 and Dynamics 365) ⁸ | |
+| Azure service usage (including deleted resources)⁴ | Unbilled services (for example, free tier resources) |
+| Marketplace offering usage⁵ | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Marketplace purchases⁵ | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Reservation purchases⁶ | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). |
+| Amortization of reservation purchases⁶ | |
+| New Commerce non-Azure products (Microsoft 365 and Dynamics 365) ⁷ | |
-_⁵ Azure service usage is based on reservation and negotiated prices._
+_⁴ Azure service usage is based on reservation and negotiated prices._
-_⁶ Marketplace purchases aren't available for MSDN and Visual Studio offers at this time._
+_⁵ Marketplace purchases aren't available for MSDN and Visual Studio offers at this time._
-_⁷ Reservation purchases are only available for Enterprise Agreement (EA) and Microsoft Customer Agreement accounts at this time._
+_⁶ Reservation purchases are only available for Enterprise Agreement (EA) and Microsoft Customer Agreement accounts at this time._
-_⁸ Only available for specific offers._
+_⁷ Only available for specific offers._
Cost Management data only includes the usage and purchases from services and resources that are actively running. The cost data you see is based on past records. It includes resources, resource groups, and subscriptions that might be stopped, deleted, or canceled. So, it might not match with the current resources, resource groups, and subscriptions you see in tools like Azure Resource Manager or Azure Resource Graph. They only display currently deployed resources in your subscriptions. Not all resources emit usage and therefore might not be represented in the cost data. Similarly, Azure Resource Manager doesn't track some resources so they might not be represented in subscription resources.
Cost Management receives tags as part of each usage record submitted by the indi
- Some deployed resources might not support tags or might not include tags in usage data.
- Resource tags are only included in usage data while the tag is applied – tags aren't applied to historical data.
- Resource tags are only available in Cost Management after the data is refreshed.
-- Resource tags are only available in Cost Management when the resource is active/running and producing usage records. For example, when a VM is deallocated.
-- Managing tags requires contributor access to each resource or the [tag contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) Azure RBAC role.
+- Resource tags are only available in Cost Management when the resource is active/running and producing usage records. For example, when a virtual machine (VM) is deallocated.
+- Managing tags requires contributor access to each resource or the [tag contributor](../../role-based-access-control/built-in-roles.md#tag-contributor) Azure role-based-access-control (RBAC) role.
- Managing tag policies requires either owner or policy contributor access to a management group, subscription, or resource group.
If you don't see a specific tag in Cost Management, consider the following questions:
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
Previously updated : 05/31/2023 Last updated : 02/15/2024
# Assign Enterprise Agreement roles to service principals
-You can manage your Enterprise Agreement (EA) enrollment in the [Azure Enterprise portal](https://ea.azure.com/). Direct Enterprise customer can now manage Enterprise Agreement (EA) enrollment in [Azure portal](https://portal.azure.com/).
-You can create different roles to manage your organization, view costs, and create subscriptions. This article helps you automate some of those tasks by using Azure PowerShell and REST APIs with Microsoft Entra ID service principals.
+You can manage your Enterprise Agreement (EA) enrollment in the [Azure portal](https://portal.azure.com/). You can create different roles to manage your organization, view costs, and create subscriptions. This article helps you automate some of those tasks by using Azure PowerShell and REST APIs with Microsoft Entra ID service principals.
> [!NOTE]
> If you have multiple EA billing accounts in your organization, you must grant the EA roles to Microsoft Entra ID service principals individually in each EA billing account.
Later in this article, you'll give permission to the Microsoft Entra app to act
| DepartmentReader | Download the usage details for the department they administer. Can view the usage and charges associated with their department. | db609904-a47f-4794-9be8-9bd86fbffd8a |
| SubscriptionCreator | Create new subscriptions in the given scope of Account. | a0bcee42-bf30-4d1b-926a-48d21664ef71 |
-- An EnrollmentReader role can be assigned to a service principal only by a user who has an enrollment writer role. The EnrollmentReader role assigned to a service principal isn't shown in the EA portal. It's created by programmatic means and is only for programmatic use.
+- An EnrollmentReader role can be assigned to a service principal only by a user who has an enrollment writer role. The EnrollmentReader role assigned to a service principal isn't shown in the Azure portal. It's created by programmatic means and is only for programmatic use.
- A DepartmentReader role can be assigned to a service principal only by a user who has an enrollment writer or department writer role.
-- A SubscriptionCreator role can be assigned to a service principal only by a user who is the owner of the enrollment account (EA administrator). The role isn't shown in the EA portal. It's created by programmatic means and is only for programmatic use.
-- The EA purchaser role isn't shown in the EA portal. It's created by programmatic means and is only for programmatic use.
+- A SubscriptionCreator role can be assigned to a service principal only by a user who is the owner of the enrollment account (EA administrator). The role isn't shown in the Azure portal. It's created by programmatic means and is only for programmatic use.
+- The EA purchaser role isn't shown in the Azure portal. It's created by programmatic means and is only for programmatic use.
When you grant an EA role to a service principal, you must use the `billingRoleAssignmentName` required property. The parameter is a unique GUID that you must provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command. You can also use the [Online GUID / UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.
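For example, a minimal PowerShell sketch for generating that GUID:

```powershell
# Generate a unique GUID to use as the billingRoleAssignmentName value in the
# role assignment request. Keep the value; you need it to build the request URL.
$billingRoleAssignmentName = (New-Guid).Guid
$billingRoleAssignmentName
```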
A service principal can have only one role.
| `properties.principalTenantId` | See [Find your service principal and tenant IDs](#find-your-service-principal-and-tenant-ids). |
| `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountName}/billingRoleDefinitions/24f8edb6-1668-4659-b5e2-40bb5f3a7d7e` |
- The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the EA portal and Azure portal.
+ The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the Azure portal.
Notice that `24f8edb6-1668-4659-b5e2-40bb5f3a7d7e` is a billing role definition ID for an EnrollmentReader.
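As an illustration only, the following PowerShell sketch shows how such an assignment request might be sent. The request URL shape and `api-version` are assumptions based on the Billing Role Assignments REST API; confirm them against the current API reference before use. The billing account name, object ID, and tenant ID are placeholders.

```powershell
# Hedged sketch: assign the EnrollmentReader role (24f8edb6-1668-4659-b5e2-40bb5f3a7d7e)
# to a service principal. The URL and api-version are assumptions; all IDs are placeholders.
$billingAccountName        = "1234567"                              # EA enrollment number
$billingRoleAssignmentName = (New-Guid).Guid

$body = @{
    properties = @{
        principalId       = "00000000-0000-0000-0000-000000000000"  # service principal object ID
        principalTenantId = "11111111-1111-1111-1111-111111111111"  # tenant that holds the service principal
        roleDefinitionId  = "/providers/Microsoft.Billing/billingAccounts/$billingAccountName/billingRoleDefinitions/24f8edb6-1668-4659-b5e2-40bb5f3a7d7e"
    }
} | ConvertTo-Json -Depth 5

$uri = "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/$billingAccountName" +
       "/billingRoleAssignments/$billingRoleAssignmentName?api-version=2019-10-01-preview"

$token = (Get-AzAccessToken).Token
Invoke-RestMethod -Method Put -Uri $uri -Body $body -ContentType "application/json" `
    -Headers @{ Authorization = "Bearer $token" }
```

The same pattern applies to the DepartmentReader and SubscriptionCreator roles; swap in the role definition IDs and path segments shown later in this section.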
For the EA purchaser role, use the same steps for the enrollment reader. Specify
| `properties.principalTenantId` | See [Find your service principal and tenant IDs](#find-your-service-principal-and-tenant-ids). |
| `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountName}/billingRoleDefinitions/db609904-a47f-4794-9be8-9bd86fbffd8a` |
- The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the EA portal and Azure portal.
+ The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the Azure portal.
The billing role definition ID of `db609904-a47f-4794-9be8-9bd86fbffd8a` is for a department reader.
Now you can use the service principal to automatically access EA APIs. The servi
| `properties.principalTenantId` | See [Find your service principal and tenant IDs](#find-your-service-principal-and-tenant-ids). |
| `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}/enrollmentAccounts/{enrollmentAccountID}/billingRoleDefinitions/a0bcee42-bf30-4d1b-926a-48d21664ef71` |
- The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the EA portal and the Azure portal.
+ The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the Azure portal.
The billing role definition ID of `a0bcee42-bf30-4d1b-926a-48d21664ef71` is for the subscription creator role.
Now you can use the service principal to automatically access EA APIs. The servi
## Verify service principal role assignments
-Service princnipal role assignments are not visible in the Azure portal. You can view enrollment account role assignments, including the subscription creator role, with the [Billing Role Assignments - List By Enrollment Account - REST API (Azure Billing)](/rest/api/billing/2019-10-01-preview/billing-role-assignments/list-by-enrollment-account) API. Use the API to verify that the role assignment was successful.
+Service principal role assignments are not visible in the Azure portal. You can view enrollment account role assignments, including the subscription creator role, with the [Billing Role Assignments - List By Enrollment Account - REST API (Azure Billing)](/rest/api/billing/2019-10-01-preview/billing-role-assignments/list-by-enrollment-account) API. Use the API to verify that the role assignment was successful.
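A hedged PowerShell sketch of that verification call follows. The URL shape and `api-version` are assumptions; check them against the linked REST reference, and treat the billing account and enrollment account names as placeholders.

```powershell
# List role assignments under an enrollment account to confirm that a service
# principal assignment succeeded. URL and api-version are assumptions; IDs are placeholders.
$billingAccountName    = "1234567"
$enrollmentAccountName = "7654321"

$uri = "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/$billingAccountName" +
       "/enrollmentAccounts/$enrollmentAccountName/billingRoleAssignments?api-version=2019-10-01-preview"

$token = (Get-AzAccessToken).Token
$assignments = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Authorization = "Bearer $token" }

# Show which principal holds which role definition
$assignments.value | ForEach-Object {
    "{0} -> {1}" -f $_.properties.principalId, $_.properties.roleDefinitionId
}
```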
## Troubleshoot
If you receive the following error when making your API call, then you may be in
## Next steps
-Learn more about [Azure EA portal administration](ea-portal-administration.md).
+[Get started with your Enterprise Agreement billing account](ea-direct-portal-get-started.md).
cost-management-billing Billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/billing-subscription-transfer.md
Previously updated : 06/06/2023 Last updated : 02/13/2024
Before you transfer billing ownership for a subscription, read [Azure subscripti
If you want to keep your billing ownership but change subscription type, see [Switch your Azure subscription to another offer](switch-azure-offer.md). To control who can access resources in the subscription, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
-If you're an Enterprise Agreement (EA) customer, your enterprise administrator can transfer billing ownership of your subscriptions between accounts. For more information, [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership).
+If you're an Enterprise Agreement (EA) customer, your enterprise administrator can transfer billing ownership of your subscriptions between accounts.
Only the billing administrator of an account can transfer ownership of a subscription.
cost-management-billing Cost Management Automation Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cost-management-automation-scenarios.md
The following APIs are for Enterprise only:
### What's the difference between the Enterprise Reporting APIs and the Consumption APIs? When should I use each?
These APIs have a similar set of functionality and can answer the same broad set of questions in the billing and cost management space. But they target different audiences:
-- Enterprise Reporting APIs are available to customers who have signed an Enterprise Agreement with Microsoft that grants them access to negotiated Azure Prepayment (previously called monetary commitment) and custom pricing. The APIs require a key that you can get from the [Enterprise Portal](https://ea.azure.com). For a description of these APIs, see [Overview of Reporting APIs for Enterprise customers](enterprise-api.md).
+- Enterprise Reporting APIs are available to customers who have signed an Enterprise Agreement with Microsoft that grants them access to negotiated Azure Prepayment (previously called monetary commitment) and custom pricing. The APIs require a key that you can get from the Azure portal. For more information, see [API key generation](enterprise-rest-apis.md#api-key-generation). For a description of these APIs, see [Overview of Reporting APIs for Enterprise customers](enterprise-api.md).
- Consumption APIs are available to all customers, with a few exceptions. For more information, see [Cost Management automation overview](../automate/automation-overview.md). We recommend the provided APIs as the solution for the latest development scenarios.
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 01/11/2024 Last updated : 02/14/2024
# EA Billing administration on the Azure portal
-This article explains the common tasks that an Enterprise Agreement (EA) administrator accomplishes in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). A direct enterprise agreement is signed between Microsoft and an enterprise agreement customer. Conversely, an indirect EA is one where a customer signs an agreement with a Microsoft partner. This article is applicable for both direct and indirect EA customers.
- > [!NOTE]
-> On November 15, 2023, the Azure Enterprise portal is retiring for EA enrollments in the Commercial cloud and is becoming read-only for EA enrollments in the Azure Government cloud.
-> Customers and Partners should use Cost Management + Billing in the Azure portal to manage their enrollments. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
->
-> Since August 14, 2023, EA customers is not be able to manage their Azure Government EA enrollments from the [Azure portal](https://portal.azure.com). Instead, they can manage it from the Azure Government portal at [https://portal.azure.us](https://portal.azure.us). The functionality mentioned in this article is same as the Azure Government portal.
+> On February 15, 2024, the [EA portal](https://ea.azure.com) retired. It's now read only. All EA customers and partners use Cost Management + Billing in the Azure portal to manage their enrollments.
+
+This article explains the common tasks that an Enterprise Agreement (EA) administrator accomplishes in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). A direct enterprise agreement is signed between Microsoft and an enterprise agreement customer. Conversely, an indirect EA is one where a customer signs an agreement with a Microsoft partner. This article is applicable for both direct and indirect EA customers.
## Manage your enrollment
If your enterprise administrator can't assist you, create an [Azure support re
>[!NOTE]
> - We recommend that you have at least one active Enterprise Administrator at all times. If no active Enterprise Administrator is available, contact your partner to change the contact information on the Volume License agreement. Your partner can make changes to the customer contact information by using the Contact Information Change Request (CICR) process available in the eAgreements (VLCM) tool.
-> - Any new EA administrator account created using the CICR process is assigned read-only permissions to the enrollment in the EA portal and Azure portal. To elevate access, create an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+> - Any new EA administrator account created using the CICR process is assigned read-only permissions to the enrollment in the Azure portal. To elevate access, create an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
## Create an Azure enterprise department
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Title: View your Azure usage summary details and download reports for EA enrollm
description: This article explains how enterprise administrators of direct and indirect Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Previously updated : 01/24/2024 Last updated : 02/14/2024
This article explains how partner administrators of indirect enrollments and enterprise administrators of direct and indirect Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Charges are presented at the summary level across all accounts and subscriptions of the enrollment.
-> [!NOTE]
->On February 15, 2024, the Azure Enterprise portal is retiring for EA enrollments in the Azure Government cloud. The Azure Enterprise portal is already retired for EA enrollments in the commercial cloud. Customers and Partners should use Cost Management + Billing in the Azure portal to manage their enrollments. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
->
Check out the [EA admin manage consumption and invoices](https://www.youtube.com/watch?v=bO8V9eLfQHY) video. It's part of the [Enterprise Customer Billing Experience in the Azure portal](https://www.youtube.com/playlist?list=PLeZrVF6SXmsoHSnAgrDDzL0W5j8KevFIm) series of videos.
>[!VIDEO https://www.youtube.com/embed/bO8V9eLfQHY]
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
For most subscriptions, you can download your invoice from the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade) or have it sent in email.
-If you're an Azure customer with a direct Enterprise Agreement (EA customer), you can download your organization's invoices using the information at [Download or view your Azure billing invoice](direct-ea-azure-usage-charges-invoices.md#download-or-view-your-azure-billing-invoice). For indirect EA customers, see [Azure Enterprise enrollment invoices](ea-portal-enrollment-invoices.md).
+Azure EA customers can download their organization's invoices using the information at [Download or view your Azure billing invoice](direct-ea-azure-usage-charges-invoices.md#download-or-view-your-azure-billing-invoice).
Only certain roles have permission to get billing invoice, like the Account Administrator or Enterprise Administrator. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](manage-billing-access.md).
cost-management-billing Ea Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-azure-marketplace.md
Title: Azure Marketplace
-description: Describes how EA customers can use Azure Marketplace
+description: Describes how EA customers can use Azure Marketplace.
Previously updated : 08/26/2022 Last updated : 02/13/2024
This article explains how EA customers and partners can view marketplace charges
## Azure Marketplace for EA customers
-For direct customers, Azure Marketplace charges are visible on the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Azure Marketplace purchases and consumption are billed outside of Azure Prepayment on a quarterly or monthly cadence and in arrears. See [Manage Azure Marketplace on Azure portal](direct-ea-administration.md#enable-azure-marketplace-purchases).
+Azure Marketplace charges are visible on the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Azure Marketplace purchases and consumption are billed outside of Azure Prepayment on a quarterly or monthly cadence and in arrears. See [Manage Azure Marketplace on Azure portal](direct-ea-administration.md#enable-azure-marketplace-purchases).
-Indirect customers can find their Azure Marketplace subscriptions on the **Manage Subscriptions** page of the Azure Enterprise portal, but pricing will be hidden. Customers should contact their Licensing Solutions Provider (LSP) for information on Azure Marketplace charges.
+Customers should contact their Licensing Solutions Provider (LSP) for information on Azure Marketplace charges.
-New monthly or annually recurring Azure Marketplace purchases are billed in full during the period when Azure Marketplace items are purchased. These items will autorenew in the following period on the same day of the original purchase.
+New monthly or annually recurring Azure Marketplace purchases are billed in full during the period when Azure Marketplace items are purchased. These items autorenew in the following period on the same day of the original purchase.
-Existing, monthly recurring charges will continue to renew on the first of each calendar month. Annual charges will renew on the anniversary of the purchase date.
+Existing, monthly recurring charges continue to renew on the first of each calendar month. Annual charges renew on the anniversary of the purchase date.
Some third-party reseller services available on Azure Marketplace now consume your Enterprise Agreement (EA) Azure Prepayment balance. Previously these services were billed outside of EA Azure Prepayment and were invoiced separately. EA Azure Prepayment for these services in Azure Marketplace helps simplify customer purchase and payment management. For a complete list of services that now consume Azure Prepayment, see the [March 06, 2018 update on the Azure website](https://azure.microsoft.com/updates/azure-marketplace-third-party-reseller-services-now-use-azure-monetary-commitment/).
-### Partners
-
-> [!NOTE]
-> The Azure Marketplace price list feature in the EA portal is retired. Currently, EA customers can't get a Marketplace price sheet.
### Enabling Azure Marketplace purchases
-Enterprise administrators can disable or enable Azure Marketplace purchases for all Azure subscriptions under their enrollment. If the enterprise administrator disables purchases, and there are Azure subscriptions that already have Azure Marketplace subscriptions, those Azure Marketplace subscriptions won't be canceled or affected.
-
-Although customers can convert their direct Azure subscriptions to Azure EA by associating them to their enrollment in the Azure Enterprise portal, this action doesn't automatically convert the child subscriptions.
+Enterprise administrators can disable or enable Azure Marketplace purchases for all Azure subscriptions under their enrollment. If the enterprise administrator disables purchases, and there are Azure subscriptions that already have Azure Marketplace subscriptions, those Azure Marketplace subscriptions aren't canceled or affected.
-To enable Azure Marketplace purchases on Azure Enterprise Portal:
-
-1. Sign in to the Azure Enterprise portal as an enterprise administrator.
-1. Go to **Manage**.
-1. Under **Enrollment Detail**, select the pencil icon next to the **Azure Marketplace** line item.
-1. Toggle **Enabled/Disabled** or Free **BYOL SKUs Only** as appropriate.
-1. Select **Save**.
+Although customers can convert their direct Azure subscriptions to Azure EA by associating them to their enrollment in the Azure portal, this action doesn't automatically convert the child subscriptions.
-Direct customer can enable Azure Marketplace purchase in Azure portal:
+To enable Azure Marketplace purchase in the Azure portal:
1. Sign in to the Azure portal.
1. Navigate to **Cost Management + Billing**.
Direct customer can enable Azure Marketplace purchase in Azure portal:
### Services billed hourly for Azure EA
-The following services are billed hourly under an Enterprise Agreement instead of the monthly rate in MOSP:
+The following services are billed hourly under an Enterprise Agreement instead of the monthly rate in a Microsoft Online Services Program (MOSP) account:
- Application Delivery Network - Web Application Firewall
cost-management-billing Ea Direct Portal Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-direct-portal-get-started.md
Previously updated : 09/06/2023 Last updated : 02/13/2024
# Get started with your Enterprise Agreement billing account
+>[!NOTE]
+> On February 15, 2024, the [EA portal](https://ea.azure.com) retired. It's now read only. All EA customers and partners use Cost Management + Billing in the Azure portal to manage their enrollments.
+
This article helps direct and indirect Azure Enterprise Agreement (Azure EA) customers with their billing administration on the [Azure portal](https://portal.azure.com). Get basic information about:
- Roles used to manage the Enterprise billing account in the Azure portal
- Subscription creation
- Cost analysis in the Azure portal
-> [!NOTE]
-> The Azure Enterprise portal (EA portal) is getting deprecated. We recommend that EA customers and partners use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal or Azure Government portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
->
-> - The EA portal is retiring on November 15, 2023, for EA enrollments in the Azure commercial cloud.
> - Starting November 15, 2023, indirect EA customers and partners won't be able to manage their Azure Government EA enrollment in the EA portal. Instead, they must use the Azure Government portal.
-> - The Azure Government portal is accessed only with Azure Government credentials. For more information, see [Access your EA billing account in the Azure Government portal](../../azure-government/documentation-government-how-to-access-enterprise-agreement-billing-account.md).
- We have several videos that walk you through getting started with the Azure portal for Enterprise Agreements. Check out the series at [Enterprise Customer Billing Experience in the Azure portal](https://www.youtube.com/playlist?list=PLeZrVF6SXmsoHSnAgrDDzL0W5j8KevFIm). Here's the first video.
If you'd like to know about how Azure reservations for VM reserved instances can
[Azure Marketplace](./ea-azure-marketplace.md) explains how EA customers and partners can view marketplace charges and enable Azure Marketplace purchases.
-For explanations about the common tasks that a partner EA administrator accomplishes in the Azure EA portal, see [Azure EA portal administration for partners](ea-partner-portal-administration.md).
+For explanations about the common tasks that a partner EA administrator accomplishes in the Azure portal, see [EA billing administration for partners in the Azure portal](ea-billing-administration-partners.md).
## Next steps
- Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions and answers about getting started with the EA billing administration.
- Azure Enterprise administrators should read [Azure EA billing administration](direct-ea-administration.md) to learn about common administrative tasks.
-- If you need help with troubleshooting Azure Enterprise portal issues, see [Troubleshoot Azure Enterprise portal access](ea-portal-troubleshoot.md).
cost-management-billing Ea Partner Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-partner-portal-administration.md
- Title: Azure EA portal administration for partners
-description: Describes portal administration topics pertaining to Partners
----- Previously updated : 04/05/2023---
-# Azure EA portal administration for partners
-
-This article explains the common tasks that a partner EA administrator accomplishes in the Azure EA portal (https://ea.azure.com).
-
-## Manage partner administrators
-
-Each partner administrator within the Azure EA Portal has the ability to add or remove other partner administrators. Partner administrators are associated to the partner organizations of indirect enrollments and aren't associated directly to the enrollments.
-
-### Add a partner administrator
-
-To view a list of all enrollments that are associated to the same partner organization as the current user, select the **Enrollment** tab and select a desired enrollment box.
-
-1. Sign in as a partner administrator.
-1. Select **Manage** on the left navigation.
-1. Select the **Partner** tab.
-1. Select **+ Add Administrator** and fill in the email address, notification contact, and notification details.
-1. Select **Add**.
-
-### Remove a partner administrator
-
-To view a list of all enrollments that are associated to the same partner organization as the current user, select the **Enrollment** tab and select a desired enrollment box.
-
-1. Sign in as a partner administrator.
-1. Select **Manage** on the left navigation.
-1. Select the **Partner** tab.
-1. Under the Administrator section, select the appropriate row for the administrator you wish to remove.
-1. Select the X symbol on the right.
-1. Confirm that you want to delete.
-
-## Manage partner notifications
-
-Partner Administrators can manage the frequency that they receive usage notifications for their enrollments. They automatically receive weekly notifications of their unbilled balance. They can change the frequency of individual notifications to monthly, weekly, daily, or turn them off completely.
-
-If a notification isn't received by a user, verify that the user's notification settings are correct with the following steps.
-
-1. Sign in to the Azure EA portal as a Partner Administrator.
-2. Select **Manage** and then select the **Partner** tab.
-3. View the list of administrators under the Administrator section.
-4. To edit notification preferences, hover over the appropriate administrator and select the pencil symbol.
-5. Increase the notification frequency and lifecycle notifications as needed.
-6. Add a contact if needed and select **Add**.
-7. Select **Save**.
-
-![Example showing Add Contact box ](./media/ea-partner-portal-administration/create-ea-manage-partner-notification.png)
-
-## View enrollments for partner administrators
-
-Partner administrators can see a list view of all their direct and indirect enrollments in the Azure EA Portal. A box for each enrollment displays an overview with the enrollment number, enrollment name, balance, and overage amounts.
-
-### View a List of Enrollments
-
-1. Sign in as a partner administrator.
-1. Select **Manage** on the navigation on the left side of the page.
-1. Select the **Enrollment** tab.
-1. Select the box for the enrollment.
-
-A view of all enrollments remains at the top of the page, with boxes for each enrollment. Additionally, you can navigate between enrollments by selecting the current enrollment number in the navigation on the left side of the page. A pop-out appears that lets you search enrollments or switch to a different enrollment by selecting the appropriate box.
-
-## Next steps
--- For introductory information about the Azure EA portal, see the [Get started with the Azure EA portal](ea-portal-get-started.md) article.-- If you need help with troubleshooting Azure EA portal issues, see [Troubleshoot Azure EA portal access](ea-portal-troubleshoot.md).
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-administration.md
- Title: Azure EA portal administration
-description: This article explains the common tasks that an administrator accomplishes in the Azure EA portal.
-- Previously updated : 01/04/2024------
-# Azure EA portal administration
-
-This article explains the common tasks that an administrator accomplishes in the Azure EA portal (https://ea.azure.com). The Azure EA portal is an online management portal that helps customers manage the cost of their Azure EA services. For introductory information about the Azure EA portal, see the [Get started with the Azure EA portal](ea-portal-get-started.md) article.
-
-> [!NOTE]
-> On February 15, 2024, the Azure Enterprise portal is retiring for EA enrollments in the Azure Government cloud. The Azure Enterprise portal is already retired for EA enrollments in the commercial cloud.
-> Customers and Partners should use Cost Management + Billing in the Azure portal to manage their enrollments. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
-
-## Activate your enrollment
-
-To activate your enrollment, the initial enterprise administrator signs in to the [Azure Enterprise portal](https://ea.azure.com) using their work, school, or Microsoft account.
-
-If you've been set up as the enterprise administrator, you don't need to receive the activation email. Go to [Azure Enterprise portal](https://ea.azure.com) and sign in with your work, school, or Microsoft account email address and password.
-
-If you have more than one enrollment, choose one to activate. By default, only active enrollments are shown. To view enrollment history, clear the **Active** option in the top right of the Azure Enterprise portal.
-
-Under **Enrollment**, the status shows **Active**.
-
-![Example showing an active enrollment](./media/ea-portal-administration/ea-enrollment-status.png)
-
-Only existing Azure enterprise administrators can create other enterprise administrators.
-
-### Create another enterprise administrator
-
-Use one of the following options, based on your situation.
-
-#### If you're already an enterprise administrator
-
-1. Sign in to the [Azure Enterprise portal](https://ea.azure.com).
-1. Go to **Manage** > **Enrollment Detail**.
-1. Select **+ Add Administrator** at the top right.
-
-Make sure that you have the user's email address and preferred authentication method, such as a work, school, or Microsoft account.
-
-#### If you're not an enterprise administrator
-
-If you're not an enterprise administrator, contact an enterprise administrator to request that they add you to an enrollment. The enterprise administrator uses the preceding steps to add you as an enterprise administrator. After you're added to an enrollment, you receive an activation email. After the account is registered, it's activated in about 5 to 10 minutes.
-
-#### If your enterprise administrator can't help you
-
-If your enterprise administrator can't assist you, create an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Provide the following information:
--- Enrollment number-- Email address to add, and authentication type (work, school, or Microsoft account)-- Email approval from an existing enterprise administrator-
->[!NOTE]
-> - We recommend that you have at least one active Enterprise Administrator at all times. If no active Enterprise Administrator is available, contact your partner to change the contact information on the Volume License agreement. Your partner can make changes to the customer contact information by using the Contact Information Change Request (CICR) process available in the eAgreements (VLCM) tool.
-> - Any new EA administrator account created using the CICR process is assigned read-only permissions to the enrollment in the EA portal and Azure portal. To elevate access, create an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
--
-## Create an Azure Enterprise department
-
-Enterprise administrators and department administrators use departments to organize and report on enterprise Azure services and usage by department and cost center. The enterprise administrator can:
--- Add or remove departments.-- Associate an account to a department.-- Create department administrators.-- Allow department administrators to view price and costs.-
-A department administrator can add new accounts to their departments. They can remove accounts from their departments, but not from the enrollment.
-
-To add a department:
-
-1. Sign in to the Azure Enterprise portal.
-1. In the left pane, select **Manage**.
-1. Select the **Department** tab, then select **+ Add Department**.
-1. Enter the information.
- The department name is the only required field. It must be at least three characters.
-1. When complete, select **Add**.
-
-## Add a department administrator
-
-After a department is created, the enterprise administrator can add department administrators and associate each one to a department. Department administrators can perform the following actions for their departments:
-- Create other department administrators-- View and edit department properties such as name or cost center-- Add accounts-- Remove accounts-- Download usage details-- View the monthly usage and charges ¹-
-> ¹ An enterprise administrator must grant these permissions. If you were given permission to view department monthly usage and charges, but can't see them, contact your partner.
-
-### To add a department administrator
-
-As an enterprise administrator:
-
-1. Sign in to the Azure Enterprise portal.
-1. In the left pane, select **Manage**.
-1. Select the **Department** tab and then select the department.
-1. Select **+ Add Administrator** and add the required information.
-1. For read-only access, set the **Read-Only** option to **Yes** and then select **Add**.
-
-![Example showing the Add Department Administrator dialog box](./media/ea-portal-administration/ea-create-add-department-admin.png)
-
-### To set read-only access
-
-You can grant read-only access to department administrators.
--- When you create a new department administrator, set the read-only option to **Yes**.--- To edit an existing department administrator:
- 1. Select a department, and then select the pencil symbol next to the **Department Administrator** that you want to edit.
 1. Set the read-only option to **Yes**, and then select **Save**.
-
-Enterprise administrators automatically get department administrator permissions.
-
-## Add an account
-
-The structure of accounts and subscriptions impacts how they're administered and how they appear on your invoices and reports. Examples of typical organizational structures include business divisions, functional teams, and geographic locations.
-
-To add an account:
-
-1. In the Azure Enterprise portal, select **Manage** in the left navigation area and then select an enrollment.
-1. Select the **Account** tab. On the **Account** page, select **+Add Account**.
-1. Select a department, or leave it as unassigned, and then select the desired authentication type.
-1. Enter a friendly name to identify the account in reporting.
-1. Enter the **Account Owner Email** address to associate with the new account.
-1. Confirm the email address and then select **Add**.
-
-![Example showing the list of accounts and the +Add Account option](./media/ea-portal-administration/create-ea-add-an-account.png)
-
-To add another account, select **Add Another Account**, or select **Add** at the bottom-right corner of the left toolbar.
-
-To confirm account ownership:
-
-1. Sign in to the Azure Enterprise portal.
-1. View the status.
- The status changes from **Pending** to **Active**. When Active, dates shown under the **Start/End Date** column are the start and end dates of the agreement.
-1. When the **Warning** message pops up, the account owner needs to select **Continue** to activate the account the first time they sign in to the Azure Enterprise portal.
-
-<a name='add-an-account-from-another-azure-ad-tenant'></a>
-
-## Add an account from another Microsoft Entra tenant
-
-By default, an enrollment is associated with a specific Microsoft Entra tenant. Only accounts from that tenant can be used to establish an Azure enrollment account. However, you can change the behavior to allow an account from any Microsoft Entra tenant to be linked.
-
-To add an account from any tenant:
-
-1. In the Azure Enterprise portal, select **Manage** in the left navigation area.
-1. Select the appropriate enrollment. Note the current setting for **Auth level**, if you want to restore the setting later.
-1. If not already configured, change the Auth level to **Work and School Account Cross Tenant**.
-1. Add the account using the Microsoft Entra sign-in information, as described in the previous section.
-1. Return the **Auth level** to its previous setting, or set it as **Work and School Account**.
-1. Sign in to the EA portal to verify that you can view the appropriate subscription offers so that you can then add subscriptions in the Azure portal.
-
-## Change Azure subscription or account ownership
-
-This section only applies when a subscription owner is being changed. Changing a subscription ownership doesn't require an Azure support ticket. Enterprise administrators can use the Azure Enterprise portal to transfer account ownership of selected or all subscriptions in an enrollment. They also have the option to change the subscription directory (tenant).
-
-However, an EA admin can't transfer an account from one enrollment to another enrollment. To transfer an account from one enrollment to another, a support request is required. For information about transferring an account from one enrollment to another enrollment, see [Transfer an enterprise account to a new enrollment](ea-transfers.md#transfer-an-enterprise-account-to-a-new-enrollment).
-
-Pay-as-you-go subscription administrators can also transfer account ownership of their subscriptions to an EA enrollment using this same process.
-
-When you complete a subscription or account ownership transfer, Microsoft updates the account owner.
-
-Before performing the ownership transfer, understand these Azure role-based access control (Azure RBAC) policies:
-- When performing subscription or account ownership transfers between two organizational IDs within the same tenant, Azure RBAC policies, existing service administrator, and co-administrator roles are preserved.-- Cross-tenant subscription or account ownership transfers result in losing your Azure RBAC policies and role assignments.-- Policies and administrator roles don't transfer across different directories. Service administrators are updated to the owner of the destination account.-- To avoid loss of Azure RBAC policies and role assignments when transferring subscriptions between tenants, ensure that the **Move the subscriptions to the recipient's Microsoft Entra tenant** checkbox remains **unchecked**. This retains the services, Azure roles, and policies on the current Microsoft Entra tenant and only transfers the billing ownership for the account.
- :::image type="content" source="./media/ea-portal-administration/unselected-checkbox-move-subscriptions-to-recipients-tenant.png" alt-text="Image showing unselected checkbox for moving subscriptions to Microsoft Entra tenant" lightbox="./media/ea-portal-administration/unselected-checkbox-move-subscriptions-to-recipients-tenant.png" :::
--
-Before changing an account owner:
-
-1. In the Azure Enterprise portal, view the **Account** tab and identify the source account. The source account must be active.
-1. Identify the destination account and make sure it's active.
-
-To transfer account ownership for all subscriptions:
-
-1. Sign in to the Azure Enterprise portal.
-1. In the left navigation area, select **Manage**.
-1. Select the **Account** tab and hover over an account.
-1. Select the change account owner icon on the right. The icon resembles a person.
- ![Image showing the Change Account Owner symbol](./media/ea-portal-administration/create-ea-create-sub-transfer-account-ownership-of-sub.png)
-1. Choose the destination account to transfer to and then select **Next**.
-1. If you want to transfer the account ownership across Microsoft Entra tenants, select the **Move the subscriptions to the recipient's Microsoft Entra tenant** checkbox.
- :::image type="content" source="./media/ea-portal-administration/selected-checkbox-move-subscriptions-to-recipients-tenant.png" alt-text="Image showing selected checkbox for moving subscriptions to Microsoft Entra tenant" lightbox="./media/ea-portal-administration/selected-checkbox-move-subscriptions-to-recipients-tenant.png" :::
-1. Confirm the transfer and select **Submit**.
-
-To transfer account ownership for a single subscription:
-
-1. Sign in to the Azure Enterprise portal.
-1. In the left navigation area, select **Manage**.
-1. Select the **Account** tab and hover over an account.
-1. Select the transfer subscriptions icon on the right. The icon resembles a page.
- ![Image showing the Transfer Subscriptions symbol](./media/ea-portal-administration/ea-transfer-subscriptions.png)
-1. Choose the destination account to transfer the subscription and then select **Next**.
-1. If you want to transfer the subscription ownership across Microsoft Entra tenants, select the **Move the subscriptions to the recipient's Microsoft Entra tenant** checkbox.
- :::image type="content" source="./media/ea-portal-administration/selected-checkbox-move-subscriptions-to-recipients-tenant.png" alt-text="Image showing selected checkbox for moving subscriptions to Microsoft Entra tenant" lightbox="./media/ea-portal-administration/selected-checkbox-move-subscriptions-to-recipients-tenant.png" :::
-1. Confirm the transfer and then select **Submit**.
--
-## Associate an account to a department
-
-Enterprise Administrators can associate existing accounts with departments under the enrollment.
-
-### To associate an account to a department
-
-1. Sign in to the Azure EA Portal as an enterprise administrator.
-1. Select **Manage** on the left navigation.
-1. Select **Account**.
-1. Hover over the row with the account and select the pencil icon on the right.
-1. Select the department from the drop-down menu.
-1. Select **Save**.
-
-## Associate an existing account with your Pay-As-You-Go subscription
-
-If you already have an existing Microsoft Azure account on the Azure portal, enter the associated school, work, or Microsoft account in order to associate it with your Enterprise Agreement enrollment.
-
-### Associate an existing account
-
-1. In the Azure Enterprise portal, select **Manage**.
-1. Select the **Account** tab.
-1. Select **+Add an account**.
-1. Enter the work, school, or Microsoft account associated with the existing Azure account.
-1. Confirm the account associated with the existing Azure account.
-1. Provide a name you would like to use to identify this account in reporting.
-1. Select **Add**.
-1. To add an additional account, you can select the **+Add an Account** option again or return to the homepage by selecting the **Admin** button.
-1. If you view the **Account** page, the newly added account will appear in a **Pending** status.
-
-### Confirm account ownership
-
-1. Sign into the email account associated with the work, school, or Microsoft account you provided.
-1. Open the email notification titled _"Invitation to Activate your Account on the Microsoft Azure Service from Microsoft Volume Licensing"_.
-1. Select the **Log into the Microsoft Azure Enterprise Portal** link in the invitation.
-1. Select **Sign in**.
-1. Enter your work, school, or Microsoft account and password to sign in and confirm account ownership.
-
-### Azure Marketplace
-
-Although most subscriptions can convert from the Pay-as-You-Go environment to Azure Enterprise Agreement, Azure Marketplace services do not. In order to have a single view of all subscriptions and charges, we recommend you add the Azure Marketplace services to the Azure Enterprise portal.
-
-1. Sign in to the Azure Enterprise portal.
-1. Select **Manage** on the left navigation.
-1. Select the **Enrollment** tab.
-1. View the **Enrollment Detail** section.
-1. To the right of the Azure Marketplace field, select the pencil icon to enable it. Select **Save**.
-
-The account owner can now purchase any Azure Marketplace services that were previously owned in the Pay-As-You-Go subscription.
-
-After the new Azure Marketplace subscriptions are activated under your Azure EA enrollment, cancel the Azure Marketplace services that were created in the Pay-As-You-Go environment. This step is critical so that your Azure Marketplace subscriptions do not fall into a bad state when your Pay-As-You-Go payment instrument expires.
-
-### MSDN
-
-MSDN subscriptions are automatically converted to MSDN Dev/Test under the Azure EA offer and will lose any existing monetary credit.
-
-### Azure in Open
-
-If you associate an Azure in Open subscription with an Enterprise Agreement, you forfeit any unconsumed Azure in Open credits. Thus, we recommend that you consume all credit on an Azure in Open subscription before you add the account to your Enterprise Agreement.
-
-### Accounts with support subscriptions
-
-If your Enterprise Agreement doesn't have a support subscription and you add an existing account with a support subscription to the Azure Enterprise portal, your MOSA support subscription won't automatically transfer. You'll need to repurchase a support subscription in Azure EA during the grace period, which ends at the end of the following month.
-
-## Department spending quotas
-
-EA customers can set or change spending quotas for each department under an enrollment. The spending quota amount is set for the current Prepayment term. At the end of the current Prepayment term, the system will extend the existing spending quota to the next Prepayment term unless the values are updated.
-
-The department administrator can view the spending quota, but only the enterprise administrator can update the quota amount. The enterprise administrator and the department administrator receive notifications once the quota reaches 50%, 75%, 90%, and 100%.
-
-### Enterprise administrator to set the quota:
-
- 1. Open the Azure EA Portal.
- 1. Select **Manage** on the left navigation.
- 1. Select the **Department** Tab.
- 1. Select the Department.
- 1. Select the pencil symbol on the Department Details section, or select the **+ Add Department** symbol to add a spending quota along with a new department.
- 1. Under Department Details, enter a spending quota amount in the enrollment's currency in the Spending Quota $ box (must be greater than 0).
- - The Department Name and Cost Center can also be edited at this time.
- 1. Select **Save**.
-
-The department spending quota will now be visible in the Department List view under the Department tab. At the end of the current Prepayment, the Azure EA Portal will maintain the spending quotas for the next Prepayment term.
-
-The department quota amount is independent of the current Azure Prepayment, and the quota amount and alerts apply only to first-party usage. The department spending quota is for informational purposes only and doesn't enforce spending limits.
-
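To picture the notification thresholds mentioned earlier in this section, here's a small illustrative sketch in Python. The 50/75/90/100% thresholds come from this article; the check itself is only an example, not how the service is implemented.

```python
def crossed_quota_thresholds(spend, quota, thresholds=(50, 75, 90, 100)):
    """Return the alert thresholds (as percentages of quota) that current spend has reached."""
    if quota <= 0:
        return []
    used_percent = spend / quota * 100
    return [t for t in thresholds if used_percent >= t]

# Example: USD 8,000 of usage against a USD 10,000 department spending quota.
print(crossed_quota_thresholds(8_000, 10_000))  # [50, 75]
```
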
-### Department administrator to view the quota:
-
-1. Open the Azure EA Portal.
-1. Select **Manage** on the left navigation.
-1. Select the **Department** tab and view the Department List view with spending quotas.
-
-If you're an indirect customer, cost features must be enabled by your channel partner.
-
-## Enterprise user roles
-
-The Azure EA portal helps you to administer your Azure EA costs and usage. There are three main roles in the Azure EA portal:
--- EA admin-- Department administrator-- Account owner-
-Each role has a different level of access and authority.
-
-For more information about user roles, see [Enterprise user roles](./understand-ea-roles.md#enterprise-user-roles).
-
-## Add an Azure EA account
-
-The Azure EA account is an organizational unit in the Azure EA portal. It's used to administer subscriptions and it's also used for reporting. To access and use Azure services, you need to create an account or have one created for you.
-
-For more information about Azure accounts, see [Add an account](#add-an-account).
-
-## Enterprise Dev/Test Offer
-
-As an Azure enterprise administrator, you can enable account owners in your organization to create subscriptions based on the EA Dev/Test offer. To do so, select the Dev/Test box for the account owner in the Azure EA Portal.
-
-Once you've checked the Dev/Test box, let the account owner know so that they can set up the EA Dev/Test subscriptions needed for their teams of Dev/Test subscribers.
-
-The offer enables active Visual Studio subscribers to run development and testing workloads on Azure at special Dev/Test rates. It provides access to the full gallery of Dev/Test images including Windows 8.1 and Windows 10.
-
-### To set up the Enterprise Dev/Test offer:
-
-1. Sign in as the enterprise administrator.
-1. Select **Manage** on the left navigation.
-1. Select the **Account** tab.
-1. Select the row for the account where you would like to enable Dev/Test access.
-1. Select the pencil symbol to the right of the row.
-1. Select the Dev/Test checkbox.
-1. Select **Save**.
-
-When a user is added as an account owner through the Azure EA Portal, any Azure subscriptions associated with the account owner that are based on either the PAYG Dev/Test offer or the monthly credit offers for Visual Studio subscribers will be converted to the EA Dev/Test offer. Subscriptions based on other offer types, such as PAYG, associated with the Account Owner will be converted to Microsoft Azure Enterprise offers.
-
-The Dev/Test Offer isn't applicable to Azure Gov customers at this time.
-
-## Create a subscription
-
-Account owners can view and manage subscriptions. You can use subscriptions to give teams in your organization access to development environments and projects. For example: test, production, development, and staging.
-
-When you create different subscriptions for each application environment, you help secure each environment.
--- You can also assign a different service administrator account for each subscription.-- You can associate subscriptions with any number of services.-- The account owner creates subscriptions and assigns a service administrator account to each subscription in their account.-
-### Add a subscription
-
-Use the following information to add a subscription.
-
-The first time you add a subscription to your account, you're asked to accept the Microsoft Online Subscription Agreement (MOSA) and a rate plan. Although they aren't applicable to Enterprise Agreement customers, the MOSA and the rate plan are required to create your subscription. Your Microsoft Azure Enterprise Agreement Enrollment Amendment supersedes the above items and your contractual relationship doesn't change. When prompted, select the box that indicates you accept the terms.
-
-_Microsoft Azure Enterprise_ is the default name when a subscription is created. You can change the name to differentiate it from the other subscriptions in your enrollment, and to ensure that it's recognizable in reports at the enterprise level.
-
-To add a subscription:
-
-1. In the Azure Enterprise portal, sign in to the account.
-1. Select the **Admin** tab and then select **Subscription** at the top of the page.
-1. Verify that you're signed in as the account owner of the account.
-1. Select **+Add Subscription** and then select **Purchase**.
-
- The first time you add a subscription to an account, you must provide your contact information. When you add additional subscriptions, your contact information is added for you.
-
-1. Select **Subscriptions** and then select the subscription you created.
-1. Select **Edit Subscription Details**.
-1. Edit the **Subscription Name** and the **Service Administrator** and then select the check mark.
-
- The subscription name appears on reports. It's the name of the project associated with the subscription in the development portal.
-
-New subscriptions can take up to 24 hours to appear in the subscriptions list. After you've created a subscription, you can:
--- [Edit subscription details](https://portal.azure.com)-- [Manage subscription services](https://portal.azure.com/#home)-
-## Delete subscription
-
-To delete a subscription where you're the account owner:
-
-1. Sign in to the Azure portal with the credentials associated to your account.
-1. On the Hub menu, select **Subscriptions**.
-1. In the subscriptions tab in the upper left corner of the page, select the subscription you want to cancel and select **Cancel Sub** to launch the cancel tab.
-1. Enter the subscription name, choose a cancellation reason, and then select the **Cancel Sub** button.
-
-Only account administrators can cancel subscriptions.
-
-For more information, see [What happens after I cancel my subscription?](cancel-azure-subscription.md#what-happens-after-subscription-cancellation).
-
-## Delete an account
-
-Account removal can only be completed for active accounts with no active subscriptions.
-
-1. In the Enterprise portal, select **Manage** on the left navigation.
-1. Select the **Account** tab.
-1. From the Accounts table, select the Account you would like to delete.
-1. Select the X symbol at the right of the Account row.
-1. Once there are no active subscriptions under the account, select **Yes** under the Account row to confirm the Account removal.
-
-## Update notification settings
-
-Enterprise Administrators are automatically enrolled to receive usage notifications associated with their enrollment. Each Enterprise Administrator can change the interval of the individual notifications or turn them off completely.
-
-Notification contacts are shown in the Azure EA portal in the **Notification Contact** section. Managing your notification contacts makes sure that the right people in your organization get Azure EA notifications.
-
-To view current notification settings:
-
-1. In the Azure EA portal, navigate to **Manage** > **Notification Contact**.
-2. Email Address – The email address associated with the Enterprise Administrator's Microsoft Account, Work, or School Account that receives the notifications.
-3. Unbilled Balance Notification Frequency – How often notifications are sent to each individual Enterprise Administrator.
-
-To add a contact:
-
-1. Select **+Add Contact**.
-2. Enter the email address and then confirm it.
-3. Select **Save**.
-
-The new notification contact is displayed in the **Notification Contact** section. To change the notification frequency, select the notification contact and select the pencil symbol to the right of the selected row. Set the frequency to **daily**, **weekly**, **monthly**, or **none**.
-
-You can suppress _approaching coverage period end date_ and _disable and de-provision date approaching_ lifecycle notifications. Disabling lifecycle notifications suppresses notifications about the coverage period and agreement end date.
-
-## Azure sponsorship offer
-
-The Azure sponsorship offer is a limited sponsored Microsoft Azure account. It's available by e-mail invitation only to limited customers selected by Microsoft. If you're entitled to the Microsoft Azure sponsorship offer, you'll receive an e-mail invitation to your account ID.
-
-If you need assistance, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) in the Azure portal.
-
-## Conversion to work or school account authentication
-
-Azure Enterprise users can convert from a Microsoft Account (MSA or Live ID) to a Work or School Account (which uses Microsoft Entra ID) authentication type.
-
-To begin:
-
-1. Add the work or school account to the Azure EA Portal in the role(s) needed.
-1. If you get errors, the account may not be valid in Microsoft Entra ID. Azure uses User Principal Name (UPN), which isn't always identical to the email address.
-1. Authenticate to the Azure EA portal using the work or school account.
-
-### To convert subscriptions from Microsoft accounts to work or school accounts:
-
-1. Sign in to the management portal using the Microsoft account that owns the subscriptions.
-1. Use account ownership transfer to move to the new account.
-1. Now the Microsoft account should be free from any active subscriptions and can be deleted.
-1. Any deleted account will remain in view in the portal in an inactive status for historic billing reasons. You can filter it out of the view by selecting a check box to show only active accounts.
-
-## Azure EA term glossary
--- **Account**: An organizational unit on the Azure Enterprise portal. It is used to administer subscriptions and for reporting.-- **Account owner**: The person who manages subscriptions and service administrators on Azure. They can view usage data on this account and its associated subscriptions.-- **Amendment subscription**: A one-year, or coterminous subscription under the enrollment amendment.-- **Prepayment**: Prepayment of an annual monetary amount for Azure services at a discounted Prepayment rate for usage against this prepayment.-- **Department administrator**: The person who manages departments, creates new accounts and account owners, views usage details for the departments they manage, and can view costs when granted permissions.-- **Enrollment number**: A unique identifier supplied by Microsoft to identify the specific enrollment associated with an Enterprise Agreement.-- **Enterprise administrator**: The person who manages departments, department owners, accounts, and account owners on Azure. They have the ability to manage enterprise administrators as well as view usage data, billed quantities, and unbilled charges across all accounts and subscriptions associated with the enterprise enrollment.-- **Enterprise agreement**: A Microsoft licensing agreement for customers with centralized purchasing who want to standardize their entire organization on Microsoft technology and maintain an information technology infrastructure on a standard of Microsoft software.-- **Enterprise agreement enrollment**: An enrollment in the Enterprise Agreement program providing Microsoft products in volume at discounted rates.-- **Microsoft account**: A web-based service that enables participating sites to authenticate a user with a single set of credentials.-- **Microsoft Azure Enterprise Enrollment Amendment (enrollment amendment)**: An amendment signed by an enterprise, which provides them access to Azure as part of their enterprise enrollment.-- **Azure Enterprise portal**: The portal used by our enterprise customers to manage their Azure accounts and their related subscriptions.-- **Resource quantity consumed**: The quantity of an individual Azure service that was used in a month.-- **Service administrator**: The person who accesses and manages subscriptions and development projects on the Azure Enterprise portal.-- **Subscription**: Represents an Azure Enterprise portal subscription and is a container of Azure services managed by the same service administrator.-- **Work or school account**: For organizations that have set up Microsoft Entra ID with federation to the cloud and all accounts are on a single tenant.-
-### Enrollment statuses
--- **New**: This status is assigned to an enrollment that was created within 24 hours and will be updated to a Pending status within 24 hours.-- **Pending**: The enrollment administrator needs to sign in to the Azure Enterprise portal. Once signed in, the enrollment will switch to an Active status.-- **Active**: The enrollment is Active and accounts and subscriptions can be created in the Azure Enterprise portal. The enrollment will remain active until the Enterprise Agreement end date.-- **Indefinite extended term**: An indefinite extended term takes place after the Enterprise Agreement end date has passed. It enables Azure EA customers who are opted in to the extended term to continue to use Azure services indefinitely at the end of their Enterprise Agreement.-
- Before the Azure EA enrollment reaches the Enterprise Agreement end date, the enrollment administrator should decide which of the following options to take:
-
- - Renew the enrollment by adding additional Azure Prepayment.
- - Transfer to a new enrollment.
- - Migrate to the Microsoft Online Subscription Program (MOSP).
- - Confirm disablement of all services associated with the enrollment.
-- **Expired**: The Azure EA customer is opted out of the extended term, and the Azure EA enrollment has reached the Enterprise Agreement end date. The enrollment will expire, and all associated services will be disabled.-- **Transferred**: Enrollments where all associated accounts and services have been transferred to a new enrollment appear with a transferred status.
- >[!NOTE]
- > Enrollments don't automatically transfer if a new enrollment number is generated at renewal. You must include your prior enrollment number in your renewal paperwork to facilitate an automatic transfer.
-
-## Next steps
--- Read about how [virtual machine reservations](ea-portal-vm-reservations.md) can help save you money.-- If you need help with troubleshooting Azure EA portal issues, see [Troubleshoot Azure EA portal access](ea-portal-troubleshoot.md).-- Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions about EA subscription ownership.
cost-management-billing Ea Portal Agreements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-agreements.md
Title: Azure EA agreements and amendments
-description: This article explains how Azure EA agreements and amendments affect your Azure EA portal use.
+description: The article describes how Azure EA agreements and amendments might affect your access, use, and payments for Azure services.
Previously updated : 01/04/2024 Last updated : 02/13/2024
The Licensing Solution Partners (LSP) provides a single percentage number in the
- Customer signs an EA with Azure Prepayment of USD 100,000. - The meter rate for Service A is USD 10 / Hour.-- LSP sets markup percentage of 10% on the EA Portal.
+- LSP sets a markup percentage of 10% in the Azure portal.
+- The following example shows how the customer sees the commercial information (a short calculation sketch follows this list): - Monetary Balance: USD 110,000. - Meter rate for Service A: USD 11 / Hour.
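As a rough sketch of the arithmetic in the example above, a single markup percentage is applied to both the monetary balance and the meter rate. The figures are the example values from this section, not actual rates.

```python
def apply_partner_markup(microsoft_price, markup_percent):
    """Return the customer-facing amount after applying a uniform partner markup."""
    return microsoft_price * (1 + markup_percent / 100)

markup = 10  # markup percentage from the example

monetary_balance = apply_partner_markup(100_000, markup)  # 110000.0 USD
service_a_rate = apply_partner_markup(10, markup)         # 11.0 USD per hour

print(f"Monetary Balance: USD {monetary_balance:,.0f}")
print(f"Meter rate for Service A: USD {service_a_rate:.2f} / Hour")
```
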
You can add price markup on Azure portal with the following steps:
1. Accept the disclaimer and select **Publish** to publish the markup. 1. The customer can now view credits and charges details.
-You can add price markup in the Azure Enterprise portal with the following steps:
-
-#### First step - Add price markup
-
-1. In the Enterprise Portal, select **Reports** in the left navigation menu.
-1. Under _Usage Summary_, select the blue **Markup** link.
-1. Enter the markup percentage (between 0 to 100) and select **Preview**.
-
-#### Second step - Review and validate
-
-Review the markup price in the _Usage Summary_ for the Prepayment term in the customer view. The Microsoft price is still available in the partner view. The views can be toggled using the partner markup **People** toggle at the top right.
-
-1. Review the prices in the price sheet.
-1. Changes can be made before publishing by selecting **Edit** on _View Usage Summary > Customer View_ tab.
-
-Both the service prices and the Prepayment balances get marked up by the same percentages. If you have different percentages for monetary balance and meter rates, or different percentages for different services, then don't use this feature.
-
-#### Third step - Publish
-
-After pricing is reviewed and validated, select **Publish**.
-
-Pricing with markup is available to enterprise administrators immediately after selecting publish. Edits can't be made to markup. You must disable markup and begin from the first step.
- ### Which enrollments have a markup enabled? To check if an enrollment has a markup published, select **Manage** in the left navigation menu, then select the **Enrollment** tab. Select the enrollment box to check, and view the markup status under _Enrollment Detail_. It displays the current status of the markup feature for that EA as Disabled, Preview, or Published.
-To check markup status of an enrollment on Azure portal, follow the below steps:
+To check the markup status of an enrollment in the Azure portal, follow these steps:
1. In the Azure portal, sign in as a partner administrator. 1. Search for **Cost Management + Billing** and select it.
To check markup status of an enrollment on Azure portal, follow the below steps:
Once partner markup is published, the indirect customer has access to the balance and charge CSV monthly files and usage detail files. The usage detail files include the resource rate and extended cost.
-### How can I as partner apply markup to existing EA customer(s) that was earlier with another partner?
-Partners can use the markup feature (on Azure EA portal or Azure portal) after a Change of Channel Partner is processed; there's no need to wait for the next anniversary term.
+### How can I, as a partner, apply markup to existing EA customers who were previously with another partner?
+Partners can use the markup feature in the Azure portal after a Change of Channel Partner is processed. There's no need to wait for the next anniversary term.
## Resource Prepayment and requesting quota increases
Quotas described previously aren't Service Prepayment. You can determine the num
You can request a quota increase at any time by submitting an [online request](https://portal.azure.com/). To process your request, provide the following information: -- The Microsoft account or work or school account associated with the account owner of your subscription. It's the email address used to sign in to the Microsoft Azure portal to manage your subscription(s). Verify that the account is associated with an EA enrollment.-- The resource(s) and amount for which you desire a quota increase.
+- The Microsoft account or work or school account associated with the account owner of your subscription. It's the email address used to sign in to the Microsoft Azure portal to manage your subscriptions. Verify that the account is associated with an EA enrollment.
+- The resources and amount for which you desire a quota increase.
- The Azure Developer Portal Subscription ID associated with your service. - For information on how to obtain your subscription ID, [contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
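Before submitting the request, it can be useful to check how much of the current quota is already consumed. A minimal sketch, assuming the `azure-identity` and `azure-mgmt-compute` packages are installed and that the subscription ID and region placeholders are replaced with your own values:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"  # placeholder: the subscription tied to your EA enrollment
location = "eastus"                         # placeholder: the region where you need more quota

client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# List current compute usage against the existing limits for that region.
for usage in client.usage.list(location):
    print(f"{usage.name.localized_value}: {usage.current_value}/{usage.limit}")
```
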
Plan SKUs offer the ability to purchase a suite of integrated services together
One example would be the Operations Management Suite (OMS) subscription. OMS offers a simple way to access a full set of cloud-based management capabilities. It includes analytics, configuration, automation, security, backup, and disaster recovery. OMS subscriptions include rights to System Center components to provide a complete solution for hybrid cloud environments.
-Enterprise Administrators can assign Account Owners to prepare previously purchased Plan SKUs in the Enterprise portal by following these steps:
-
-### View the price sheet to check included quantity
-
-1. Sign in as an Enterprise Administrator.
-1. Select **Reports** on the left navigation.
-1. Select the **Price Sheet** tab.
-1. Select the **Download** symbol in the top-right corner of the page.
-1. Find the corresponding Plan SKU part numbers with filter on column **Included Quantity** and select values greater than 0 (zero).
-
-EA customers can view the price sheet in the Azure portal. See [view price sheet in Azure portal](ea-pricing.md#download-pricing-for-an-enterprise-agreement).
-
-### Existing/New account owners to create new subscriptions
-
-**Step One: Sign in to account**
-1. From the Azure EA Portal, select the **Manage** tab and navigate to **Subscription** on the top menu.
-1. Verify that you're logged in as the account owner of this account.
-1. Select **+Add Subscription**.
-1. Select **Purchase**.
-
-The first time you add a subscription to an account, you need to provide your contact information. When you add more subscriptions later, your contact information is populated for you.
-
-The first time you add a subscription to your account, you're asked to accept the MOSA agreement and a Rate Plan. These sections aren't applicable to Enterprise Agreement customers, but are currently necessary to create your subscription. Your Microsoft Azure Enterprise Agreement Enrollment Amendment supersedes the above items and your contractual relationship doesn't change. Select the box indicating you accept the terms.
-
-**Step Two: Update subscription name**
-
-All new subscriptions are added with the default *Microsoft Azure Enterprise* subscription name. It's important to update the subscription name to differentiate it from the other subscriptions within your Enterprise Enrollment and ensure that it's recognizable on reports at the enterprise level.
-
-Select **Subscriptions**, select the subscription you created, and then select **Edit Subscription Details.**
-
-Update the subscription name and service administrator and select the checkmark. The subscription name appears on reports and it's also the name of the project associated with the subscription on the development portal.
-
-New subscriptions might take up to 24 hours to propagate in the subscriptions list.
-
-Only account owners can view and manage subscriptions.
-
-Direct customer can create and edit subscription in Azure portal. See [manage subscription in Azure portal](direct-ea-administration.md#create-a-subscription).
-
-### Troubleshooting
-
-**Account owner showing in pending status**
-
-When new Account Owners (AO) are added to the enrollment for the first time, they always have `pending` under status. When you receive the activation welcome email, the AO can sign in to activate their account. This activation updates their account status from `pending` to `active`.
-
-**Usages being charged after Plan SKUs are purchased**
-
-This scenario occurs when the customer deployed services under the wrong enrollment number or selected the wrong services.
-
-To validate if you're deploying under the right enrollment, you can check your included units information via the price sheet. Sign in as an Enterprise Administrator and select **Reports** on the left navigation and select **Price Sheet** tab. Select the Download symbol in the top-right corner and find the corresponding Plan SKU part numbers with filter on column **Included Quantity** and select values greater than "0."
-
-Ensure that your OMS plan is showing on the price sheet under included units. If there are no included units for OMS plan on your enrollment, your OMS plan might be under another enrollment. Contact Azure Enterprise Portal Support at [https://aka.ms/AzureEntSupport](https://aka.ms/AzureEntSupport).
-
-If the included units for the services on the price sheet don't match with what you deployed, you may have deployed services that aren't covered by the plan. For example, Operational Insights Premium Data Analyzed vs. Operational Insights Standard Data Analyzed. In this example, contact Azure Enterprise Portal Support at [https://aka.ms/AzureEntSupport](https://aka.ms/AzureEntSupport) so we can assist you further.
-
-**Provisioned Plan SKU services on wrong enrollment**
+### View the price sheet and check included quantity
-If you have multiple enrollments and you deployed services under the wrong enrollment number, which doesn't have an OMS plan, contact Azure Enterprise Portal Support at [https://aka.ms/AzureEntSupport](https://aka.ms/AzureEntSupport).
+You can view your price sheet in the Azure portal. For more information, see [Download pricing for an enterprise agreement](ea-pricing.md#download-pricing-for-an-enterprise-agreement).
## Next steps -- To start using the Azure EA portal, see [Get started with the Azure EA portal](ea-portal-get-started.md).-- Azure EA portal administrators should read [Azure EA portal administration](ea-portal-administration.md) to learn about common administrative tasks.
+- [Get started with your Enterprise Agreement billing account](ea-direct-portal-get-started.md).
cost-management-billing Ea Portal Enrollment Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-enrollment-invoices.md
- Title: Azure Enterprise enrollment invoices
-description: This article explains how to manage and act on your Azure Enterprise invoice.
-- Previously updated : 07/29/2023------
-# Azure Enterprise enrollment invoices
-
-This article explains how to manage and act on your Azure Enterprise Agreement (Azure EA) invoice. Your invoice is a representation of your bill. Review it for accuracy. You should also get familiar with other tasks that might be needed to manage your invoice.
-
-> [!NOTE]
-> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
->
-> As of February 20, 2023, indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
->
-> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
-
-## View usage summary and download reports
-
-Enterprise administrators can view a summary of their usage data, Azure Prepayment consumed, and charges associated with additional usage in the Azure Enterprise portal. Charges are presented at the summary level across all accounts and subscriptions.
-
-To view detailed usage for specific accounts, download the usage detail report:
-
-1. Sign in to the Azure Enterprise portal.
-1. Select **Reports**.
-1. Select the **Download Usage** tab.
-1. In the list of reports, select **Download** for the monthly report that you want to get.
-
- > [!NOTE]
- > The usage detail report doesn't include any applicable taxes.
- >
- > There may be a latency of up to eight hours from the time usage was incurred to when it's reflected on the report.
-
-To view the usage summary reports and graphs:
-
-1. Sign in to the Azure Enterprise portal.
-1. Select a Prepayment term.
- To change the date range for **Usage Summary**, you can toggle from **M** (Monthly) to **C** (Custom) on the top right of the page and then enter custom start and end dates.
- ![Create and view usage summary and download reports in custom view](./media/ea-portal-enrollment-invoices/create-ea-view-usage-summary-and-download-reports-custom-view.png)
-1. To view additional details, you can select a period or month on the graph.
- - The graph shows month-over-month usage with a breakdown of utilized usage, service overage, charges billed separately, and Azure Marketplace charges.
- - For the selected month, you can use the fields below the graph to filter by departments, accounts, and subscriptions.
- - You can toggle between **Charge by Services** and **Charge by Hierarchy**.
- - View details from **Azure Service**, **Charges Billed Separately**, and **Azure Marketplace** by expanding the relevant sections.
-
-View this video to see how to view usage:
-
-> [!VIDEO https://www.youtube.com/embed/Cv2IZ9QCn9E]
-
-### Download CSV reports
-
-Enterprise administrators use the Monthly Report Download page to download the following reports as CSV files:
--- Balance and charge-- Usage detail-- Azure Marketplace charges-- Price sheet-
-To download reports:
-
-1. In the Azure Enterprise portal, select **Reports**.
-2. Select **Download Usage** at the top of the page.
-3. Select **Download** next to the month's report.
-
- > [!NOTE]
- > There may be a latency of up to 72 hours between the incurred usage date and when usage is shown in the reports.
- >
- > Users downloading CSV files with Safari to Excel may experience formatting errors. To avoid errors, open the file using a text editor.
-
-![Example showing Download Usage page](./media/ea-portal-enrollment-invoices/create-ea-download-csv-reports.png)
-
-View this video to see how to download usage information:
-
-> [!VIDEO https://www.youtube.com/embed/eY797htT1qg]
-
-### Advanced report download
-
-You can use the advance report download to get reports that cover specific date ranges or accounts. The output file is in the CSV format to accommodate large record sets.
-
-1. In the Azure Enterprise portal, select **Advanced Report Download**.
-1. Select an appropriate date range and the appropriate accounts.
-1. Select **Request Usage Data**.
-1. Select the **Refresh** button until the report status updates to **Download**.
-1. Download the report.
-
-### Download usage reports and billing information for a prior enrollment
-
-You can download usage reports and billing information for a prior enrollment after an enrollment transfer has taken place. Historical reporting is available in both the Azure Enterprise portal and cost management.
-
-The Azure Enterprise portal filters inactive enrollments out of view. You'll need to uncheck the **Active** box to view inactive transferred enrollments.
-
-![Unchecking the active box allows user to see inactive enrollments](./media/ea-portal-enrollment-invoices/unchecked-active-box.png)
-
-## PO number management
-
-PO number management functionality in the EA portal is being deprecated and is currently read-only there. Instead, an EA administrator can use the Azure portal to manage PO numbers. For more information, see [Update a PO number](direct-ea-azure-usage-charges-invoices.md#update-a-po-number-for-an-upcoming-overage-invoice).
-
-## Azure enterprise billing frequency
-
-Microsoft bills annually at the enrollment effective date for any Prepayment purchases of the Microsoft Azure services. For any usage exceeding the Prepayment amounts, Microsoft bills in arrears.
--- Prepayment fees are quoted based on a monthly rate and billed annually in advance.-- Overage fees are calculated each month and billed in arrears at the end of your billing period.-
-### Billing intervals
-
-Your billing interval depends on how you choose to make your Prepayment purchases. Your annual Prepayment is coterminous with either:
--- Your enrollment anniversary date-- The effective date of your one-year Amendment Subscription.-
-The date you receive your overage invoice depends on your enrollment start date and set-up:
--- **Direct enrollments with a start date before May 1, 2018**:
- - If you're on a direct Enterprise Agreement (EA), you're on an annual billing cycle for Azure services, excluding Azure Marketplace services. Your billing cycle is based on the anniversary date: the date when your agreement became effective.
- - If you surpass 150% of your Azure EA Prepayment threshold, you'll automatically be converted to a quarterly billing cycle that is based on your anniversary date. You'll also receive an Azure service overage invoice.
- - If you don't surpass 150% of your Azure Prepayment threshold, your enrollment will remain on an annual billing cycle. The overage invoice will be received at the end of the Prepayment year.
--- **Direct enrollments with a start date after May 1, 2018**:
- - Your Azure consumption and charges billed separately invoices are on a monthly billing cycle.
- - Any charges not covered by your Azure Prepayment are due as an overage payment.
--- **Indirect enrollments with an enrollment that started before May 1, 2018**:-
- If you're an indirect Enterprise Agreement (EA) customer with a start date before May 1, 2018, you're set up on a quarterly billing cycle. The channel partner (CP) invoices you directly.
--- **Indirect enrollments with a start date after May 1, 2018**:-
- You're on a monthly billing cycle.
-
-### Increase your Azure Prepayment
-
-You can increase your Prepayment at any time. You'll be billed for the number of months remaining in that year's Prepayment period. For example, if you sign up for a one-year Amendment Subscription and then increase your Prepayment during month six, you'll be invoiced for the remaining six months of that term. Your Prepayment quantities will then be updated for the last six months of your Prepayment term. These new quantities will be used for determining any overage charges.
-
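As a worked version of the example above (illustrative only; Microsoft produces the actual invoice), a mid-term increase is billed for the months left in the Prepayment term:

```python
def prepayment_increase_invoice(monthly_rate, months_elapsed, term_months=12):
    """Invoice amount for a mid-term Prepayment increase: the monthly rate times the months remaining."""
    months_remaining = term_months - months_elapsed
    return monthly_rate * months_remaining

# Example: an increase quoted at USD 10,000 per month, made during month six of a
# one-year term, is invoiced for the remaining six months.
print(prepayment_increase_invoice(monthly_rate=10_000, months_elapsed=6))  # 60000
```
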
-### Overage
-
-For overage, you're billed for the usage or reservations that exceed your Prepayment during the billing period. To view a breakdown of how the overage quantities for individual items were calculated, refer to the usage summary report or contact your channel partner.
-
-For each item on the invoice, you'll see:
--- **Extended Amount**: the total charges-- **Prepayment Usage**: the amount of your Prepayment used to cover the charges-- **Net Amount**: the charges that exceed your Prepayment-
-Applicable taxes are computed only on the net amount that exceeds your Prepayment.
-
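A minimal sketch of how the three invoice fields above relate, assuming the simple case where remaining Prepayment is applied first and tax is computed only on the net amount (the tax rate here is purely illustrative):

```python
def overage_line_item(extended_amount, prepayment_available, tax_rate=0.10):
    """Split a charge into Prepayment usage and net (overage) amount; tax applies only to the net amount."""
    prepayment_usage = min(extended_amount, prepayment_available)
    net_amount = extended_amount - prepayment_usage
    return {
        "Extended Amount": extended_amount,
        "Prepayment Usage": prepayment_usage,
        "Net Amount": net_amount,
        "Tax": net_amount * tax_rate,
    }

# Example: USD 12,000 in charges with USD 9,000 of Prepayment still available.
print(overage_line_item(12_000, 9_000))
# {'Extended Amount': 12000, 'Prepayment Usage': 9000, 'Net Amount': 3000, 'Tax': 300.0}
```
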
-Overage invoicing is automated. The timing of notifications and invoices depends on your billing period end date.
--- Overage notification is normally sent seven days following your billing end date.-- Overage invoices are sent seven to nine days after notification.-- You can review charges and update system-generated PO numbers during the seven days between the overage notification and invoicing.-
-### Azure Marketplace
-
-Effective from the April 2019 billing cycle, customers started to receive a single Azure invoice that combines all Azure and Azure Marketplace charges into a single invoice instead of two separate invoices. This change doesn't affect customers in Australia, Japan, or Singapore.
-
-During the transition to a combined invoice, you'll receive a partial Azure Marketplace invoice. This final separate invoice will show Azure Marketplace charges incurred before the date of your billing update. Azure Marketplace charges incurred after that date will appear on your Azure invoice. After the transition period, you'll see all Azure and Azure Marketplace charges on the combined invoice.
-
-### Adjust billing frequency
-
-A customer's billing frequency is annual, quarterly, or monthly. The billing cycle is determined when a customer signs their agreement. Monthly billing is the smallest billing interval.
-- **Approval** from an enterprise administrator is required to change a billing cycle from annual to quarterly for direct enrollments. Approval from a partner administrator is required for indirect enrollments. The change becomes effective at the end of the current billing quarter.-- **An amendment** to the agreement is required to change a billing cycle from annual or quarterly to monthly. Any change to the existing enrollment billing cycle requires approval from an enterprise administrator or your "Bill to Contact".-- **Submit** your approval to [Azure Enterprise portal support](https://support.microsoft.com/supportrequestform/cf791efa-485b-95a3-6fad-3daf9cd4027c). Select the issue category: **Billing and Invoicing**.-
-The change becomes effective at the end of the current billing quarter.
-
-### Request an invoice copy
-
-If you're an indirect enterprise agreement customer, contact your partner to request a copy of your invoice.
-
-## Credits and adjustments
-
-You can view all credits or adjustments applied to your enrollment in the **Reports** section of [the Azure Enterprise portal](https://ea.azure.com).
-
-To view credits:
-
-1. In [the Azure Enterprise portal](https://ea.azure.com), select the **Reports** section.
-1. Select **Usage Summary**.
-1. In the top-right corner, change the **M** to **C** view.
-1. Extend the adjustment field in the Azure service Prepayment table.
-1. You'll see credits applied to your enrollment and a short explanation. For example: Service Level Agreement Credit.
-
-## Pay your overage with your Azure Prepayment
-
-To apply your Azure Prepayment to overages, you must meet the following criteria:
--- You've incurred overage charges that haven't been paid and are within 3 months of the invoice bill date.-- Your available Azure Prepayment amount covers the full number of incurred charges, including all past unpaid Azure invoices.-- The billing term that you want to complete must be fully closed. Billing fully closes after the fifth day of each month.-- The billing period that you want to offset must be fully closed.-- Your Azure Prepayment Discount (APD) is based on the actual new Prepayment minus any funds planned for the previous consumption. This requirement applies only to overage charges incurred. It's only valid for services that consume Azure Prepayment, so it doesn't apply to Azure Marketplace charges. Azure Marketplace charges are billed separately.-
-To complete an overage offset, you or the account team can open a support request. An emailed approval from your enterprise administrator or Bill to Contact is required.
-
-## Move charges to another enrollment
-
-Usage data is only moved when a transfer is backdated. There are two options to move usage data from one enrollment to another:
--- Account transfers from one enrollment to another enrollment-- Enrollment transfers from one enrollment to another enrollment-
-For either option, you must submit a [support request](https://support.microsoft.com/supportrequestform/cf791efa-485b-95a3-6fad-3daf9cd4027c) to the EA Support Team for assistance.
-
-## Enterprise Agreement usage calculations
-
-Refer to [Azure services](https://azure.microsoft.com/services/) and [Azure pricing](https://azure.microsoft.com/pricing/) for basic public pricing information, units of measure, FAQs, and usage reporting guidelines for each individual service. You can find more information about EA calculations in the following sections.
-
-### Enterprise Agreement units of measure
-
-The units of measure for Enterprise Agreements often differ from those in our other programs, such as the Microsoft Online Services Agreement (MOSA) program. This disparity means that, for a number of services, the unit of measure is aggregated to provide normalized pricing. The unit of measure shown in the Azure Enterprise portal's Usage Summary view is always the Enterprise measure. A full list of current units of measure and conversions for each service is provided by submitting a [support request](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
-
-### Conversion between usage detail report and the usage summary page
-
-In the downloaded usage data report, you can see raw resource usage up to six decimal places. However, usage data shown in the Azure Enterprise portal is rounded to four decimal places for Prepayment units and truncated to zero decimals for overage units. Raw usage data is first rounded to four digits before conversion to units used in the Azure Enterprise portal. Then, the converted Enterprise units are rounded again to four digits. You can only see the actual consumed hours before conversion in the downloaded usage report and not within the Azure Enterprise portal.
-
-For example, if 694.533404 actual SQL Server hours are reported in the usage detail report, they're converted to 6.94533404 units of 100 compute hours, then rounded to 6.9453 and displayed in the Azure Enterprise portal. A worked sketch of these conversions follows the list below.
--- To determine the extended billing amount, the displayed units are multiplied by the Prepayment Unit Price, and the result is truncated to two decimals. For Japanese Yen (JPY) and Korean Won (KRW), the extended amount is rounded to zero decimals.-- For overage, the billing units are truncated to six digits and then multiplied by the Overage Unit Price to determine the extended billing amount.-- For Managed Service Provider (MSP) billing, all usage associated with a department marked as MSP is truncated to zero decimals after conversion to the EA unit of measure. As a result, the sum of this usage could be lower than the sum total of all usage reported in the Azure Enterprise portal. It depends on whether the MSP is within their Azure Prepayment balance or in overage.-
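The following sketch walks through those rounding and truncation rules using the SQL Server figure above; the unit price is a made-up value, and the half-up rounding mode is an assumption about the "rounded" steps described here.

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

raw_hours = Decimal("694.533404")        # raw usage from the detail report
rounded_raw = raw_hours.quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)         # 694.5334
ea_units = (rounded_raw / 100).quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP)  # 6.9453

prepayment_unit_price = Decimal("500.00")  # hypothetical price per EA unit
extended_amount = (ea_units * prepayment_unit_price).quantize(
    Decimal("0.01"), rounding=ROUND_DOWN)                                           # 3472.65 (truncated)

overage_units = (rounded_raw / 100).quantize(Decimal("0.000001"), rounding=ROUND_DOWN)  # 6.945334
print(ea_units, extended_amount, overage_units)
```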
-### Graduated pricing
-
-Enterprise Agreement pricing doesn't currently include graduated pricing, where the charge per unit decreases as usage increases. When you move from a MOSA program with graduated pricing to an Enterprise Agreement, your prices are set commensurate with the service's base tier minus any applicable adjustments for Enterprise Agreement discounts.
-
-### Partial hour billing
-
-All billed usage is based on minutes converted to partial hours and not on whole hour increments. The listed hourly Enterprise rates are applied to total hours plus partial hours.
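For instance, 90 minutes of usage is billed as 1.5 hours rather than rounded up to 2; a tiny sketch with a hypothetical hourly rate:

```python
# Partial-hour billing: minutes convert to fractional hours, not whole-hour increments.
minutes_used = 90
hourly_rate = 0.20                              # hypothetical Enterprise hourly rate

billable_hours = minutes_used / 60              # 1.5 hours, not 2
print(round(billable_hours * hourly_rate, 2))   # 0.3
```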
-
-### Average daily consumption
-
-Some services are priced on a monthly basis, but usage is reported on a daily basis. In these cases, the usage is evaluated once per day, divided by 31, and summed across the number of days in that billing month. So, rates are never higher than expected for any month and are slightly lower for those months with fewer than 31 days.
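A minimal sketch of that daily evaluation, assuming a hypothetical monthly rate of 310:

```python
# Hypothetical monthly-priced service evaluated daily, per the rule above.
monthly_rate = 310.00              # made-up monthly price
daily_rate = monthly_rate / 31     # always divided by 31, regardless of month length

for days_in_month in (28, 30, 31):
    print(days_in_month, round(daily_rate * days_in_month, 2))
# 28 -> 280.0, 30 -> 300.0, 31 -> 310.0 (shorter months come out slightly lower)
```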
-
-### Compute hours conversion
-
-Before January 28, 2016, usage for A0, A2, A3, and A4 Standard and Basic Virtual Machines and Cloud Services was emitted as A1 Virtual Machine meter minutes. A0 VMs counted as fractions of A1 VM minutes while A2s, A3s, and A4s were converted to multiples. Because this policy caused some confusion for our customers, we implemented a change to assign per-minute usage to dedicated A0, A2, A3, and A4 meters.
-
-The new metering took effect between January 27 and January 28, 2016. On the CSV download that shows usage during this transition period, you can see both:
--- Usage emitted on the A1 meter-- Usage emitted on the new dedicated meter corresponding with your deployment's size.-
-| **Deployment size** | **Usage emitted as multiple of A1 through January 26, 2016** | **Usage emitted on dedicated meter starting January 27, 2016** |
-| | | |
-| A0 | 0.25 of A1 hour | 1 of A0 hour |
-| A2 | 2 of A1 hour | 1 of A2 hour |
-| A3 | 4 of A1 hour | 1 of A3 hour |
-| A4 | 8 of A1 hour | 1 of A4 hour |
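Reading the table as a lookup, a small sketch (illustrative only) converts usage that was emitted on the A1 meter into the equivalent dedicated-meter hours:

```python
# A1-meter multiples from the table above (usage emitted before January 27, 2016).
A1_MULTIPLE = {"A0": 0.25, "A2": 2, "A3": 4, "A4": 8}

def a1_hours_to_dedicated(deployment_size: str, a1_hours: float) -> float:
    """Convert usage emitted as A1 hours into hours on the dedicated meter."""
    return a1_hours / A1_MULTIPLE[deployment_size]

# 40 hours emitted on the A1 meter for an A3 deployment correspond to 10 A3 hours.
print(a1_hours_to_dedicated("A3", 40))  # 10.0
```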
-
-### Regions
-
-For those services where zone and region affect pricing, see the following table for a map of geographies and regions:
-
-| Geo | Data Transfer Regions | CDN Regions |
-| | | |
-| Zone 1 | Europe North <br> Europe West <br> US West <br> US East <br> US North Central <br> US South Central <br> US East 2 <br> US Central | North America <br> Europe |
-| Zone 2 | Asia Pacific East <br> Asia Pacific Southeast <br> Japan East <br> Japan West <br> Australia East <br> Australia Southeast | Asia Pacific <br> Japan <br> Latin America <br> Middle East / Africa <br> Australia East <br> Australia Southeast |
-| Zone 3 | Brazil South | |
-
-There are no charges for data egress between services housed within the same data center, for example, between Microsoft 365 and Azure.
-
-### Azure Prepayment and unbilled usage
-
-Azure Prepayment is an amount paid up front for Azure services. The Azure Prepayment is consumed as services are used. First-party Azure services are billed against the Azure Prepayment. However, some charges are billed separately, and Azure Marketplace services don't consume Azure Prepayment.
-
-### Charges billed separately
-
-Some products and services provided from third-party sources don't consume Azure Prepayment. Instead, these items are billed separately as part of the standard billing cycle's overage invoice.
-
-We've combined all Azure and Azure Marketplace charges into a single invoice that aligns with the enrollment's billing cycle. The combined invoice doesn't apply to customers in Australia, Japan, or Singapore.
-
-The combined invoice shows Azure usage first, followed by any Azure Marketplace charges. Customers in Australia, Japan, or Singapore see their Azure Marketplace charges on a separate invoice.
-
-If there's no overage usage at the end of the standard billing cycle, charges billed separately are invoiced separately. This process applies to customers in Australia, Japan, and Singapore.
-
-The following services are examples of charges billed separately. You can get a full list of the services where charges are billed separately by submitting a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
--- Canonical-- Citrix XenApp Essentials-- Citrix XenDesktop Registered User-- Openlogic-- Remote Access Rights XenApp Essentials Registered User-- Ubuntu Advantage-- Visual Studio Enterprise (Monthly)-- Visual Studio Enterprise (Annual)-- Visual Studio Professional (Monthly)-- Visual Studio Professional (Annual)-
-## What to expect after change of channel partner
-
-If the change of channel partner (COCP) happens in the middle of the month, a customer will receive an invoice for usage under the previously associated partner and another invoice for the usage under the new partner.
-
-The invoices are released in the month after the billing period ends. If the billing cadence is monthly, then September's invoice is released in October for both partners. If the billing cycle is quarterly or annual, the customer can expect an invoice from the previously associated partner for the usage during their period, and the rest is invoiced by the new partner based on the billing cadence.
-
-## Next steps
--- For information about understanding your invoice and charges, see [Understand your Azure Enterprise Agreement bill](../understand/review-enterprise-agreement-bill.md).-- To start using the Azure Enterprise portal, see [Get started with the Azure EA portal](ea-portal-get-started.md).
cost-management-billing Ea Portal Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-get-started.md
- Title: Get started with the Azure Enterprise portal
-description: This article explains how Azure Enterprise Agreement (Azure EA) customers use the Azure Enterprise portal.
----- Previously updated : 12/16/2022---
-# Get started with the Azure Enterprise portal
-
-This article helps direct and indirect Azure Enterprise Agreement (Azure EA) customers start to use the [Azure Enterprise portal](https://ea.azure.com). Get basic information about:
--- The structure of the Azure Enterprise portal.-- Roles used in the Azure Enterprise portal.-- Subscription creation.-- Cost analysis in the Azure Enterprise portal and the Azure portal.-
-> [!NOTE]
-> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
->
-> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
->
-> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
-
-## Get started with EA onboarding
-
-For an Azure EA onboarding guide, see [Azure EA Onboarding Guide (PDF)](https://ea.azure.com/api/v3Help/v2AzureEAOnboardingGuide).
-
-View this video to watch a full Azure Enterprise portal onboarding session:
-
-> [!VIDEO https://www.youtube.com/embed/OiZ1GdBpo-I]
-
-## Understanding EA user roles and introduction to user hierarchy
-
-To help manage your organization's usage and spend, Azure customers with an Enterprise Agreement (EA) can assign five distinct administrative roles:
--- Enterprise Administrator-- Enterprise Administrator (read only)-- Department Administrator-- Department Administrator (read only)-- Account Owner-
-Each role has a varying degree of user limits and permissions. For more information, see [Organization structure and permissions by role](./understand-ea-roles.md#organization-structure-and-permissions-by-role).
-
-## Activate your enrollment, create a subscription, and other administrative tasks
-
-For more information regarding activating your enrollment, creating a department or subscription, adding administrators and account owners, and other administrative tasks, see [Azure EA portal administration](./ea-portal-administration.md).
-
-If you'd like to know more about transferring an Enterprise subscription to a Pay-As-You-Go subscription, see [Azure Enterprise transfers](./ea-transfers.md).
-
-## View usage summary and download reports
-
-You can manage and act on your Azure EA invoice. Your invoice is a representation of your bill and should be reviewed for accuracy.
-
-To view usage summary, download reports, and manage enrollment invoices, see [Azure Enterprise enrollment invoices](./ea-portal-enrollment-invoices.md).
-
-## Now that you're familiar with the basics, here are some additional links to help you get onboarded
-
-[Azure EA pricing](./ea-pricing-overview.md) provides details on how usage is calculated and goes over charges for various Azure services in the Enterprise Agreement where the calculations are more complex.
-
-If you'd like to know about how Azure reservations for VM reserved instances can help you save money with your enterprise enrollment, see [Azure EA VM reserved instances](./ea-portal-vm-reservations.md).
-
-For information on which REST APIs to use with your Azure enterprise enrollment and an explanation for how to resolve common issues with REST APIs, see [Azure Enterprise REST APIs](./enterprise-rest-apis.md).
-
-[Azure EA agreements and amendments](./ea-portal-agreements.md) describes how Azure EA agreements and amendments might affect your access, use, and payments for Azure services.
-
-[Azure Marketplace](./ea-azure-marketplace.md) explains how EA customers and partners can view marketplace charges and enable Azure Marketplace purchases.
-
-For explanations regarding the common tasks that a partner EA administrator accomplishes in the Azure EA portal, see [Azure EA portal administration for partners](./ea-partner-portal-administration.md).
-
-## Next steps
--- Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions and answers about getting started with the EA portal.-- Azure Enterprise portal administrators should read [Azure Enterprise portal administration](ea-portal-administration.md) to learn about common administrative tasks.-- If you need help with troubleshooting Azure Enterprise portal issues, see [Troubleshoot Azure Enterprise portal access](ea-portal-troubleshoot.md).
cost-management-billing Ea Portal Vm Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-vm-reservations.md
Title: Azure EA VM reserved instances
description: This article summarizes how Azure reservations for VM reserved instances can help you save money with your enterprise enrollment. Previously updated : 11/29/2022 Last updated : 02/13/2024
We'll issue a partial refund when EA customers return reservations that were p
To return a partial refund with Azure Prepayment:
-1. The refund amount is reflected on the purchase month. In the EA portal, navigate to **Usage Summary** > **Adjustments/Charge by services**.
-2. The refund is shown in the EA portal as a negative adjustment in the purchase month and a positive adjustment in the current month. It appears similar to a reservations exchange.
+You can view refund details in the Azure portal. To view refunds:
-To return a partial refund with an overage:
-
-1. View the refund amount displayed in the purchase month. In the EA portal, navigate to **Usage Summary** > **Charge by services**.
-2. The credit memo references the original invoice number. To reconcile the initial purchase with the credit memo, refer to the original invoice number.
-
-Direct Enterprise customers can view the refund details in the Azure portal. To view refunds:
-
-1. Navigate to **Cost Management + Billing** > select a billing scope > in the left menu under **Billing**, select **Reservation transactions** menu.
+1. Navigate to **Cost Management + Billing** > select a billing scope > in the left menu under **Billing**, select **Reservation transactions** menu.
1. In the list of reservation transactions, you'll see entries under **Type** labeled with `Refund`. ## Reservation costs and usage
For information about pricing, see [Linux Virtual Machines Pricing](https://azur
### Reservation prices
-Any reservation discounts that your organization might have negotiated aren't shown in the EA portal price sheet. Previously, the discounted rates were available in the EA portal, however that functionality was removed. If you've negotiated reduced reservation prices, currently the only way to view the discounted prices is in the purchase reservation purchase experience.
+If you've negotiated reduced reservation prices, currently the only way to view the discounted prices is in the reservation purchase experience.
The prices for reservations aren't necessarily the same between retail rates and EA. They could be the same, but if you've negotiated a discount, the rates will differ.
Reserved instances can reduce your virtual machine costs up to 72 percent over P
### How to buy reserved virtual machine instances
-To purchase an Azure reserved virtual machine instance, an Enterprise Azure enrollment admin must enable the _Reserve Instance_ purchase option. The option is in the _Enrollment Detail_ section on the _Enrollment_ tab in the [Azure EA Portal](https://ea.azure.com/).
+To purchase an Azure reserved virtual machine instance, an Enterprise administrator must enable the _Reserved Instances_ purchase option. For more information, see [View and manage enrollment policies](direct-ea-administration.md#view-and-manage-enrollment-policies).
Once the EA enrollment is enabled to add reserved instances, any account owner with an active subscription associated to the EA enrollment can buy a reserved virtual machine instance in the [Azure portal](https://aka.ms/reservations). For more information, see [Prepay for virtual machines and save money with Reserved Virtual Machine Instances](../../virtual-machines/prepay-reserved-vm-instances.md). ### How to view reserved instance purchase details
-You can view your reserved instance purchase details via the _Reservations_ menu on the left side of the [Azure portal](https://aka.ms/reservations) or from the [Azure EA portal](https://ea.azure.com/). Select **Reports** from the left-side menu and scroll down to the _Charges by Services_ section on the _Usage Summary_ Tab. Scroll to the bottom of the section and your reserved instance purchases and usage will list at the end as indicated by the `1 year` or `3 years` designation next to the service name, for example: `Standard_DS1_v2 eastus 1 year` or `Standard_D2s_v3 eastus2 3 years`.
+You can view your reserved instance purchase details via the _Reservations_ menu on the left side of the [Azure portal](https://aka.ms/reservations). The _Reservations_ menu provides a summary of your reserved instance purchases, including the number of reserved instances purchased.
### How can I change the subscription associated with reserved instance or transfer my reserved instance benefits to a subscription under the same account?
For more information about changing the scope of a reservation, see [Change the
### How to view reserved instance usage details
-You can view your reserved instance usage detail in the [Azure portal](https://aka.ms/reservations) or in the [Azure EA portal](https://ea.azure.com/) (for EA customers who have access to view billing information) under _Reports_ > _Usage Summary_ > _Charges by Services_. Your reserved instances can be identified as service names containing 'Reservation', for example: `Reservation-Base VM or Virtual Machines Reservation-Windows Svr (1 Core)`.
+You can view your reserved instance usage detail in the [Azure portal](https://aka.ms/reservations).
Your usage detail and advanced report download CSV has more reserved instance usage information. The _Additional Info_ field helps you identify the reserved instance usage.
cost-management-billing Ea Pricing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-pricing-overview.md
Microsoft might drop the current Enterprise Agreement price for individual Azure
### Current effective pricing
-Customer and channel partners can view their current pricing for an enrollment by logging into the [Azure Enterprise portal](https://ea.azure.com/). Then view the price sheet page for that enrollment. Direct EA customers can now view and download **price sheet** in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). See [view price sheet](ea-pricing.md#download-pricing-for-an-enterprise-agreement).
+Customers and channel partners can view their current pricing for an enrollment by signing in to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Then view the price sheet page for that enrollment. Direct EA customers can now view and download the **price sheet** in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). See [view price sheet](ea-pricing.md#download-pricing-for-an-enterprise-agreement).
If you purchase Azure indirectly through one of our channel partners, you must get your pricing updates from your channel partner unless they enabled markup pricing to be displayed for your enrollment.
Enterprise administrators can enable account owners to create subscriptions base
## Next steps -- For more information, see [Enterprise enrollment invoices](ea-portal-enrollment-invoices.md).
+- Learn more about EA administrative tasks at [EA Billing administration on the Azure portal](direct-ea-administration.md).
cost-management-billing Ea Transfers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-transfers.md
Previously updated : 04/13/2023 Last updated : 02/13/2024
Keep the following points in mind when you transfer an enterprise account to a n
- Only the accounts specified in the request are transferred. If all accounts are chosen, then they're all transferred. - The source enrollment keeps its status as active or extended. You can continue using the enrollment until it expires.-- You can't change account ownership during a transfer. After the account transfer is complete, the current account owner can change account ownership in the EA portal. Keep in mind that an EA administrator can't change account ownership.
+- You can't change account ownership during a transfer. After the account transfer is complete, the current account owner can change account ownership in the Azure portal. Keep in mind that an EA administrator can't change account ownership.
### Prerequisites
When you request an account transfer with a support request, provide the followi
Other points to keep in mind before an account transfer: - Approval from a full EA Administrator, not a read-only EA administrator, is required for the target and source enrollment.
- - If you have only UPN (User Principal Name) entities configured as full EA administrators without access to e-mail, you must **either** create a temporary full EA administrator account in the EA portal **or** provide EA portal screenshot evidence of a user account associated with the UPN account.
+ - If you have only UPN (User Principal Name) entities configured as full EA administrators without access to e-mail, you must **either** create a temporary full EA administrator account in the Azure portal **or** provide Azure portal screenshot evidence of a user account associated with the UPN account.
- You should consider an enrollment transfer if an account transfer doesn't meet your requirements. - Your account transfer moves all services and subscriptions related to the specific accounts. - Your transferred account appears inactive under the source enrollment and appears active under the target enrollment when the transfer is complete.
Other points to keep in mind before an account transfer:
An enrollment transfer is considered when: -- A current enrollment's Prepayment term has come to an end.
+- A current enrollment's Prepayment term ends.
- An enrollment is in expired/extended status and a new agreement is negotiated. - You have multiple enrollments and want to combine all the accounts and billing under a single enrollment.
This section is for informational purposes only. An enterprise administrator doe
When you request to transfer an old enterprise enrollment to a new enrollment, the following actions occur: -- Usage transferred may take up to 72 hours to be reflected in the new enrollment.-- If department administrator (DA) or account owner (AO) view charges were enabled on the old transferred enrollment, they must be enabled on the new enrollment.
+- Usage transferred might take up to 72 hours to be reflected in the new enrollment.
+- If department administrator (DA) or account owner (AO) view charges were enabled on the previously transferred enrollment, they must be enabled on the new enrollment.
- If you're using API reports or Power BI, [generate a new API access key](enterprise-rest-apis.md#api-key-generation) under your new enrollment. For API use, the API access key is used for authentication to older enterprise APIs that are retiring. For more information about retiring APIs that use the API access key, see [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview](../automate/migrate-ea-reporting-arm-apis-overview.md). - All APIs use either the old enrollment or the new one, not both, for reporting purposes. If you need reports from APIs for the old and new enrollments, you must create your own reports. - All Azure services, subscriptions, accounts, departments, and the entire enrollment structure, including all EA department administrators, transfer to a new target enrollment.
Other points to keep in mind before an enrollment transfer:
- If an enrollment transfer doesn't meet your requirements, consider an account transfer. - The source enrollment status is updated to `Transferred` and is available for historic usage reporting purposes only. - There's no downtime during an enrollment transfer.-- Usage may take up to 24 - 48 hours to be reflected in the target enrollment.
+- Usage might take up to 24 - 48 hours to be reflected in the target enrollment.
- Cost view settings for department administrators or account owners don't carry over. - If previously enabled, settings must be enabled for the target enrollment. - Any API keys used in the source enrollment must be regenerated for the target enrollment.
Other points to keep in mind before an enrollment transfer:
- The enrollment or account transfer between different currencies affects monthly reservation purchases. The following image illustrates the effects. :::image type="content" source="./media/ea-transfers/cross-currency-reservation-transfer-effects.png" alt-text="Diagram illustrating the effects of cross currency reservation transfers." border="false" lightbox="./media/ea-transfers/cross-currency-reservation-transfer-effects.png"::: - When there's a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment. Cancellation happens at the time of the next monthly payment for an individual reservation. This cancellation is intentional and affects only the monthly reservation purchases.
- - You may have to repurchase the canceled monthly reservations from the source enrollment using the new enrollment in the local or new currency. If you repurchase a reservation, the purchase term (one or three years) is reset. The repurchase doesn't continue under the previous term.
+ - You might have to repurchase the canceled monthly reservations from the source enrollment using the new enrollment in the local or new currency. If you repurchase a reservation, the purchase term (one or three years) is reset. The repurchase doesn't continue under the previous term.
- If there's a backdated enrollment transfer, any savings plan benefit is applicable from the transfer request submission date - not from the effective transfer date. ### Auto enrollment transfer
-You might see that an enrollment has the **Transferred** state, even if you haven't submitted a support ticket to request an enrollment transfer. The **Transferred** state results from the auto enrollment transfer process. In order for the auto enrollment transfer to occur during the renewal phrase, there are a few items that must be included in the new agreement:
+You might see that an enrollment has the **Transferred** state, even if you didn't submit a support ticket to request an enrollment transfer. The **Transferred** state results from the auto enrollment transfer process. In order for the auto enrollment transfer to occur during the renewal phase, there are a few items that must be included in the new agreement:
-- Prior enrollment number (it must exist in EA portal)
+- Prior enrollment number (it must exist in the Azure portal)
- Expiration date of the prior enrollment number is one day before the effective start date of the new agreement-- The new agreement has an invoiced Azure Prepayment order that has a current date or it's backdated-- The new enrollment is created in the EA portal
+- The new agreement has an invoiced Azure Prepayment order that has a current date or is backdated
+- The new enrollment is created in the Azure portal
-If there's no missing usage data in the EA portal between the prior enrollment and the new enrollment, then you don't have to create a transfer support ticket.
+If there's no missing usage data in the Azure portal between the prior enrollment and the new enrollment, then you don't have to create a transfer support ticket.
### Prepayment isn't transferrable
Prepayment isn't transferrable between enrollments. Prepayment balances are tied
There's no downtime during an account or enrollment transfer. It can be completed on the same day of your request if all requisite information is provided.
-## Transfer an Enterprise subscription to a Pay-As-You-Go subscription
+## Transfer an Enterprise subscription to a pay-as-you-go subscription
-To transfer an Enterprise subscription to an individual subscription with Pay-As-You-Go rates, you must create a new support request in the Azure Enterprise portal. To create a support request, select **+ New support request** in the **Help and Support** area.
+To transfer an Enterprise subscription to an individual subscription with pay-as-you-go rates, you must create a new support request in the Azure portal. To create a support request, select **+ New support request** in the **Help and Support** area.
## Change Azure subscription or account ownership
-The Azure EA portal can transfer subscriptions from one account owner to another. For more information, see [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership).
+Use the Azure portal to transfer subscriptions from one account owner to another. For more information, see [Change Azure subscription or account ownership](direct-ea-administration.md#change-azure-subscription-or-account-ownership).
## Subscription transfer effects
If the subscription is transferred to an account in a different Microsoft Entra
If the recipient needs to restrict access to their Azure resources, they should consider updating any secrets associated with the service. Most resources can be updated by using the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. select **All resources** on the Hub menu.
+2. Select **All resources** on the Hub menu.
3. Select the resource.
-4. select **Settings** to view and update existing secrets on the resource page.
+4. Select **Settings** to view and update existing secrets on the resource page.
## Next steps -- If you need help with troubleshooting Azure EA portal issues, see [Troubleshoot Azure EA portal access](ea-portal-troubleshoot.md).
+- For more information about Azure product transfers, see [Azure product transfer hub](subscription-transfer.md).
cost-management-billing Enterprise Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/enterprise-api.md
Previously updated : 09/15/2021 Last updated : 02/14/2024
The Azure Enterprise Reporting APIs enable Enterprise Azure customers to program
All date and time parameters required for APIs must be represented as combined Coordinated Universal Time (UTC) values. Values returned by APIs are shown in UTC format. ## Enabling data access to the API
-* **Generate or retrieve the API key** - Sign in to the Enterprise portal, and navigate to Reports > Download Usage > API Access Key to generate or retrieve the API key.
+* **Generate or retrieve the API key** - For more information, see [API key generation](enterprise-rest-apis.md#api-key-generation).
* **Passing keys in the API** - The API key needs to be passed for each call for Authentication and Authorization. The following property needs to be added to the HTTP headers.

|Request Header Key | Value|
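As a rough, non-authoritative sketch of passing the key, the snippet below sends it on every call in the `Authorization` request header; the header format and endpoint URL shown here are assumptions for illustration, so confirm them against the request-header table above and the API reference.

```python
# Sketch only: call an Enterprise Reporting endpoint with the enrollment API key.
# The header format and URL are assumptions; confirm against the table above.
import requests

ENROLLMENT_NUMBER = "123456"             # hypothetical enrollment number
API_KEY = "<generated-api-access-key>"   # key generated as described above

url = f"https://consumption.azure.com/v3/enrollments/{ENROLLMENT_NUMBER}/usagedetails"
headers = {"Authorization": f"bearer {API_KEY}"}   # key passed on every call

response = requests.get(url, headers=headers, timeout=30)
response.raise_for_status()
print(response.json())
```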
cost-management-billing Enterprise Rest Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/enterprise-rest-apis.md
Title: Azure Enterprise REST APIs
description: This article describes the REST APIs for use with your Azure enterprise enrollment. Previously updated : 02/14/2023 Last updated : 02/14/2024
This article describes the REST APIs for use with your Azure enterprise enrollme
## Consumption and Usage APIs
-Microsoft Enterprise Azure customers can get usage and billing information through REST APIs. The role owner (Enterprise Administrator, Department Administrator, Account Owner) must enable access to the API by generating a key from the Azure EA portal. Then, anyone provided with the enrollment number and key can access the data through the API.
+Microsoft Enterprise Azure customers can get usage and billing information through REST APIs. The role owner (Enterprise Administrator, Department Administrator, Account Owner) must enable access to the API by generating a key from the Azure portal. Then, anyone provided with the enrollment number and key can access the data through the API.
## Available APIs
In the Manage API Access Keys window, you can perform the following tasks:
> [!NOTE] > 1. If you are an Enrollment Admin, you can generate the keys only from the Usage & Charges blade at the enrollment level, not at the Accounts & department level. > 2. If you are a Department owner only, you can generate the keys at the Department level and at the Account level for which you are an account owner.
-> 3. If you are Account owner only, then you can generate the keys at Acount level only.
+> 3. If you are Account owner only, then you can generate the keys at Account level only.
### Generate the primary or secondary API key
You might receive 400 and 404 (unavailable) errors returned from an API call whe
## Next steps -- Azure EA portal administrators should read [Azure EA portal administration](ea-portal-administration.md) to learn about common administrative tasks.-- If you need help with troubleshooting Azure EA portal issues, see [Troubleshoot Azure EA portal access](ea-portal-troubleshoot.md).
+- Azure EA administrators should read [EA Billing administration on the Azure portal](direct-ea-administration.md).
cost-management-billing Manage Billing Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-billing-access.md
You can provide others access to the billing information for your account in the Azure portal. The type of billing roles and the instructions to provide access to the billing information vary by the type of your billing account. To determine the type of your billing account, see [Check the type of your billing account](#check-the-type-of-your-billing-account).
-The article applies to customers with Microsoft Online Service program accounts. If you're an Azure customer with an Enterprise Agreement (EA) and are the Enterprise Administrator, you can give permissions to the Department Administrators and Account Owners in the Enterprise portal. For more information, see [Understand Azure Enterprise Agreement administrative roles in Azure](understand-ea-roles.md). If you're a Microsoft Customer Agreement customer, see, [Understand Microsoft Customer Agreement administrative roles in Azure](understand-mca-roles.md).
+The article applies to customers with Microsoft Online Service program accounts. If you're an Azure customer with an Enterprise Agreement (EA) and are the Enterprise Administrator, you can give permissions to the Department Administrators and Account Owners in the Azure portal. For more information, see [Understand Azure Enterprise Agreement administrative roles in Azure](understand-ea-roles.md). If you're a Microsoft Customer Agreement customer, see [Understand Microsoft Customer Agreement administrative roles in Azure](understand-mca-roles.md).
## Account administrators for Microsoft Online Service program accounts
These roles have access to billing information in the [Azure portal](https://por
To assign roles, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). > [!note]
-> If you're an EA customer, an Account Owner can assign the above role to other users of their team. But for these users to view billing information, the Enterprise Administrator must enable AO view charges in the Enterprise portal.
+> If you're an EA customer, an Account Owner can assign the above role to other users of their team. But for these users to view billing information, the Enterprise Administrator must enable AO view charges in the Azure portal.
### <a name="opt-in"></a> Allow users to download invoices
The Billing Reader feature is in preview, and doesn't yet support nonglobal clou
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). > [!NOTE]
-> If you're an EA customer, an Account Owner or Department Administrator can assign the Billing Reader role to team members. But for that Billing Reader to view billing information for the department or account, the Enterprise Administrator must enable **AO view charges** or **DA view charges** policies in the Enterprise portal.
+> If you're an EA customer, an Account Owner or Department Administrator can assign the Billing Reader role to team members. But for that Billing Reader to view billing information for the department or account, the Enterprise Administrator must enable **AO view charges** or **DA view charges** policies in the Azure portal.
## Check the type of your billing account [!INCLUDE [billing-check-account-type](../../../includes/billing-check-account-type.md)]
cost-management-billing Mca Enterprise Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-enterprise-operations.md
Previously updated : 09/06/2023 Last updated : 02/13/2024
The following changes apply to enterprise administrators on an Enterprise Agreem
- A billing profile is created for the enrollment. You use the billing profile to manage billing for your organization, like your Enterprise Agreement enrollment. For more information about billing profiles, [Understand billing profiles](../understand/mca-overview.md#billing-profiles). - An invoice section is created for each department in your Enterprise Agreement enrollment. You use the invoice sections to manage your departments. You can create new invoice sections to set up more departments. For more information about invoice sections, see [Understand invoice sections](../understand/mca-overview.md#invoice-sections). - You use the Azure subscription creator role on invoice sections to give others permission to create Azure subscriptions, like the accounts that were created in Enterprise Agreement enrollment.-- You use the [Azure portal](https://portal.azure.com) to manage billing for your organization instead of the Azure EA portal.
+- You use the [Azure portal](https://portal.azure.com) to manage billing for your organization.
You're given the following roles on the new billing account:
The following changes apply to department administrators on an Enterprise Agreem
- An invoice section is created for each department in your Enterprise Agreement enrollment. You use invoice sections to manage your departments. For more information about invoice sections, see [Understand invoice sections](../understand/mca-overview.md#invoice-sections). - You use the Azure subscription creator role on the invoice section to give others permission to create Azure subscriptions. The Azure subscription creator role is like an account that was created in an Enterprise Agreement enrollment.-- You use the Azure portal to manage billing for your organization instead of the Azure EA portal.
+- You use the Azure portal to manage billing for your organization.
You're given the following roles on the new billing account:
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-setup-account.md
Previously updated : 06/28/2023 Last updated : 02/13/2024 # Set up your billing account for a Microsoft Customer Agreement
-If your direct Enterprise Agreement enrollment has expired or about to expire, you can sign a Microsoft Customer Agreement to renew your enrollment. This article describes the changes to your existing billing after the setup and walks you through the setup of your new billing account. Currently, expiring indirect Enterprise Agreements can't get renewed with a Microsoft Customer Agreement.
+If your direct Enterprise Agreement enrollment expired or is about to expire, you can sign a Microsoft Customer Agreement (MCA) to renew your enrollment. This article describes the changes to your existing billing after the setup and walks you through the setup of your new billing account. Currently, expiring indirect Enterprise Agreements can't be renewed with a Microsoft Customer Agreement.
The renewal includes the following steps: 1. Accept the new Microsoft Customer Agreement. Work with your Microsoft field representative to understand the details and accept the new agreement.
-2. Set up the new billing account that's created for the new Microsoft Customer Agreement.
+2. Set up the new billing account that gets created for the new Microsoft Customer Agreement.
To set up the billing account, you must transition the billing of Azure subscriptions from your Enterprise Agreement enrollment to the new account. The setup doesn't affect Azure services that are running in your subscriptions. However, it changes the way you manage the billing for your subscriptions. -- Instead of the [EA portal](https://ea.azure.com), you manage your Azure services and billing, in the [Azure portal](https://portal.azure.com).
+- You manage your Azure services and billing in the [Azure portal](https://portal.azure.com).
- You get a monthly, digital invoice for your charges. You can view and analyze the invoice in the Cost Management + Billing page.-- Instead of departments and account in your Enterprise Agreement enrollment, you use the billing structure and scopes from the new account to manage and organize your billing.
+- You use the billing structure and scopes from the new account to manage and organize your billing instead of departments and accounts in your Enterprise Agreement enrollment.
Before you start the setup, we recommend you do the following actions: -- Before you transition to the Microsoft Customer Agreement, **delete users using the EA portal that don't need access to the new billing account**.
+- Before you transition to the Microsoft Customer Agreement, **delete users that don't need access to the new billing account**.
- Deleting users simplifies the transition and improves the security of your new billing account. - **Understand your new billing account** - Your new account simplifies billing for your organization. [Get a quick overview of your new billing account](../understand/mca-overview.md)
You see the following page in the Azure portal if you have a billing account own
You have two options: -- Ask the enterprise administrator of the enrollment to give you the enterprise administrator role. For more information, see [Create another enterprise administrator](ea-portal-administration.md#create-another-enterprise-administrator).-- You can give an enterprise administrator the billing account owner role. For more information, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
+- Ask the enterprise administrator of the enrollment to give you the enterprise administrator role. For more information, see [Add another enterprise administrator](direct-ea-administration.md#add-another-enterprise-administrator).
+- Give an enterprise administrator the billing account owner role. For more information, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
If you're given the enterprise administrator role, copy the migration link. Open it in your web browser to continue setting up your Microsoft Customer Agreement. Otherwise, send it to the enterprise administrator.
If you have billing account owner access to the correct Microsoft Customer Agree
You have two options: -- Ask an existing billing account owner to give you the billing account owner role. For more information, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal)-- Give the enterprise administrator role to an existing billing account owner. For more information, see [Create another enterprise administrator](ea-portal-administration.md#create-another-enterprise-administrator).
+- Ask an existing billing account owner to give you the billing account owner role. For more information, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
+- Give the enterprise administrator role to an existing billing account owner. For more information, see [Add another enterprise administrator](direct-ea-administration.md#add-another-enterprise-administrator).
If you're given the billing account owner role, copy the migration link. Open it in your web browser to continue setting up your Microsoft Customer Agreement. Otherwise, send the link to the billing account owner.
The following Enterprise Agreement's features are replaced with new features in
### Cost Management Power BI template app
-When you convert an EA enrollment to MCA, you can't use the Cost Management Power BI template app any longer because the app doesn't support MCA. However, the [Azure Cost Management connector for Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management) supports MCA accounts.
+When you convert an EA enrollment to MCA, you can't use the Cost Management Power BI template app any longer because the app doesn't support MCA. However, the [Cost Management connector for Power BI Desktop](/power-bi/connect-data/desktop-connect-azure-cost-management) supports MCA accounts.
### Enterprise Agreement accounts
Support benefits don't transfer as part of the transition. Purchase a new suppor
### Past charges and balance
-Charges and credits balance prior to transition can be viewed in your Enterprise Agreement enrollment through the Azure portal.
+Charges and credits balance before the transition can be viewed in your Enterprise Agreement enrollment through the Azure portal.
### When should the setup be completed?
-Complete the setup of your billing account before your Enterprise Agreement enrollment expires. If your enrollment expires, services in your Azure subscriptions continue to run without disruption. However, you are charged pay-as-you-go rates for the services.
+Complete the setup of your billing account before your Enterprise Agreement enrollment expires. If your enrollment expires, services in your Azure subscriptions continue to run without disruption. However, you're charged pay-as-you-go rates for the services.
### Changes to the Enterprise Agreement enrollment after the setup
-Azure subscriptions that are created for the Enterprise Agreement enrollment after the transition can be manually moved to the new billing account. For more information, see [get billing ownership of Azure subscriptions from other users](mca-request-billing-ownership.md). To move Azure reservations or savings plans that are purchased after the transition, [contact Azure Support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). You can also provide users access to the billing account after the transition. For more information, see [manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal)
+Azure subscriptions that are created for the Enterprise Agreement enrollment after the transition can be manually moved to the new billing account. For more information, see [get billing ownership of Azure subscriptions from other users](mca-request-billing-ownership.md). To move Azure reservations or savings plans that are purchased after the transition, [contact Azure Support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). You can also provide users access to the billing account after the transition. For more information, see [manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
### Revert the transition
To complete the setup, you need access to both the new billing account and the E
- The billing of your Azure subscriptions is transitioned to the new account. **There won't be any impact on your Azure services during this transition. They'll keep running without any disruption**. - If you have Azure reservations or savings plans, they're moved to your new billing account with no change to benefits or term. If you have savings plans under the Enterprise Agreement purchased in non-USD currency, then the savings plans are canceled. They're repurchased under the terms of the new Microsoft Customer Agreement in USD.
-4. You can monitor the status of the transition on the **Transition status** page. Canceled savings plans are shown in the Transition details.
+4. You can monitor the status of the transition on the **Transition status** page. Canceled savings plans are shown in the Transition details.
- If you had a savings plan that was repurchased, select the **new savings plan** link to view its details and to verify that it was created successfully. ![Screenshot that shows the transition status](./media/microsoft-customer-agreement-setup-account/ea-microsoft-customer-agreement-set-up-status.png)
To complete the setup, you need access to both the new billing account and the E
![Screenshot that shows list of subscriptions](./media/microsoft-customer-agreement-setup-account/microsoft-customer-agreement-subscriptions-post-transition.png)
-Azure subscriptions that are transitioned from your Enterprise Agreement enrollment to the new billing account are displayed on the Azure subscriptions page. If you believe any subscription is missing, transition the billing of the subscription manually in the Azure portal. For more information, see [get billing ownership of Azure subscriptions from other users](mca-request-billing-ownership.md)
+Azure subscriptions that are transitioned from your Enterprise Agreement enrollment to the new billing account are displayed on the Azure subscriptions page. If you believe any subscription is missing, transition the billing of the subscription manually in the Azure portal. For more information, see [get billing ownership of Azure subscriptions from other users](mca-request-billing-ownership.md).
### Access of enterprise administrators on the billing account
Enterprise administrators are listed as billing account owners while the enterpr
![Screenshot that shows Azure portal search](./media/microsoft-customer-agreement-setup-account/search-cmb.png)
-3. Select the billing profile created for your enrollment. Depending on your access, you may need to select a billing account. From the billing account, select Billing profiles and then the billing profile.
+3. Select the billing profile created for your enrollment. Depending on your access, you might need to select a billing account. From the billing account, select Billing profiles and then the billing profile.
4. Select **Access control (IAM)** from the left side.
Enterprise administrators are listed as billing profile owners while the enterpr
![Screenshot that shows Azure portal search](./media/microsoft-customer-agreement-setup-account/search-cmb.png)
-3. Select an invoice section. Invoice sections have the same name as their respective departments in Enterprise Agreement enrollments. Depending on your access, you may need to select a billing account. From the billing account, select **Billing profiles** and then select **Invoice sections**. From the invoice sections list, select an invoice section.
+3. Select an invoice section. Invoice sections have the same name as their respective departments in Enterprise Agreement enrollments. Depending on your access, you might need to select a billing account. From the billing account, select **Billing profiles** and then select **Invoice sections**. From the invoice sections list, select an invoice section.
![Screenshot that shows list of invoice section post transition](./media/microsoft-customer-agreement-setup-account/microsoft-customer-agreement-invoice-sections-post-transition.png)
cost-management-billing Mosp Ea Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mosp-ea-transfer.md
You must have one of the following roles to create an EA account owner. For more
### Set EA authentication level
-EAs have an authentication level set that determines which types of users can be added as EA account owners for the enrollment. There are four authentication levels available, as described at [Authentication level types](ea-portal-troubleshoot.md#authentication-level-types).
+EAs have an authentication level set that determines which types of users can be added as EA account owners for the enrollment. There are four authentication levels available.
Ensure that the authentication level set for the EA allows you to create a new EA account owner using the subscription account administrator noted previously. For example:
Ensure that the authentication level set for the EA allows you to create a new E
- If the subscription account administrator has an email address domain of `@<YourAzureADTenantPrimaryDomain.com>`, then the EA must have its authentication level set to either **Work or School Account** or **Work or School Account Cross Tenant**. The ability to create a new EA account owner depends on whether the EA's default domain is the same as the subscription account administrator's email address domain. > [!NOTE]
-> When set correctly, changing the authentication level doesn't impact the transfer process. For more information, see [Authentication level types](ea-portal-troubleshoot.md#authentication-level-types).
+> When set correctly, changing the authentication level doesn't impact the transfer process.
## Transfer the subscription to the EA
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
You can't create support plans programmatically. You can buy a new support plan
A user must have an Owner role on an Enrollment Account to create a subscription. There are two ways to get the role:
-* The Enterprise Administrator of your enrollment can [make you an Account Owner](https://ea.azure.com/helpdocs/addNewAccount) (sign in required) which makes you an Owner of the Enrollment Account.
+* The Enterprise Administrator of your enrollment can [make you an Account Owner](direct-ea-administration.md#add-an-account-and-account-owner) (sign in required) which makes you an Owner of the Enrollment Account.
* An existing Owner of the Enrollment Account can [grant you access](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put). To use a service principal to create an EA subscription, an Owner of the Enrollment Account must [grant that service principal the ability to create subscriptions](/rest/api/billing/2019-10-01-preview/enrollmentaccountroleassignments/put).
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
Previously updated : 05/17/2023 Last updated : 02/14/2024
Use the information in the following sections to create EA subscriptions.
You must have an Owner role on an Enrollment Account to create a subscription. There are two ways to get the role:
-* The Enterprise Administrator of your enrollment can [make you an Account Owner](https://ea.azure.com/helpdocs/addNewAccount) and sign in required which makes you an Owner of the Enrollment Account.
+* The Enterprise Administrator of your enrollment can [make you an Account Owner](direct-ea-administration.md#add-an-account-and-account-owner) (sign in required) which makes you an Owner of the Enrollment Account.
* An existing Owner of the Enrollment Account can [grant you access](grant-access-to-create-subscription.md). Similarly, to use a service principal to create an EA subscription, you must [grant that service principal the ability to create subscriptions](grant-access-to-create-subscription.md).

### Find accounts you have access to
POST https://management.azure.com/providers/Microsoft.Billing/enrollmentAccounts
| Element Name | Required | Type | Description |
|--|--|--|--|
| `displayName` | No | String | The display name of the subscription. If not specified, it's set to the name of the offer, like "Microsoft Azure Enterprise." |
-| `offerType` | Yes | String | The offer of the subscription. The two options for EA are [MS-AZR-0017P](https://azure.microsoft.com/pricing/enterprise-agreement/) (production use) and [MS-AZR-0148P](https://azure.microsoft.com/offers/ms-azr-0148p/) (dev/test, needs to be [turned on using the EA portal](https://ea.azure.com/helpdocs/DevOrTestOffer)). |
+| `offerType` | Yes | String | The offer of the subscription. The two options for EA are [MS-AZR-0017P](https://azure.microsoft.com/pricing/enterprise-agreement/) (production use) and [MS-AZR-0148P](https://azure.microsoft.com/offers/ms-azr-0148p/) (dev/test, needs to be [enabled in the Azure portal](direct-ea-administration.md#enable-the-enterprise-devtest-offer)). |
| `owners` | No | String | The Object ID of any user to be added as an Azure RBAC Owner on the subscription when it's created. |

In the response, the `Location` header contains a URL that you can query for the status of the subscription creation operation. When the subscription creation is finished, a GET on the `Location` URL returns a `subscriptionLink` object, which has the subscription ID. For more details, see the [Subscription API documentation](/rest/api/subscription/).
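If you script the creation, you can poll that status URL from the command line. The following is a minimal sketch, assuming you use the Azure CLI `az rest` command; the URL placeholder is hypothetical and stands in for the value returned in your response's `Location` header.

```azurecli
# Minimal sketch: poll the subscription creation operation.
# Replace <location-header-url> with the URL returned in the Location header
# of the createSubscription POST response.
az rest --method get --url "<location-header-url>"

# When the operation finishes, the response body includes a subscriptionLink
# object that contains the new subscription ID.
```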
New-AzSubscription -OfferType MS-AZR-0017P -Name "Dev Team Subscription" -Enroll
| Element Name | Required | Type | Description |
|--|--|--|--|
| `Name` | No | String | The display name of the subscription. If not specified, it's set to the name of the offer, like *Microsoft Azure Enterprise*. |
-| `OfferType` | Yes | String | The subscription offer. The two options for EA are [MS-AZR-0017P](https://azure.microsoft.com/pricing/enterprise-agreement/) (production use) and [MS-AZR-0148P](https://azure.microsoft.com/offers/ms-azr-0148p/) (dev/test, needs to be [turned on using the EA portal](https://ea.azure.com/helpdocs/DevOrTestOffer)). |
+| `OfferType` | Yes | String | The subscription offer. The two options for EA are [MS-AZR-0017P](https://azure.microsoft.com/pricing/enterprise-agreement/) (production use) and [MS-AZR-0148P](https://azure.microsoft.com/offers/ms-azr-0148p/) (dev/test, needs to be [enabled in the Azure portal](direct-ea-administration.md#enable-the-enterprise-devtest-offer)). |
| `EnrollmentAccountObjectId` | Yes | String | The Object ID of the enrollment account that the subscription is created under and billed to. The value is a GUID that you get from `Get-AzEnrollmentAccount`. |
| `OwnerObjectId` | No | String | The Object ID of any user to add as an Azure RBAC Owner on the subscription when it's created. |
| `OwnerSignInName` | No | String | The email address of any user to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `OwnerObjectId`.|
az account create --offer-type "MS-AZR-0017P" --display-name "Dev Team Subscript
| Element Name | Required | Type | Description |
|--|--|--|--|
| `display-name` | No | String | The display name of the subscription. If not specified, it's set to the name of the offer, like *Microsoft Azure Enterprise*.|
-| `offer-type` | Yes | String | The offer of the subscription. The two options for EA are [MS-AZR-0017P](https://azure.microsoft.com/pricing/enterprise-agreement/) (production use) and [MS-AZR-0148P](https://azure.microsoft.com/offers/ms-azr-0148p/) (dev/test, needs to be [turned on using the EA portal](https://ea.azure.com/helpdocs/DevOrTestOffer)). |
+| `offer-type` | Yes | String | The offer of the subscription. The two options for EA are [MS-AZR-0017P](https://azure.microsoft.com/pricing/enterprise-agreement/) (production use) and [MS-AZR-0148P](https://azure.microsoft.com/offers/ms-azr-0148p/) (dev/test, needs to be [enabled in the Azure portal](direct-ea-administration.md#enable-the-enterprise-devtest-offer)). |
| `enrollment-account-object-id` | Yes | String | The Object ID of the enrollment account that the subscription is created under and billed to. The value is a GUID that you get from `az billing enrollment-account list`. |
| `owner-object-id` | No | String | The Object ID of any user to add as an Azure RBAC Owner on the subscription when it's created. |
| `owner-upn` | No | String | The email address of any user to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `owner-object-id`.|
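To make the parameters concrete, here's a minimal Azure CLI sketch that strings the documented commands together; the display name and the angle-bracket IDs are illustrative placeholders, not required values.

```azurecli
# List the enrollment accounts you have access to and note the account's object ID (a GUID).
az billing enrollment-account list

# Create an EA production subscription billed to that enrollment account.
# Replace the placeholders with your own object IDs.
az account create \
    --offer-type "MS-AZR-0017P" \
    --display-name "Dev Team Subscription" \
    --enrollment-account-object-id "<enrollment-account-object-id>" \
    --owner-object-id "<owner-object-id>"
```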
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
Previously updated : 11/28/2023 Last updated : 02/13/2024
This article also helps you understand the things you should know _before_ you t
If you want to keep the billing ownership but change the type of product, see [Switch your Azure subscription to another offer](switch-azure-offer.md). To control who can access resources in the product, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
-If you're an Enterprise Agreement (EA) customer, your enterprise administrators can transfer billing ownership of your products between accounts in the EA portal. For more information, see [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership).
+If you're an Enterprise Agreement (EA) customer, your enterprise administrators can transfer billing ownership of your products between accounts in the Azure portal. For more information, see [Change Azure subscription or account ownership](direct-ea-administration.md#change-azure-subscription-or-account-ownership).
This article focuses on product transfers. However, resource transfer is also discussed because it's required for some product transfer scenarios.
Dev/Test products aren't shown in the following table. Transfers for Dev/Test pr
| | | |
| EA | MOSP (PAYG) | • Transfer from an EA enrollment to a MOSP subscription requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
| EA | MCA - individual | • For details, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md).<br><br> • Self-service reservation and savings plan transfers with no currency change are supported. <br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. However, you can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
-| EA | EA | • Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans automatically get transferred during EA to EA transfers, except in transfers with a currency change.<br><br> • Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change EA subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership). |
+| EA | EA | • Transferring between EA enrollments requires a [billing support ticket](https://azure.microsoft.com/support/create-ticket/).<br><br> • Reservations and savings plans automatically get transferred during EA to EA transfers, except in transfers with a currency change.<br><br> • Transfer within the same enrollment is the same action as changing the account owner. For details, see [Change Azure subscription or account ownership](direct-ea-administration.md#change-azure-subscription-or-account-ownership). |
| EA | MCA - Enterprise | • Transferring all enrollment products is completed as part of the MCA transition process from an EA. For more information, see [Complete Enterprise Agreement tasks in your billing account for a Microsoft Customer Agreement](mca-enterprise-operations.md).<br><br> • If you want to transfer specific products but not all of the products in an enrollment, see [Transfer Azure subscription billing ownership for a Microsoft Customer Agreement](mca-request-billing-ownership.md). <br><br>• Self-service reservation transfers with no currency change are supported. When there's a currency change during or after an enrollment transfer, reservations paid for monthly are canceled for the source enrollment. Cancellation happens at the time of the next monthly payment for an individual reservation. The cancellation is intentional and only affects monthly reservation purchases. For more information, see [Transfer Azure Enterprise enrollment accounts and subscriptions](../manage/ea-transfers.md#prerequisites-1).<br><br> • You can't transfer a savings plan purchased under an Enterprise Agreement enrollment that was bought in a non-USD currency. You can [change the savings plan scope](../savings-plan/manage-savings-plan.md#change-the-savings-plan-scope) so that it applies to other subscriptions. |
| EA | MPA | • Transfer is only allowed for direct EA to MPA. A direct EA is signed between Microsoft and an EA customer.<br><br>• Only CSP direct bill partners certified as an [Azure Expert Managed Services Provider (MSP)](https://partner.microsoft.com/membership/azure-expert-msp) can request to transfer Azure products for their customers that have a Direct Enterprise Agreement (EA). For more information, see [Get billing ownership of Azure subscriptions to your MPA account](mpa-request-ownership.md). Product transfers are allowed only for customers who have accepted a Microsoft Customer Agreement (MCA) and purchased an Azure plan with the CSP Program.<br><br> • Transfer from EA Government to MPA isn't supported.<br><br>• There are limitations and restrictions. For more information, see [Transfer EA subscriptions to a CSP partner](transfer-subscriptions-subscribers-csp.md#transfer-ea-or-mca-enterprise-subscriptions-to-a-csp-partner). |
| MCA - individual | MOSP (PAYG) | • For details, see [Transfer billing ownership of an Azure subscription to another account](billing-subscription-transfer.md).<br><br> • Reservations and savings plans don't automatically transfer and transferring them isn't supported. |
cost-management-billing Switch Azure Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/switch-azure-offer.md
Previously updated : 04/05/2023 Last updated : 02/13/2024
On the day you switch, an invoice is generated for all outstanding charges. Then
### Can I migrate from a subscription with pay-as-you-go rates to Cloud Solution Provider (CSP) or Enterprise Agreement (EA)?

* To migrate to CSP, see [Transfer Azure subscriptions between subscribers and CSPs](transfer-subscriptions-subscribers-csp.md).
-* If you have a pay-as-you-go subscription (Azure offer ID MS-AZR-0003P) or an Azure plan with pay-as-you-go rates (Azure offer ID MS-AZR-0017G) and you want to migrate to an EA enrollment, have your Enrollment Admin add your account into the EA. Follow instructions in the invitation email to have your subscriptions moved under the EA enrollment. For more information, see [Change Azure subscription or account ownership](ea-portal-administration.md#change-azure-subscription-or-account-ownership).
+* If you have a pay-as-you-go subscription (Azure offer ID MS-AZR-0003P) or an Azure plan with pay-as-you-go rates (Azure offer ID MS-AZR-0017G) and you want to migrate to an EA enrollment, have your Enrollment Admin add your account into the EA. Follow instructions in the invitation email to have your subscriptions moved under the EA enrollment. For more information, see [Change Azure subscription or account ownership](direct-ea-administration.md#change-azure-subscription-or-account-ownership).
### Can I migrate data and services to a new subscription?
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
Previously updated : 04/24/2023 Last updated : 02/13/2024
To help manage your organization's usage and spend, Azure customers with an Ente
- Department Administrator (read only)
- Account Owner²
-¹ The Bill-To contact of the EA contract will be under this role.
+¹ The Bill-To contact of the EA contract is under this role.
-² The Bill-To contact cannot be added or changed in the Azure EA Portal and will be added to the EA enrollment based on the user who is set up as the Bill-To contact on agreement level. To change the Bill-To contact, a request needs to be made through a partner/software advisor to the Regional Operations Center (ROC).
+² The Bill-To contact can't be added or changed in the Azure portal. It gets added to the EA enrollment based on the user who is set up as the Bill-To contact on agreement level. To change the Bill-To contact, a request needs to be made through a partner/software advisor to the Regional Operations Center (ROC).
-The first enrollment administrator that is set up during the enrollment provisioning determines the authentication type of the Bill-to contact account. When the bill-to contact gets added to the EA Portal as a read-only administrator, they are given Microsoft account authentication.
+The first enrollment administrator that is set up during the enrollment provisioning determines the authentication type of the Bill-to contact account. When the bill-to contact gets added to the Azure portal as a read-only administrator, they're given Microsoft account authentication.
-For example, if the initial authentication type is set to Mixed, the EA will be added as a Microsoft account and the Bill-to contact will have read-only EA admin privileges. If the EA admin doesn't approve Microsoft account authorization for an existing Bill-to contact, the EA admin may delete the user in question and ask the customer to add the user back as a read-only administrator with a Work or School account Only set at enrollment level in the EA portal.
+For example, if the initial authentication type is set to Mixed, the EA is added as a Microsoft account and the Bill-to contact has read-only EA admin privileges. If the EA admin doesn't approve Microsoft account authorization for an existing Bill-to contact, the EA admin can delete the user in question. Then they can ask the customer to add the user back as a read-only administrator with a Work or School account Only set at enrollment level in the Azure portal.
These roles are specific to managing Azure Enterprise Agreements and are in addition to the built-in roles Azure has to control access to resources. For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
-> [!NOTE]
-> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
->
-> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
->
-> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
-

## Azure portal for Cost Management and Billing

The Azure portal hierarchy for Cost Management consists of:

-- **Azure portal for Cost Management** - an online management portal that helps you manage costs for your Azure EA services. You can:
+- **Azure portal for Cost Management** - an online management portal that helps you manage costs for your Azure EA services. You can do the following tasks.
- Create an Azure EA hierarchy with departments, accounts, and subscriptions.
- Reconcile the costs of your consumed services, download usage reports, and view price lists.
The following administrative user roles are part of your enterprise enrollment:
- Service administrator
- Notification contact
-Use the Azure portal's Cost Management blade the [Azure portal](https://portal.azure.com) to manage Azure Enterprise Agreement roles.
+Use Cost Management in the [Azure portal](https://portal.azure.com) to manage Azure Enterprise Agreement roles.
Direct EA customers can complete all administrative tasks in the Azure portal. You can use the [Azure portal](https://portal.azure.com) to manage billing, costs, and Azure services.
-User roles are associated with a user account. To validate user authenticity, each user must have a valid work, school, or Microsoft account. Ensure that each account is associated with an email address that's actively monitored. Enrollment notifications are sent to the email address.
+User roles are associated with a user account. To validate user authenticity, each user must have a valid work, school, or Microsoft account. Ensure that each account is associated with an email address to actively monitor it. Enrollment notifications are sent to the email address.
-> [!NOTE]
+> [!NOTE]
> The Account Owner role is often assigned to a service account that doesn't have an actively monitored email. When setting up users, you can assign multiple accounts to the enterprise administrator role. An enrollment can have multiple account owners, for example, one per department. Also, you can assign both the enterprise administrator and account owner roles to a single account.
Users with this role have the highest level of access to the Enrollment. They ca
- View and manage all reservation orders and reservations that apply to the Enterprise Agreement.
- Enterprise administrator (read-only) can view reservation orders and reservations. They can't manage them.
-You can have multiple enterprise administrators in an enterprise enrollment. You can grant read-only access to enterprise administrators.
+You can have multiple enterprise administrators in an enterprise enrollment. You can grant read-only access to enterprise administrators.
-The EA administrator role automatically inherits all access and privilege of the department administrator role. So thereΓÇÖs no need to manually give an EA administrator the department administrator role.
+The EA administrator role automatically inherits all access and privilege of the department administrator role. So thereΓÇÖs no need to manually give an EA administrator the department administrator role.
The enterprise administrator role can be assigned to multiple accounts.

### EA purchaser
-Users with this role have permissions to purchase Azure services, but are not allowed to manage accounts. They can:
+Users with this role have permissions to purchase Azure services, but aren't allowed to manage accounts. They can:
- Purchase Azure services, including reservations.
- View usage across all accounts.
Users with this role can:
- Manage service administrators.
- View usage for subscriptions.
-Each account requires a unique work, school, or Microsoft account. For more information about Azure Enterprise portal administrative roles, see [Understand Azure Enterprise Agreement administrative roles in Azure](understand-ea-roles.md).
+Each account requires a unique work, school, or Microsoft account. For more information about Azure portal administrative roles, see [Understand Azure Enterprise Agreement administrative roles in Azure](understand-ea-roles.md).
There can be only one account owner per account. However, there can be multiple accounts in an EA enrollment. Each account has a unique account owner.
The following sections describe the limitations and capabilities of each role.
|||
|Enterprise Administrator|Unlimited|
|Enterprise Administrator (read only)|Unlimited|
-| EA purchaser assigned to an SPN | Unlimited |
+| EA purchaser assigned to a service principal name (SPN) | Unlimited |
|Department Administrator|Unlimited|
|Department Administrator (read only)|Unlimited|
|Account Owner|1 per account³|
The following sections describe the limitations and capabilities of each role.
- ⁴ Notification contacts are sent email communications about the Azure Enterprise Agreement.
- ⁵ Task is limited to accounts in your department.
-- ⁶ A subscription owner or reservation purchaser may purchase and manage reservations and savings plans within the subscription, and only if permitted by the reservation purchase enabled flag. Enterprise administrators may purchase and manage reservations and savings plans across the billing account. Enterprise administrators (read-only) may view all purchased reservations and savings plans. Neither EA administrator role is governed by the reservation purchase enabled flag. While the Enterprise Admin (read-only) role holder is not permitted to make purchases, as it is not governed by reservation purchase enabled, if a user with that role also holds either a subscription owner or reservation purchaser permission, that user may purchase reservations and savings plans even if the reservation purchase enabled flag is set to false
+- ⁶ A subscription owner or reservation purchaser can purchase and manage reservations and savings plans within the subscription, and only if permitted by the reservation purchase enabled flag. Enterprise administrators can purchase and manage reservations and savings plans across the billing account. Enterprise administrators (read-only) can view all purchased reservations and savings plans. The reservation purchase enabled flag doesn't affect the EA administrator roles. The Enterprise Admin (read-only) role holder isn't permitted to make purchases. However, if a user with that role also holds either a subscription owner or reservation purchaser permission, the user can purchase reservations and savings plans, regardless of the flag.
## Add a new enterprise administrator
-Enterprise administrators have the most privileges when managing an Azure EA enrollment. The initial Azure EA admin was created when the EA agreement was set up. However, you can add or remove new admins at any time. New admins are only added by existing admins. For more information about adding additional enterprise admins, see [Create another enterprise admin](ea-portal-administration.md#create-another-enterprise-administrator). Direct EA customers can use the Azure portal to add EA admins, see [Create another enterprise admin on Azure portal](direct-ea-administration.md#add-another-enterprise-administrator). For more information about billing profile roles and tasks, see [Billing profile roles and tasks](understand-mca-roles.md#billing-profile-roles-and-tasks).
+Enterprise administrators have the most privileges when managing an Azure EA enrollment. The initial Azure EA admin was created when the EA agreement was set up. However, you can add or remove new admins at any time. Existing admins create new admins. For more information about adding more enterprise admins, see [Create another enterprise admin on Azure portal](direct-ea-administration.md#add-another-enterprise-administrator). For more information about billing profile roles and tasks, see [Billing profile roles and tasks](understand-mca-roles.md#billing-profile-roles-and-tasks).
## Update account owner state from pending to active
-When new Account Owners (AO) are added to an Azure EA enrollment for the first time, their status appears as _pending_. When a new account owner receives the activation welcome email, they can sign in to activate their account.
+When new Account Owners (AO) are added to an Azure EA enrollment for the first time, their status appears as _pending_. When a new account owner receives the activation welcome email, they can sign in to activate their account.
> [!NOTE]
-> If the Account Owner is a service account and doesn't have an email, use an In-Private session to sign in to the Azure portal and navigate to Cost Management to be prompted to accept the activation welcome email.
+> If the Account Owner is a service account and doesn't have an email, use an In-Private session to sign in to the Azure portal and navigate to Cost Management to be prompted to accept the activation welcome email.
-Once they activate their account, the account status is updated from _pending_ to _active_. The account owner needs to read the 'Warning' message and select **Continue**. New users might get prompted to enter their first and last name to create a Commerce Account. If so, they must add the required information to continue and then the account is activated.
+Once they activate their account, the account status is updated from _pending_ to _active_. The account owner needs to read the `Warning` message and select **Continue**. New users might get prompted to enter their first and last name to create a Commerce Account. If so, they must add the required information to continue and then the account is activated.
> [!NOTE]
> A subscription is associated with one and only one account. The warning message includes details that warn the Account Owner that accepting the offer will move the subscriptions associated with the Account to the new Enrollment.
Direct EA admins can add department admins in the Azure portal. For more informa
|View usage and cost details|✔|✔|✔|✔⁷|✔⁷|✔⁸|✔|
|Manage resources in Azure portal|✘|✘|✘|✘|✘|✔|✘|

-- ⁷ Requires that the Enterprise Administrator enable **DA view charges** policy in the Enterprise portal. The Department Administrator can then see cost details for the department.
-- ⁸ Requires that the Enterprise Administrator enable **AO view charges** policy in the Enterprise portal. The Account Owner can then see cost details for the account.
+- ⁷ Requires that the Enterprise Administrator enables **DA view charges** policy in the Azure portal. The Department Administrator can then see cost details for the department.
+- ⁸ Requires that the Enterprise Administrator enables **AO view charges** policy in the Azure portal. The Account Owner can then see cost details for the account.
## See pricing for different user roles
-You may see different pricing in the Azure portal depending on your administrative role and how the view charges policies are set by the Enterprise Administrator. Enabling Department Administrator and Account Owner Roles to see the charges can be restricted by restricting access to billing information.
+You might see different pricing in the Azure portal depending on your administrative role and how the Enterprise Administrator sets the view charges policies. You can prevent Department Administrators and Account Owners from seeing charges by restricting access to billing information.
To learn how to set these policies, see [Manage access to billing information for Azure](manage-billing-access.md).
-The following table shows the relationship between the Enterprise Agreement admin roles, the view charges policy, the Azure role in the Azure portal, and the pricing that you see in the Azure portal. The Enterprise Administrator always sees usage details based on the organization's EA pricing. However, the Department Administrator and Account Owner see different pricing views based on the view charge policy and their Azure role. The Department Admin role listed in the following table refers to both Department Admin and Department Admin (read only) roles.
+The following table shows the relationship between:
+
+- Enterprise Agreement admin roles
+- View charges policy
+- Azure role in the Azure portal
+- Pricing that you see in the Azure portal
+
+The Enterprise Administrator always sees usage details based on the organization's EA pricing. However, the Department Administrator and Account Owner see different pricing views based on the view charge policy and their Azure role. The Department Admin role listed in the following table refers to both Department Admin and Department Admin (read only) roles.
|Enterprise Agreement admin role|View charges policy for role|Azure role|Pricing view|
|||||
The following table shows the relationship between the Enterprise Agreement admi
|Account Owner OR Department Admin|✘ Disabled |none|No pricing|
|None|Not applicable |Owner|No pricing|
-You set the Enterprise admin role and view charges policies in the Enterprise portal. The Azure role can be updated in the Azure portal. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+You set the Enterprise admin role and view charges policies in the Azure portal. You can update the Azure role-based access control (RBAC) role by following the steps in [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Next steps

- [Manage access to billing information for Azure](manage-billing-access.md)
- [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
-- [Azure built-in roles](../../role-based-access-control/built-in-roles.md)
+- Assign [Azure built-in roles](../../role-based-access-control/built-in-roles.md)
cost-management-billing Microsoft Customer Agreement Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/microsoft-customer-agreement-get-started.md
When you move from a pay-as-you-go or an enterprise agreement to a Microsoft Cus
## Complete outstanding payments
-Make sure that you complete any outstanding payments for your older [pay-as-you-go](../understand/download-azure-invoice.md) or [EA](../manage/ea-portal-enrollment-invoices.md) contract subscription invoices. For more information, see [Understand your Microsoft Customer Agreement Invoice in Azure](../understand/mca-understand-your-invoice.md#billing-period).
+Make sure that you complete any outstanding payments for your older [pay-as-you-go](../understand/download-azure-invoice.md) or [EA](../manage/direct-ea-billing-invoice-documents.md) contract subscription invoices. For more information, see [Understand your Microsoft Customer Agreement Invoice in Azure](../understand/mca-understand-your-invoice.md#billing-period).
## Cancel support plan
cost-management-billing Buy Vm Software Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/buy-vm-software-reservation.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
When you prepay for your virtual machine software usage (available in the Azure
You can buy virtual machine software reservation in the Azure portal. To buy a reservation:

- You must have the owner role for at least one Enterprise or individual subscription with pay-as-you-go pricing.
-- For Enterprise subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). If the setting is disabled, you must be an EA Admin for the subscription.
+- For Enterprise subscriptions, the **Reserved Instances** policy option must be enabled in the [Azure portal](../manage/direct-ea-administration.md#view-and-manage-enrollment-policies). If the setting is disabled, you must be an EA Admin for the subscription.
- For the Cloud Solution Provider (CSP) program, the admin agents or sales agents can buy the software plans.

## Buy a virtual machine software reservation (VMSR)
cost-management-billing Charge Back Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/charge-back-usage.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
Information in the following table about metric and filter can help solve for co
## Download the usage CSV file with new data
-If you're an EA admin, you can download the CSV file that contains new usage data from Azure portal. This data isn't available from the EA portal (ea.azure.com), you must download the usage file from Azure portal (portal.azure.com) to see the new data.
+If you're an EA admin, you can download the CSV file that contains new usage data from Azure portal.
In the Azure portal, navigate to [Cost management + billing](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/BillingAccounts).
cost-management-billing Fabric Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/fabric-capacity.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
For pricing information, see the [Fabric pricing page](https://azure.microsoft.c
You can buy a Fabric capacity reservation in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade). Pay for the reservation [up front or with monthly payments](prepare-buy-reservation.md). To buy a reservation:

- You must have the owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement for at least one subscription.
-- For Enterprise subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). If the setting is disabled, you must be an EA Admin to enable it.
+- For Enterprise subscriptions, the **Reserved Instances** policy option must be enabled in the [Azure portal](../manage/direct-ea-administration.md#view-and-manage-enrollment-policies). If the setting is disabled, you must be an EA Admin to enable it.
- Direct Enterprise customers can update the **Reserved Instances** policy settings in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). Navigate to the **Policies** menu to change settings.
- For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Fabric capacity reservations.
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
- ignite-2023 Previously updated : 10/31/2023 Last updated : 02/14/2024
Cloud solution providers can use the Azure portal or [Partner Center](/partner
You can't buy a reservation if you have a custom role that mimics owner role or reservation purchaser role on an Azure subscription. You must use the built-in Owner or built-in Reservation Purchaser role.
-Enterprise Agreement (EA) customers can limit purchases to EA admins by disabling the **Add Reserved Instances** option in the EA Portal. Direct EA customers can now disable Reserved Instance setting in [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to Policies menu to change settings.
+Enterprise Agreement (EA) customers can limit purchases to EA admins by disabling the **Reserved Instances** policy option in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to Policies menu to change settings.
EA admins must have owner or reservation purchaser access on at least one EA subscription to purchase a reservation. The option is useful for enterprises that want a centralized team to purchase reservations.
cost-management-billing Prepay App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-app-service.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
Your usage file shows your charges by billing period and daily usage. For inform
You can buy a Premium v3 reserved instance in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22VirtualMachines%22%7D). Pay for the reservation [up front or with monthly payments](prepare-buy-reservation.md). These requirements apply to buying a Premium v3 reserved instance:

- You must be in an Owner role for at least one EA subscription or a subscription with a pay-as-you-go rate.
-- For EA subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin for the subscription. Direct EA customers can now update **Reserved Instances** settings in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the Policies menu to change settings.
+- For EA subscriptions, the **Reserved Instances** option must be enabled in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the **Policies** menu to change settings.
- For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can buy reservations.

To buy an instance:
If you have an EA agreement, you can use the **Add more option** to quickly add
You can buy an Isolated v2 reserved instance in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22VirtualMachines%22%7D). Pay for the reservation [up front or with monthly payments](prepare-buy-reservation.md). These requirements apply to buying an Isolated v2 reserved instance:

- You must be in an Owner role for at least one EA subscription or a subscription with a pay-as-you-go rate.
-- For EA subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin for the subscription. Direct EA customers can now update **Reserved Instances** settings in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the Policies menu to change settings.
+- For EA subscriptions, the **Reserved Instances** policy option must be enabled in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the **Policies** menu to change settings.
- For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can buy reservations.

To buy an instance:
Buy Windows stamp reservations if you have one or more Windows workers on the st
You can buy Isolated Stamp reserved capacity in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22AppService%22%7D). Pay for the reservation [up front or with monthly payments](./prepare-buy-reservation.md). To buy reserved capacity, you must have the owner role for at least one enterprise subscription or an individual subscription with pay-as-you-go rates.

-- For Enterprise subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). Or, if the setting is disabled, you must be an EA Admin.
+- For Enterprise subscriptions, the **Reserved Instances** policy option must be enabled in the [Azure portal](../manage/direct-ea-administration.md#view-and-manage-enrollment-policies). Or, if the setting is disabled, you must be an EA Admin.
- For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Synapse Analytics reserved capacity.

**To Purchase:**
cost-management-billing Prepay Databricks Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-databricks-reserved-capacity.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
Before you buy, calculate the total DBU quantity consumed for different workload
You can buy Databricks plans in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22Databricks%22%7D). To buy reserved capacity, you must have the owner role for at least one enterprise or Microsoft Customer Agreement or an individual subscription with pay-as-you-go rates subscription, or the required role for CSP subscriptions.

- You must be in an Owner role for at least one Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or Microsoft Customer Agreement or an individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P).
-- For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin of the subscription to enable it. Direct EA customers can now update **Reserved Instance** setting on [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). Navigate to the Policies menu to change settings.
+- For Enterprise subscriptions, **Reserved Instances** policy option must be enabled in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). Navigate to the **Policies** menu to change settings.
- For CSP subscriptions, follow the steps in [Acquire, provision, and manage Azure reserved VM instances (RI) + server subscriptions for customers](/partner-center/azure-ri-server-subscriptions).
cost-management-billing Prepay Jboss Eap Integrated Support App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-jboss-eap-integrated-support-app-service.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
When you purchase a JBoss EAP Integrated Support reservation, the discount is au
You can buy a reservation for JBoss EAP Integrated Support in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22VirtualMachines%22%7D). Pay for the reservation [up front or with monthly payments](prepare-buy-reservation.md).

- You must be in an Owner role for at least one EA subscription or a subscription with a pay-as-you-go rate.
-- For EA subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin for the subscription.
+- For EA subscriptions, the **Reserved Instances** policy option must be enabled in the [Azure portal](../manage/direct-ea-administration.md#view-and-manage-enrollment-policies). Or, if that setting is disabled, you must be an EA Admin for the subscription.
- For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can buy reservations.

To buy an instance:
cost-management-billing Prepay Sql Data Warehouse Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-sql-data-warehouse-charges.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
For pricing information, see the [Azure Synapse Analytics reserved capacity offe
You can buy Azure Synapse Analytics reserved capacity in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/ReservationsBrowseBlade). Pay for the reservation [up front or with monthly payments](./prepare-buy-reservation.md). To buy reserved capacity:

- You must have the owner role for at least one enterprise, Pay-As-You-Go, or Microsoft Customer Agreement subscription.
-- For Enterprise subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). If the setting is disabled, you must be an EA Admin to enable it. Direct Enterprise customers can update the **Reserved Instances** setting in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). Navigate to the Policies menu to change settings.
+- For Enterprise subscriptions, the **Reserved Instances** policy option must be enabled in the [Azure portal](../manage/direct-ea-administration.md#view-and-manage-enrollment-policies). If the setting is disabled, you must be an EA Admin to enable it.
- For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can purchase Azure Synapse Analytics reserved capacity.
cost-management-billing Prepay Sql Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-sql-edge.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
When you prepay for your SQL Edge reserved capacity, you can save money over you
You can buy SQL Edge reserved capacity from the [Azure portal](https://portal.azure.com/). Pay for the reservation [up front or with monthly payments](prepare-buy-reservation.md). To buy reserved capacity:

- You must be in the Owner role for at least one Enterprise or individual subscription with pay-as-you-go rates.
-- For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin on the subscription.
+- For Enterprise subscriptions, **Reserved Instances** policy option must be enabled in the [Azure portal](../manage/direct-ea-administration.md#view-and-manage-enrollment-policies). Or, if that setting is disabled, you must be an EA Admin on the subscription.
- For the Cloud Solution Provider (CSP) program, only admin agents or sales agents can buy SQL Edge reserved capacity.

## Buy a software plan
cost-management-billing Synapse Analytics Pre Purchase Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/synapse-analytics-pre-purchase-plan.md
Previously updated : 03/28/2023 Last updated : 02/14/2024
For more information about available SCU tiers and pricing discounts, you use th
You buy Synapse plans in the [Azure portal](https://portal.azure.com). To buy a Pre-Purchase Plan, you must have the owner role for at least one enterprise or Microsoft Customer Agreement or an individual subscription with pay-as-you-go rates subscription, or the required role for CSP subscriptions.

- You must be in an Owner role for at least one Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or Microsoft Customer Agreement or an individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P).
-- For Enterprise Agreement (EA) subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin of the subscription.
+- For Enterprise Agreement (EA) subscriptions, the **Reserved Instances** policy option must be enabled in the [Azure portal](../manage/direct-ea-administration.md#view-and-manage-enrollment-policies). Or, if that setting is disabled, you must be an EA Admin of the subscription.
- For CSP subscriptions, follow the steps in [Acquire, provision, and manage Azure reserved VM instances (RI) + server subscriptions for customers](/partner-center/azure-ri-server-subscriptions).

### To Purchase:
cost-management-billing Understand Reserved Instance Usage Ea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reserved-instance-usage-ea.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
Information in the following table about metric and filter can help solve for co
## Download the EA usage CSV file with new data
-If you're an EA admin, you can download the CSV file that contains new usage data from Azure portal. This data isn't available from the EA portal (ea.azure.com), you must download the usage file from Azure portal (portal.azure.com) to see the new data.
+If you're an EA admin, you can download the CSV file that contains new usage data from the Azure portal.
In the Azure portal, navigate to [Cost management + billing](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/BillingAccounts).
cost-management-billing Understand Vm Software Reservation Discount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-vm-software-reservation-discount.md
When you prepay for your virtual machine software usage (available in the Azure
You can buy a virtual machine software reservation in the Azure portal. To buy a reservation:

- You must have the owner role for at least one Enterprise or individual subscription with pay-as-you-go pricing.
-- For Enterprise subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). If the setting is disabled, you must be an EA Admin for the subscription.
+- For Enterprise subscriptions, the **Reserved Instances** policy option must be enabled in the [Azure portal](../manage/direct-ea-administration.md#view-and-manage-enrollment-policies). If the setting is disabled, you must be an EA Admin for the subscription.
- For the Cloud Solution Provider (CSP) program, the admin agents or sales agents can buy the software plans.

## Need help? Contact us
cost-management-billing Charge Back Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/charge-back-costs.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
Information in the following table about metric and filter can help solve common
## Download the usage CSV file with new data
-If you're an EA admin, you can download the CSV file that contains new usage data from Azure portal. This data isn't available from the EA portal (ea.azure.com), you must download the usage file from Azure portal (portal.azure.com) to see the new data.
+If you're an EA admin, you can download the CSV file that contains new usage data from Azure portal.
In the Azure portal, navigate to [Cost management + billing](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/BillingAccounts).
cost-management-billing Renew Savings Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/renew-savings-plan.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
Renewal notification emails are sent 30 days before expiration and again on the
Emails are sent to different people depending on your purchase method:

-- EA customers - Emails are sent to the notification contacts set on the EA portal or Enterprise Administrators who are automatically enrolled to receive usage notifications.
+- EA customers - Emails are sent to the notification contacts set in the Azure portal or Enterprise Administrators who are automatically enrolled to receive usage notifications.
- MPA - No email notifications are currently sent for Microsoft Partner Agreement subscriptions.

## Need help? Contact us.
cost-management-billing Savings Plan Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md
Previously updated : 01/04/2024 Last updated : 02/14/2024
You can purchase savings from the [Azure portal](https://portal.azure.com/) and
## How to find products covered under a savings plan
-The complete list of savings plan eligible products is found in your price sheet, which can be downloaded from the [Azure portal](https://portal.azure.com). The EA portal price sheet doesn't include savings plan pricing. After you download the file, filter `Price Type` by `Savings Plan` to see the one-year and three-year prices.
+The complete list of savings plan eligible products is found in your price sheet, which can be downloaded from the [Azure portal](https://portal.azure.com). After you download the file, filter `Price Type` by `Savings Plan` to see the one-year and three-year prices.
## How is a savings plan billed?
cost-management-billing Utilization Cost Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/utilization-cost-reports.md
Previously updated : 11/17/2023 Last updated : 02/14/2024
To download your saving plan cost and usage file, use the information in the fol
### EA customers
-If you're an EA admin, you can download the CSV file that contains new cost data from the Azure portal. This data isn't available from the [EA portal](https://ea.azure.com/), you must download the cost file from Azure portal (portal.azure.com) to see the new data.
+If you're an EA admin, you can download the CSV file that contains new cost data from the Azure portal.
In the Azure portal, navigate to [Cost Management + Billing](https://portal.azure.com/#blade/Microsoft_Azure_Billing/ModernBillingMenuBlade/BillingAccounts).
cost-management-billing Create Sql License Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/create-sql-license-assignments.md
Title: Create SQL Server license assignments for Azure Hybrid Benefit
description: This article explains how to create SQL Server license assignments for Azure Hybrid Benefit. Previously updated : 04/23/2023 Last updated : 02/13/2024
The prerequisite roles differ depending on the agreement type.
| Agreement type | Required role | Supported offers |
| | | |
-| Enterprise Agreement | _Enterprise Administrator_<p> If you're an Enterprise admin with read-only access, you need your organization to give you **full** access to assign licenses using centrally managed Azure Hybrid Benefit. <p>If you're not an Enterprise admin, you must get assigned that role by your organization (with full access). For more information about how to become a member of the role, see [Add another enterprise administrator](../manage/ea-portal-administration.md#create-another-enterprise-administrator). | - MS-AZR-0017P (Microsoft Azure Enterprise)<br>- MS-AZR-USGOV-0017P (Azure Government Enterprise) |
+| Enterprise Agreement | _Enterprise Administrator_<p> If you're an Enterprise admin with read-only access, you need your organization to give you **full** access to assign licenses using centrally managed Azure Hybrid Benefit. <p>If you're not an Enterprise admin, you must get assigned that role by your organization (with full access). For more information about how to become a member of the role, see [Add another enterprise administrator](../manage/direct-ea-administration.md#add-another-enterprise-administrator). | - MS-AZR-0017P (Microsoft Azure Enterprise)<br>- MS-AZR-USGOV-0017P (Azure Government Enterprise) |
| Microsoft Customer Agreement | *Billing account owner*<br> *Billing account contributor* <br> *Billing profile owner*<br> *Billing profile contributor*<br> If you don't have one of the preceding roles, your organization must assign one to you. For more information about how to become a member of the roles, see [Manage billing roles](../manage/understand-mca-roles.md#manage-billing-roles-in-the-azure-portal). | MS-AZR-0017G (Microsoft Azure Plan)|
| WebDirect / Pay-as-you-go | Not available | None |
| CSP / Partner led customers | Not available | None |
cost-management-billing Overview Azure Hybrid Benefit Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/scope-level/overview-azure-hybrid-benefit-scope.md
To use centrally managed licenses, you must have a specific role assigned to you
If you're not an Enterprise admin, you need to contact one and either: - Have them give you the enterprise administrator role with full access. - Contact your Microsoft account team to have them identify your primary enterprise administrator.
- For more information about how to become a member of the role, see [Add another enterprise administrator](../manage/ea-portal-administration.md#create-another-enterprise-administrator).
+ For more information about how to become a member of the role, see [Add another enterprise administrator](../manage/direct-ea-administration.md#add-another-enterprise-administrator).
- Microsoft Customer Agreement - Billing account owner - Billing account contributor
cost-management-billing Ea Portal Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/ea-portal-troubleshoot.md
- Title: Troubleshoot Azure EA portal access
-description: This article describes some common issues that can occur with an Azure Enterprise Agreement (EA) in the Azure EA portal.
-- Previously updated : 12/16/2022------
-# Troubleshoot Azure EA portal access
-
-This article describes some common issues that can occur with an Azure Enterprise Agreement (EA). The Azure EA portal is used to manage enterprise agreement users and costs. You might come across these issues when you're configuring or updating Azure EA portal access.
-
-> [!NOTE]
-> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](../manage/ea-direct-portal-get-started.md).
->
-> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
->
-> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
-
-## Issues adding a user to an enrollment
-
-There are different types of authentication levels for enterprise enrollments. When authentication levels are applied incorrectly, you might have issues when you try to sign in to the Azure EA portal.
-
-You use the Azure EA portal to grant access to users with different authentication levels. An enterprise administrator can update the authentication level to meet security requirements of their organization.
-
-### Authentication level types
--- Microsoft Account Only - For organizations that want to use, create, and manage users through Microsoft accounts.-- Work or School Account - For organizations that have set up Microsoft Entra ID with federation to the cloud and all accounts are on a single tenant.-- Work or School Account Cross Tenant - For organizations that have set up Microsoft Entra ID with federation to the cloud and will have accounts in multiple tenants.-- Mixed Account - Allows you to add users with Microsoft Account and/or with a Work or School Account.-
-The first work or school account added to the enrollment determines the _default_ domain. To add a work or school account with another tenant, you must change the authentication level under the enrollment to cross-tenant authentication.
-
-To update the Authentication Level:
-
-1. Sign in to the Azure [EA portal](https://ea.azure.com/) as an Enterprise Administrator.
-2. Select **Manage** on the left navigation panel.
-3. Select the **Enrollment** tab.
-4. Under **Enrollment Details**, select **Auth Level**.
-5. Select the pencil symbol.
-6. Select **Save**.
-
-![Example showing authentication levels ](./media/ea-portal-troubleshoot/create-ea-authentication-level-types.png)
-
-Microsoft accounts must have an associated ID created at [https://signup.live.com](https://signup.live.com/).
-
-Work or school accounts are available to organizations that have set up Microsoft Entra ID with federation and where all accounts are on a single tenant. Users can be added with work or school federated user authentication if the company's internal Microsoft Entra ID is federated.
-
-If your organization doesn't use Microsoft Entra ID federation, you can't use your work or school email address. Instead, register or create a new email address and register it as a Microsoft account.
-
-## Unable to access the Azure EA portal
-
-If you get an error message when you try to sign in to the Azure EA portal, use the following troubleshooting steps:
--- Ensure that you're using the correct Azure EA portal URL, which is https://ea.azure.com.-- Determine if your access to the Azure EA portal was added as a work or school account or as a Microsoft account.
- - If you're using your work account, enter your work email and work password. Your work password is provided by your organization. You can check with your IT department about how to reset the password if you have issues with it.
- - If you're using a Microsoft account, enter your Microsoft account email address and password. If you've forgotten your Microsoft account password, you can reset it at [https://account.live.com/password/reset](https://account.live.com/password/reset).
-- Use an in-private or incognito browser session to sign in so that no cookies or cached information from previous or existing sessions are kept. Clear your browser's cache and use an in-private or incognito window to open https://ea.azure.com.-- If you get an _Invalid User_ error when using a Microsoft account, it might be because you have multiple Microsoft accounts. The one that you're trying to sign in with isn't the primary email address.
-Or, if you get an _Invalid User_ error, it might be because the wrong account type was used when the user was added to the enrollment. For example, a work or school account instead of a Microsoft account. In this example, you have another EA admin add the correct account or you need to contact [support](https://support.microsoft.com/supportforbusiness/productselection?sapId=cf791efa-485b-95a3-6fad-3daf9cd4027c).
- - If you need to check the primary alias, go to [https://account.live.com](https://account.live.com). Then, select **Your Info** and then select **Manage how to sign in to Microsoft**. Follow the prompts to verify an alternate email address and obtain a code to access sensitive information. Enter the security code. Select **Set it up later** if you don't want to set up two-factor authentication.
- - You see the **Manage how to sign in to Microsoft** page where you can view your account aliases. Check that the primary alias is the one that you're using to sign in to the Azure EA portal. If it isn't, you can make it your primary alias. Or, you can use the primary alias for Azure EA portal instead.
-
-## Next steps
--- Azure EA portal administrators should read [Azure EA portal administration](../manage/ea-portal-administration.md) to learn about common administrative tasks.-- Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions and answers about common issues for Azure EA Activation.
cost-management-billing Enterprise Mgmt Grp Troubleshoot Cost View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/enterprise-mgmt-grp-troubleshoot-cost-view.md
- Title: Troubleshoot Azure enterprise cost views
-description: Learn how to resolve any issues you might have with organizational cost views within the Azure portal.
----- Previously updated : 12/16/2022----
-# Troubleshoot enterprise cost views
-
-Within enterprise enrollments, there are several settings that could cause users within the enrollment to not see costs. These settings are managed by the enrollment administrator. Or, if the enrollment isn't bought directly through Microsoft, the settings are managed by the partner. This article helps you understand what the settings are and how they impact the enrollment. These settings are independent of the Azure roles.
-
-> [!NOTE]
-> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](../manage/ea-direct-portal-get-started.md).
->
-> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
->
-> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
-
-## Enable access to costs
-
-Are you seeing a message Unauthorized, or *"Cost views are disabled in your enrollment."* when looking for cost information?
-![Screenshot that shows "unauthorized" in Current Cost field for subscription.](./media/enterprise-mgmt-grp-troubleshoot-cost-view/unauthorized.png)
-
-It might be for one of the following reasons:
-
-1. You've bought Azure through an enterprise partner, and the partner didn't release pricing yet. Contact your partner to update the pricing setting within the [Enterprise portal](https://ea.azure.com).
-2. If you're an EA Direct customer, there are a couple of possibilities:
- * You're an Account Owner and your Enrollment Administrator disabled the **AO view charges** setting.
- * You're a Department Administrator and your Enrollment Administrator disabled the **DA view charges** setting.
- * Contact your Enrollment Administrator to get access. The Enrollment Admin can now update the settings in [Azure portal](https://portal.azure.com/). Navigate to **Policies** menu to change settings.
- * The Enrollment Admin can update the settings in the [Enterprise portal](https://ea.azure.com/manage/enrollment).
-
- ![Screenshot that shows the Enterprise Portal Settings for view charges.](./media/enterprise-mgmt-grp-troubleshoot-cost-view/ea-portal-settings.png)
-
-
-
-## Asset is unavailable
-
-If you get an error message stating **This asset is unavailable** when trying to access a subscription or management group, then you don't have the correct role to view this item.
-
-![Screenshot that shows "asset is unavailable" message.](./media/enterprise-mgmt-grp-troubleshoot-cost-view/asset-not-found.png)
-
-Ask your Azure subscription or management group administrator for access. For more information, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
-## Next steps
-- If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
cost-management-billing How To Create Azure Support Request Ea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/how-to-create-azure-support-request-ea.md
Title: How to create an Azure support request for an Enterprise Agreement issue description: Enterprise Agreement customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests. Previously updated : 04/05/2023 Last updated : 02/13/2024
You can get to **Help + support** in the Azure portal. It's available from the A
### Azure role-based access control
-To create a support request for an Enterprise Agreement, you must be an Enterprise Administrator or Partner Administrator associated with an enterprise enrollment.
+To create a support request for an Enterprise Agreement, you must be an Enterprise Administrator or Partner Administrator associated with an enterprise enrollment.
### Go to Help + support from the global header
We'll walk you through some steps to gather information about your problem and h
### Problem description
-1. Type a summary of your issue and then select **Issue type**.
-1. In the **Issue type** list, select **Enrollment administration** for EA portal related issues.
+1. Type a summary of your issue and then select **Issue type**.
+1. In the **Issue type** list, select **Enrollment administration** for Enterprise Agreement issues.
:::image type="content" source="./media/how-to-create-azure-support-request-ea/select-issue-type-enrollment-administration.png" alt-text="Screenshot showing Select Enrollment administration." lightbox="./media/how-to-create-azure-support-request-ea/select-issue-type-enrollment-administration.png" :::
-1. For **Enrollment number**, select the enrollment number.
+1. For **Enrollment number**, select the enrollment number.
:::image type="content" source="./media/how-to-create-azure-support-request-ea/select-enrollment.png" alt-text="Screenshot showing Select Enrollment number." ::: 1. For **Problem type**, select the issue category that best describes the type of problem that you have. :::image type="content" source="./media/how-to-create-azure-support-request-ea/select-problem-type.png" alt-text="Screenshot showing Select a problem type." :::
To create an Azure support ticket, an *organizational account* must have the EA
If you have an MSA, have an administrator create an organizational account for you. An enterprise administrator or partner administrator must then add your organizational account as an enterprise administrator or partner administrator. Then you can use your organizational account to file a support request. -- To add an Enterprise Administrator, see [Create another enterprise administrator](../manage/ea-portal-administration.md#create-another-enterprise-administrator).
+- To add an Enterprise Administrator, see [Add another enterprise administrator](../manage/direct-ea-administration.md#add-another-enterprise-administrator).
- To add a Partner Administrator, see [Manage partner administrators](../manage/ea-partner-portal-administration.md#manage-partner-administrators). ## Next steps
cost-management-billing Troubleshoot Azure Sign Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-subscription/troubleshoot-azure-sign-up.md
Title: Troubleshoot issues when you sign up for a new account in the Azure portal
-description: Resolving an issue when trying to sign up for a new account in the Microsoft Azure portal.
+description: Resolving an issue when trying to sign up for a new account in the Azure portal.
# Troubleshoot issues when you sign up for a new account in the Azure portal
-You might experience an issue when you try to sign up for a new account in the Microsoft Azure portal. This short guide walks you through the sign-up process and discusses some common issues at each step.
+You might experience an issue when you try to sign up for a new account in the Azure portal. This short guide walks you through the sign-up process and discusses some common issues at each step.
> [!NOTE] > If you already have an existing account and are looking for guidance to troubleshoot sign-in issues, see [Troubleshoot Azure subscription sign-in issues](troubleshoot-sign-in-issue.md).
cost-management-billing Download Azure Daily Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-daily-usage.md
Previously updated : 05/17/2023 Last updated : 02/14/2024 # View and download your Azure usage and charges
If you want to get cost and usage data using the Azure CLI, see [Get usage data
To view and download usage data as an EA customer, you must be an Enterprise Administrator, Account Owner, or Department Admin with the view charges policy enabled.
-> [!NOTE]
-> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](../manage/ea-direct-portal-get-started.md).
->
-> As of February 20, 2023 indirect EA customers won't be able to manage their billing account in the EA portal. Instead, they must use the Azure portal.
->
-> This change doesn't affect Azure Government EA enrollments. They continue using the EA portal to manage their enrollment.
- 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for *Cost Management + Billing*. ![Screenshot shows Azure portal search.](./media/download-azure-daily-usage/portal-cm-billing-search.png)
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
Previously updated : 03/24/2023 Last updated : 02/14/2024
You can download your invoice in the [Azure portal](https://portal.azure.com/) or have it sent in email. Invoices are sent to the person set to receive invoices for the enrollment.
-If you're an Azure customer with an Enterprise Agreement (EA customer), only an EA administrator can download and view your organization's invoice. Direct EA administrators can [Download or view their Azure billing invoice](../manage/direct-ea-azure-usage-charges-invoices.md#download-or-view-your-azure-billing-invoice). Indirect EA administrators can use the information at [Azure Enterprise enrollment invoices](../manage/ea-portal-enrollment-invoices.md) to download their invoice.
+If you're an Azure customer with an Enterprise Agreement (EA customer), only an EA administrator can download and view your organization's invoice. Direct EA administrators can [Download or view their Azure billing invoice](../manage/direct-ea-azure-usage-charges-invoices.md#download-or-view-your-azure-billing-invoice). Indirect EA administrators can use the information at [Azure Enterprise enrollment invoices](../manage/direct-ea-billing-invoice-documents.md) to download their invoice.
## Where invoices are generated
cost-management-billing Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/plan-manage-costs.md
Previously updated : 04/05/2023 Last updated : 02/14/2024
Use the Azure [billing](/rest/api/billing/) and [Cost Management automation APIs
## <a name="other-offers"></a> Additional resources and special cases
-### EA, CSP, and Sponsorship customers
+### CSP and Sponsorship customers
Talk to your account manager or Azure partner to get started.

| Offer | Resources |
|-|--|
-| Enterprise Agreement (EA) | [EA portal](https://ea.azure.com/), [help docs](https://ea.azure.com/helpdocs), and [Power BI report](https://powerbi.microsoft.com/documentation/powerbi-content-pack-azure-enterprise/) |
| Cloud Solution Provider (CSP) | Talk to your provider |
| Azure Sponsorship | [Sponsorship portal](https://www.microsoftazuresponsorships.com/) |
If you're managing IT for a large organization, we recommend reading [Azure ente
Enterprise cost views are currently in Public Preview. Items to note: - Subscription costs are based on usage and don't include prepaid amounts, overages, included quantities, adjustments, and taxes. Actual charges are computed at the Enrollment level.-- Amounts shown in the Azure portal might be different than what's in the Enterprise portal. Updates in the Enterprise portal may take a few minutes before the changes are shown in the Azure portal. - If you aren't seeing costs, it might be for one of the following reasons: - You don't have permissions at the subscription level. To see enterprise cost views, you must be a Billing Reader, Reader, Contributor, or Owner at the subscription level. - You're an Account Owner and your Enrollment Administrator has disabled the "AO view charges" setting. Contact your Enrollment Administrator to get access to costs. - You're a Department Administrator and your Enrollment Administrator has disabled the **DA view charges** setting. Contact your Enrollment Administrator to get access. - You bought Azure through a channel partner, and the partner didn't release pricing information. -- If you update settings related to cost, access in the Enterprise portal, there's a delay of a few minutes before the changes are shown in the Azure portal. - Direct EA customers can update cost-related settings in the [Azure portal](https://portal.azure.com/). Navigate to the Policies menu to change settings. - Spending limit, and invoice guidance don't apply to EA Subscriptions.
cost-management-billing Review Enterprise Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-enterprise-agreement-bill.md
See [Power BI self-service sign up](https://powerbi.microsoft.com/documentation/
> [!TIP] >
-> - To learn how to generate the API key for your enrollment, see the API Reports help file on the [Enterprise portal](https://ea.azure.com/?WT.mc_id=azurebg_email_Trans_33675_1378_Service_Notice_EA_Customer_Power_BI_EA_Content_Pack_Apr26).
+> - For more information about API keys, see [API key generation](../manage/enterprise-rest-apis.md#api-key-generation).
> - For more information about connecting Power BI to your Azure consumption, see [Microsoft Azure Consumption Insights](/power-bi/desktop-connect-azure-cost-management). ### To access the legacy Power BI EA content pack:
See [Power BI self-service sign up](https://powerbi.microsoft.com/documentation/
- Number of Months: from 1 to 36 - Enrollment Number: your enrollment number 1. Select **Next**.
-1. In **Authentication Key Box**, enter the API key.
+1. In the **Authentication Key** box, enter the API key.
- You can get the API key in the Azure Enterprise portal under the **Download Usage** tab. Select **API Access Key**, and then paste the key into the **Account Key** box.
+ You can get the API key in the Azure portal. For more information about API keys, see [API key generation](../manage/enterprise-rest-apis.md#api-key-generation).
1. Data takes approximately 5-30 minutes to load in Power BI, depending on the size of the data sets. ## Next steps
cost-management-billing Understand Azure Marketplace Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-azure-marketplace-charges.md
You can view a list of the external services that are on each subscription withi
## External spending for EA customers
-EA customers can see external service spending and download reports in the EA portal. See [Azure Marketplace for EA Customers](https://ea.azure.com/helpdocs/azureMarketplace) to get started.
-
-Direct EA customers can see external service spending in the [Azure portal](https://portal.azure.com). Navigate to the Usage + charges menu to view and download Azure Marketplace charges.
+EA customers can see external service spending in the [Azure portal](https://portal.azure.com). Navigate to the Usage + charges menu to view and download Azure Marketplace charges.
## View and download invoices for external services
data-factory Azure Integration Runtime Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-integration-runtime-ip-addresses.md
The IP addresses that Azure Integration Runtime uses depends on the region where
Allow traffic from the IP addresses listed for the Azure Integration runtime in the specific Azure region where your resources are located. You can get an IP range list of service tags from the [service tags IP range download link](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files). For example, if the Azure region is **AustraliaEast**, you can get an IP range list from **DataFactory.AustraliaEast**.
+> [!NOTE]
+> Azure Data Factory IP range is shared across Fabric Data Factory.
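As an illustration only, a small Node.js sketch like the following could pull the address prefixes for a given tag out of the Service Tags download mentioned above. The file name and the `values`/`properties.addressPrefixes` layout are assumptions based on the downloadable JSON, so verify them against your copy.

```javascript
const fs = require("fs");

// Assumed file name for the downloaded Service Tags JSON; adjust to match your download.
const serviceTags = JSON.parse(fs.readFileSync("ServiceTags_Public.json", "utf8"));

// Pick the Data Factory tag for your region, for example DataFactory.AustraliaEast.
const tagName = "DataFactory.AustraliaEast";
const tag = serviceTags.values.find((entry) => entry.name === tagName);

if (tag) {
  // Print the IP ranges you need to allow.
  console.log(tag.properties.addressPrefixes.join("\n"));
} else {
  console.log(`Tag ${tagName} not found in the file.`);
}
```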
## Known issue with Azure Storage
ddos-protection Manage Ddos Protection Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-bicep.md
Previously updated : 10/12/2022 Last updated : 02/14/2024 # Quickstart: Create and configure Azure DDoS Network Protection using Bicep
defender-for-cloud Onboard Machines With Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/onboard-machines-with-defender-for-endpoint.md
+ # Connect your non-Azure machines to Microsoft Defender for Cloud with Defender for Endpoint Defender for Cloud allows you to directly onboard your non-Azure servers by deploying the Defender for Endpoint agent. This provides protection for both your cloud and non-cloud assets under a single, unified offering.
Direct onboarding is a seamless integration between Defender for Endpoint and De
## Enabling direct onboarding
-Enabling direct onboarding is an opt-in setting at the tenant level. It affects both existing and new servers onboarded to Defender for Endpoint in the same Microsoft Entra tenant. Shortly after enabling this setting, your server devices will show under the designated subscription. Alerts, software inventory, and vulnerability data are integrated with Defender for Cloud, in a similar way to how it works with Azure VMs.
+Enabling direct onboarding is an opt-in setting at the tenant level. It affects both existing and new servers onboarded to Defender for Endpoint in the same Microsoft Entra tenant. Shortly after you enable this setting, your server devices will show under the designated subscription. Alerts, software inventory, and vulnerability data are integrated with Defender for Cloud, in a similar way to how it works with Azure VMs.
Before you begin:
Before you begin:
1. Go to **Defender for Cloud** > **Environment Settings** > **Direct onboarding**. 1. Switch the **Direct onboarding** toggle to **On**.
-1. Select the subscription you would like to use for servers onboarded directly with Defender for Endpoint
+1. Select the subscription you would like to use for servers onboarded directly with Defender for Endpoint.
1. Select **Save**. :::image type="content" source="media/onboard-machines-with-defender-for-endpoint/onboard-with-defender-for-endpoint.png" alt-text="Screenshot of Onboard non-Azure servers with Defender for Endpoint.":::
-You've now successfully enabled direct onboarding on your tenant. After you enable it for the first time, it might take up to 24 hours to see your non-Azure servers in your designated subscription.
+You have now successfully enabled direct onboarding on your tenant. After you enable it for the first time, it might take up to 24 hours to see your non-Azure servers in your designated subscription.
### Deploying Defender for Endpoint on your servers
Deploying the Defender for Endpoint agent on your on-premises Windows and Linux
## Current limitations - **Plan support**: Direct onboarding provides access to all Defender for Servers Plan 1 features. However, certain features in Plan 2 still require the deployment of the Azure Monitor Agent, which is only available with Azure Arc on non-Azure machines. If you enable Plan 2 on your designated subscription, machines onboarded directly with Defender for Endpoint have access to all Defender for Servers Plan 1 features and the Defender Vulnerability Management Addon features included in Plan 2.- - **Multi-cloud support**: You can directly onboard VMs in AWS and GCP using the Defender for Endpoint agent. However, if you plan to simultaneously connect your AWS or GCP account to Defender for Servers using multicloud connectors, it's currently still recommended to deploy Azure Arc.
+- **Simultaneous onboarding limited support**: For servers simultaneously onboarded using multiple methods (for example, direct onboarding combined with Log Analytics workspace-based onboarding), Defender for Cloud makes every effort to correlate them into a single device representation. However, devices using older versions of Defender for Endpoint might face certain limitations. In some instances, this could result in overcharges. We generally advise using the latest agent version. Specifically, for this limitation, ensure your Defender for Endpoint agent versions meet or exceed these minimum versions:
-- **Simultaneous onboarding limited support**: Defender for Cloud makes a best effort to correlate servers onboarded using multiple billing methods. However, in certain server deployment use cases, there might be limitations where Defender for Cloud is unable to correlate your machines. This might result in overcharges on certain devices if direct onboarding is also enabled on your tenant.-
- The following are deployment use cases currently with this limitation when used with direct onboarding of your tenant:
-
- | Location | Deployment use case |
- | | |
- | All | <u>Windows 2012, 2016:</u> <br />Azure VMs or Azure Arc machines already onboarded and billed by Defender for Servers via an Azure subscription or Log Analytics workspace, running the Defender for Endpoint modern unified agent without the MDE.Windows Azure extension. For such machines, you can enable Defender for Cloud integration with Defender for Endpoint to deploy the extension. |
- | On-premises (not running Azure Arc) | <u>Windows Server 2012, 2016</u>: <br />Servers running the Defender for Endpoint modern unified agent, and already billed by Defender for Servers P2 via the Log Analytics workspace |
- | AWS, GCP (not running Azure Arc) | <u>Windows Server 2012, 2016</u>: <br />AWS or GCP VMs using the modern unified Defender for Endpoint solution, already onboarded and billed by Defender for Servers via multicloud connectors, Log Analytics workspace, or both. |
-
- Note: For Windows 2019 and above and Linux, agent version updates have been already released to support simultaneous onboarding without limitations. For Windows - use agent version 10.8555.X and above, For Linux - use agent version 30.101.23052.009 and above.
+ |Operating System|Minimum agent version|
+ | -- | -- |
+ |Windows 2019| 10.8555|
+ |Windows 2012 R2, 2016 (modern, unified agent)|10.8560|
+ |Linux|30.101.23052.009|
## Next steps
defender-for-cloud Other Threat Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/other-threat-protections.md
Title: Other threat protections
-description: Learn about the threat protections available from Microsoft Defender for Cloud
+description: Learn about the threat protections available from Microsoft Defender for Cloud.
Last updated 05/22/2023
For a list of the Azure network layer alerts, see the [Reference table of alerts
Azure Application Gateway offers a web application firewall (WAF) that provides centralized protection of your web applications from common exploits and vulnerabilities.
-Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities. The Application Gateway WAF is based on Core Rule Set 3.2 or higher from the Open Web Application Security Project. The WAF is updated automatically to protect against new vulnerabilities.
+Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities. The Application Gateway WAF is based on Core Rule Set 3.2 or higher from the Open Web Application Security Project. The WAF is updated automatically to protect against new vulnerabilities.
-If you have created [WAF Security solution](partner-integration.md#add-data-sources), your WAF alerts are streamed to Defender for Cloud with no other configurations. For more information on the alerts generated by WAF, see [Web application firewall CRS rule groups and rules](../web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md?tabs=owasp31#crs911-31).
+If you created a [WAF Security solution](partner-integration.md#add-data-sources), your WAF alerts are streamed to Defender for Cloud with no other configurations. For more information on the alerts generated by WAF, see [Web application firewall CRS rule groups and rules](../web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md?tabs=owasp31#crs911-31).
> [!NOTE] > Only WAF v1 is supported and will work with Microsoft Defender for Cloud.
-To deploy Azure's Application Gateway WAF, do the following:
+To deploy Azure's Application Gateway WAF, follow these steps:
1. From the Azure portal, open **Defender for Cloud**.
To deploy Azure's Application Gateway WAF, do the following:
1. In the **Add data sources** section, select **Add** for Azure's Application Gateway WAF.
- :::image type="content" source="media/other-threat-protections/deploy-azure-waf.png" alt-text="Screenshot showing where to select add to deploy WAF." lightbox="media/other-threat-protections/deploy-azure-waf.png":::
-
+ :::image type="content" source="media/other-threat-protections/deploy-azure-waf.png" alt-text="Screenshot showing where to select add to deploy WAF." lightbox="media/other-threat-protections/deploy-azure-waf.png":::
<a name="azure-ddos"></a>
If you have Azure DDoS Protection enabled, your DDoS alerts are streamed to Defe
## Microsoft Entra Permissions Management (formerly Cloudknox)
-[Microsoft Entra Permissions Management](../active-directory/cloud-infrastructure-entitlement-management/index.yml) is a cloud infrastructure entitlement management (CIEM) solution. Microsoft Entra Permission Management provides comprehensive visibility and control over permissions for any identity and any resource in Azure, AWS, and GCP.
-
+[Microsoft Entra Permissions Management](../active-directory/cloud-infrastructure-entitlement-management/index.yml) is a cloud infrastructure entitlement management (CIEM) solution. Microsoft Entra Permission Management provides comprehensive visibility and control over permissions for any identity and any resource in Azure, AWS, and GCP.
+ As part of the integration, each onboarded Azure subscription, AWS account, and GCP project gives you a view of your [Permission Creep Index (PCI)](../active-directory/cloud-infrastructure-entitlement-management/ui-dashboard.md). The PCI is an aggregated metric that periodically evaluates the level of risk associated with the number of unused or excessive permissions across identities and resources. PCI measures how risky identities can potentially be, based on the permissions available to them. ## Next steps+ To learn more about the security alerts from these threat protection features, see the following articles: - [Reference table for all Defender for Cloud alerts](alerts-reference.md)
defender-for-cloud Overview Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/overview-page.md
Title: Review cloud security posture in Microsoft Defender for Cloud
-description: Learn about cloud security posture in Microsoft Defender for Cloud
+description: Learn about cloud security posture in Microsoft Defender for Cloud.
Last updated 11/02/2023
-# Review cloud security posture
+# Review cloud security posture
-Microsoft Defender for Cloud provides a unified view into the security posture of hybrid cloud workloads with the
-interactive **Overview** dashboard. Select any element on the dashboard to get more information.
+Microsoft Defender for Cloud provides a unified view into the security posture of hybrid cloud workloads with the interactive **Overview** dashboard. Select any element on the dashboard to get more information.
:::image type="content" source="./media/overview-page/overview-07-2023.png" alt-text="Screenshot of Defender for Cloud's overview page." lightbox="./media/overview-page/overview-07-2023.png"::: ## Metrics - The **top menu bar** offers: -- **Subscriptions** - You can view and filter the list of subscriptions by selecting this button. Defender for Cloud will adjust the display to reflect the security posture of the selected subscriptions.
+- **Subscriptions** - You can view and filter the list of subscriptions by selecting this button. Defender for Cloud adjusts the display to reflect the security posture of the selected subscriptions.
- **What's new** - Opens the [release notes](release-notes.md) so you can keep up to date with new features, bug fixes, and deprecated functionality. - **High-level numbers** for the connected cloud accounts, showing the context of the information in the main tiles, and the number of assessed resources, active recommendations, and security alerts. Select the assessed resources number to access [Asset inventory](asset-inventory.md). Learn more about connecting your [AWS accounts](quickstart-onboard-aws.md) and your [GCP projects](quickstart-onboard-gcp.md).
The center of the page displays the **feature tiles**, each linking to a high pr
- **Security posture** - Defender for Cloud continually assesses your resources, subscriptions, and organization for security issues. It then aggregates all the findings into a single score so that you can understand, at a glance, your current security situation: the higher the score, the lower the identified risk level. [Learn more](secure-score-security-controls.md). - **Workload protections** - This is the cloud workload protection platform (CWPP) integrated within Defender for Cloud for advanced, intelligent protection of your workloads running on Azure, on-premises machines, or other cloud providers. For each resource type, there's a corresponding Microsoft Defender plan. The tile shows the coverage of your connected resources (for the currently selected subscriptions) and the recent alerts, color-coded by severity. Learn more about [the Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).-- **Regulatory compliance** - Based on continuous assessments of your hybrid and multi-cloud resources, Defender for Cloud provides insights into your compliance with the standards that matter to your organization. Defender for Cloud analyzes risk factors in your environment according to security best practices. These assessments are mapped to compliance controls from a supported set of standards. [Learn more](regulatory-compliance-dashboard.md).-- **Inventory** - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you've connected to Microsoft Defender for Cloud. All resources with unresolved security recommendations are shown in the inventory. If you've enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for Servers, you'll also have access to a software inventory. The tile on the overview page shows you at a glance the total healthy and unhealthy resources (for the currently selected subscriptions). [Learn more](asset-inventory.md).
+- **Regulatory compliance** - Based on continuous assessments of your hybrid and multicloud resources, Defender for Cloud provides insights into your compliance with the standards that matter to your organization. Defender for Cloud analyzes risk factors in your environment according to security best practices. These assessments are mapped to compliance controls from a supported set of standards. [Learn more](regulatory-compliance-dashboard.md).
+- **Inventory** - The asset inventory page of Microsoft Defender for Cloud provides a single page for viewing the security posture of the resources you connected to Microsoft Defender for Cloud. All resources with unresolved security recommendations are shown in the inventory. If you enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for Servers, you also have access to a software inventory. The tile on the overview page shows you at a glance the total healthy and unhealthy resources (for the currently selected subscriptions). [Learn more](asset-inventory.md).
## Insights
The **Insights** pane offers customized items for your environment including:
## Next steps - [Learn more](concept-cloud-security-posture-management.md) about cloud security posture management.-- [Learn more](security-policy-concept.md) about security standards and recommendations-- [Review your asset inventory](asset-inventory.md)-
+- [Learn more](security-policy-concept.md) about security standards and recommendations.
+- [Review your asset inventory](asset-inventory.md).
defender-for-cloud Plan Defender For Servers Select Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-select-plan.md
A couple of vulnerability assessment options are available in Defender for Serve
## Next steps After you work through these planning steps, [review Azure Arc and agent and extension requirements](plan-defender-for-servers-agents.md).-
defender-for-cloud Plan Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers.md
The following table shows an overview of the Defender for Servers deployment pro
| Enable Defender for Servers | • When you enable a paid plan, Defender for Cloud enables the *Security* solution on its default workspace.<br /><br />• Enable Defender for Servers Plan 1 (subscription only) or Plan 2 (subscription and workspace).<br /><br />• After enabling a plan, decide how you want to install agents and extensions on Azure VMs in the subscription or workgroup.<br /><br />• By default, auto-provisioning is enabled for some extensions. |
| Protect AWS/GCP machines | • For a Defender for Servers deployment, you set up a connector, turn off plans you don't need, configure auto-provisioning settings, authenticate to AWS/GCP, and deploy the settings.<br /><br />• Auto-provisioning includes the agents used by Defender for Cloud and the Azure Connected Machine agent for onboarding to Azure with Azure Arc.<br /><br />• AWS uses a CloudFormation template.<br /><br />• GCP uses a Cloud Shell template.<br /><br />• Recommendations start appearing in the portal. |
| Protect on-premises servers | • Onboard them as Azure Arc machines and deploy agents with automation provisioning. |
-| Foundational CSPM | • There are no charges when you use foundational CSPM with no plans enabled.<br /><br />• AWS/GCP machines don't need to be set up with Azure Arc for foundational CSPM. On-premises machines do.<br /><br />• Some foundational recommendations rely only agents: Antimalware / endpoint protection (Log Analytics agent or Azure Monitor agent) \| OS baselines recommendations (Log Analytics agent or Azure Monitor agent and Guest Configuration extension) \|
+| Foundational CSPM | • There are no charges when you use foundational CSPM with no plans enabled.<br /><br />• AWS/GCP machines don't need to be set up with Azure Arc for foundational CSPM. On-premises machines do.<br /><br />• Some foundational recommendations rely only on agents: Antimalware / endpoint protection (Log Analytics agent or Azure Monitor agent) \| OS baselines recommendations (Log Analytics agent or Azure Monitor agent and Guest Configuration extension) \||
- Learn more about [foundational cloud security posture management (CSPM)](concept-cloud-security-posture-management.md). - Learn more about [Azure Arc](../azure-arc/index.yml) onboarding.
defender-for-cloud Plan Multicloud Security Determine Ownership Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-ownership-requirements.md
Depending on the size of your organization, separate teams will manage [security
| Security function | Details |
|---|---|
-|[Security Operations (SecOps)](/azure/cloud-adoption-framework/organize/cloud-security-operations-center) | Reducing organizational risk by reducing the time in which bad actors have access to corporate resources. Reactive detection, analysis, response and remediation of attacks. Proactive threat hunting.
+|[Security Operations (SecOps)](/azure/cloud-adoption-framework/organize/cloud-security-operations-center) | Reducing organizational risk by reducing the time in which bad actors have access to corporate resources. Reactive detection, analysis, response and remediation of attacks. Proactive threat hunting. |
| [Security architecture](/azure/cloud-adoption-framework/organize/cloud-security-architecture)| Security design summarizing and documenting the components, tools, processes, teams, and technologies that protect your business from risk.| |[Security compliance management](/azure/cloud-adoption-framework/organize/cloud-security-compliance-management)| Processes that ensure the organization is compliant with regulatory requirements and internal policies.| |[People security](/azure/cloud-adoption-framework/organize/cloud-security-people)|Protecting the organization from human risk to security.|
Depending on the size of your organization, separate teams will manage [security
|[Identity and key management](/azure/cloud-adoption-framework/organize/cloud-security-identity-keys)|Authenticating and authorizing users, services, devices, and apps. Provide secure distribution and access for cryptographic operations.| |[Threat intelligence](/azure/cloud-adoption-framework/organize/cloud-security-threat-intelligence)| Making decisions and acting on security threat intelligence that provides context and actionable insights on active attacks and potential threats.| |[Posture management](/azure/cloud-adoption-framework/organize/cloud-security-posture-management)|Continuously reporting on, and improving, your organizational security posture.|
-|[Incident preparation](/azure/cloud-adoption-framework/organize/cloud-security-incident-preparation)|Building tools, processes, and expertise to respond to security incidents.
+|[Incident preparation](/azure/cloud-adoption-framework/organize/cloud-security-incident-preparation)|Building tools, processes, and expertise to respond to security incidents. |
## Team alignment
defender-for-cloud Sql Azure Vulnerability Assessment Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/sql-azure-vulnerability-assessment-enable.md
Last updated 05/18/2023
-tags: azure-synapse
# Enable vulnerability assessment on your Azure SQL databases
event-hubs Compare Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/compare-tiers.md
Title: Compare Azure Event Hubs tiers description: This article compares supported tiers of Azure Event Hubs. Previously updated : 10/19/2022 Last updated : 02/15/2024 # Compare Azure Event Hubs tiers
event-hubs Event Hubs Node Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-node-get-started-send.md
This quickstart shows how to send events to and receive events from an event hub
## Prerequisites
-If you are new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md) before you do this quickstart.
+If you're new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md) before you do this quickstart.
To complete this quickstart, you need the following prerequisites:
To complete this quickstart, you need the following prerequisites:
- Visual Studio Code (recommended) or any other integrated development environment (IDE). - **Create an Event Hubs namespace and an event hub**. The first step is to use the [Azure portal](https://portal.azure.com) to create a namespace of type Event Hubs, and obtain the management credentials your application needs to communicate with the event hub. To create a namespace and an event hub, follow the procedure in [this article](event-hubs-create.md).
-### Install the npm package(s) to send events
+### Install npm packages to send events
To install the [Node Package Manager (npm) package for Event Hubs](https://www.npmjs.com/package/@azure/event-hubs), open a command prompt that has *npm* in its path, and then change the directory to the folder where you want to keep your samples.
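For example, for the passwordless option shown later in this quickstart, the install command might look like the following (the identity package supplies `DefaultAzureCredential`):

```bash
npm install @azure/event-hubs @azure/identity
```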
In this section, you create a JavaScript application that sends events to an eve
## [Passwordless (Recommended)](#tab/passwordless) In the code, use real values to replace the following placeholders:
- * `EVENT HUBS RESOURCE NAME`
+ * `EVENT HUBS NAMESPACE NAME`
* `EVENT HUB NAME` ```javascript
In this section, you create a JavaScript application that sends events to an eve
const { DefaultAzureCredential } = require("@azure/identity"); // Event hubs
- const eventHubsResourceName = "EVENT HUBS RESOURCE NAME";
+ const eventHubsResourceName = "EVENT HUBS NAMESPACE NAME";
const fullyQualifiedNamespace = `${eventHubsResourceName}.servicebus.windows.net`; const eventHubName = "EVENT HUB NAME";
In this section, you create a JavaScript application that sends events to an eve
> [!NOTE] > For the complete source code, including additional informational comments, go to the [GitHub sendEvents.js page](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/event-hubs/samples/v5/javascript/sendEvents.js).
-Congratulations! You have now sent events to an event hub.
+ You have now sent events to an event hub.
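For reference, a minimal sketch of the passwordless send path, combining the fragments shown above, might look like the following. The namespace and event hub names are placeholders to replace with your own values; this is not a substitute for the full sample linked in the note above.

```javascript
const { EventHubProducerClient } = require("@azure/event-hubs");
const { DefaultAzureCredential } = require("@azure/identity");

// Placeholders: replace with your namespace and event hub names.
const eventHubsResourceName = "EVENT HUBS NAMESPACE NAME";
const fullyQualifiedNamespace = `${eventHubsResourceName}.servicebus.windows.net`;
const eventHubName = "EVENT HUB NAME";

async function main() {
  // DefaultAzureCredential picks up your Azure CLI, environment, or managed identity sign-in.
  const producer = new EventHubProducerClient(
    fullyQualifiedNamespace,
    eventHubName,
    new DefaultAzureCredential()
  );

  // Add events to a batch and send the batch to the event hub.
  const batch = await producer.createBatch();
  batch.tryAdd({ body: "First event" });
  batch.tryAdd({ body: "Second event" });
  batch.tryAdd({ body: "Third event" });
  await producer.sendBatch(batch);

  await producer.close();
  console.log("A batch of three events has been sent to the event hub");
}

main().catch((err) => {
  console.error("Error occurred: ", err);
});
```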
## Receive events
To create an Azure storage account and a blob container in it, do the following
## [Connection String](#tab/connection-string)
-[Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md)
+[Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md).
Note the connection string and the container name. You'll use them in the receive code.
npm install @azure/eventhubs-checkpointstore-blob
### [Passwordless (Recommended)](#tab/passwordless) In the code, use real values to replace the following placeholders:
- - `EVENT HUBS RESOURCE NAME`
+ - `EVENT HUBS NAMESPACE NAME`
- `EVENT HUB NAME` - `STORAGE ACCOUNT NAME` - `STORAGE CONTAINER NAME`
npm install @azure/eventhubs-checkpointstore-blob
const { BlobCheckpointStore } = require("@azure/eventhubs-checkpointstore-blob"); // Event hubs
- const eventHubsResourceName = "EVENT HUBS RESOURCE NAME";
+ const eventHubsResourceName = "EVENT HUBS NAMESPACE NAME";
const fullyQualifiedNamespace = `${eventHubsResourceName}.servicebus.windows.net`; const eventHubName = "EVENT HUB NAME"; const consumerGroup = "$Default"; // name of the default consumer group
npm install @azure/eventhubs-checkpointstore-blob
> [!NOTE] > For the complete source code, including additional informational comments, go to the [GitHub receiveEventsUsingCheckpointStore.js page](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventhub/eventhubs-checkpointstore-blob/samples/v1/javascript/receiveEventsUsingCheckpointStore.js).
-Congratulations! You have now received events from your event hub. The receiver program will receive events from all the partitions of the default consumer group in the event hub.
+You have now received events from your event hub. The receiver program will receive events from all the partitions of the default consumer group in the event hub.
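For reference, a minimal sketch of the passwordless receive path with blob checkpointing might look like the following. The resource names are placeholders, and the subscription runs only for a short time before shutting down; see the full sample linked in the note above for the complete version.

```javascript
const { DefaultAzureCredential } = require("@azure/identity");
const { EventHubConsumerClient, earliestEventPosition } = require("@azure/event-hubs");
const { ContainerClient } = require("@azure/storage-blob");
const { BlobCheckpointStore } = require("@azure/eventhubs-checkpointstore-blob");

// Placeholders: replace with your own resource names.
const eventHubsResourceName = "EVENT HUBS NAMESPACE NAME";
const fullyQualifiedNamespace = `${eventHubsResourceName}.servicebus.windows.net`;
const eventHubName = "EVENT HUB NAME";
const consumerGroup = "$Default";
const storageAccountName = "STORAGE ACCOUNT NAME";
const containerName = "STORAGE CONTAINER NAME";

async function main() {
  const credential = new DefaultAzureCredential();

  // The checkpoint store records how far each partition has been read.
  const containerClient = new ContainerClient(
    `https://${storageAccountName}.blob.core.windows.net/${containerName}`,
    credential
  );
  const checkpointStore = new BlobCheckpointStore(containerClient);

  const consumerClient = new EventHubConsumerClient(
    consumerGroup,
    fullyQualifiedNamespace,
    eventHubName,
    credential,
    checkpointStore
  );

  const subscription = consumerClient.subscribe(
    {
      processEvents: async (events, context) => {
        for (const event of events) {
          console.log(`Received event: '${event.body}' from partition '${context.partitionId}'`);
        }
        // Persist the read position so a restart resumes where it left off.
        if (events.length > 0) {
          await context.updateCheckpoint(events[events.length - 1]);
        }
      },
      processError: async (err, context) => {
        console.error(`Error on partition "${context.partitionId}": ${err.message}`);
      },
    },
    { startPosition: earliestEventPosition }
  );

  // Stop after 30 seconds for this sketch.
  await new Promise((resolve) => setTimeout(resolve, 30000));
  await subscription.close();
  await consumerClient.close();
}

main().catch((err) => {
  console.error("Error occurred: ", err);
});
```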
## Next steps Check out these samples on GitHub:
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-premium-overview.md
Title: Overview of Event Hubs Premium
-description: This article provides an overview of Azure Event Hubs Premium, which offers multi-tenant deployments of Event Hubs for high-end streaming needs.
+description: This article provides an overview of Azure Event Hubs Premium, which offers multitenant deployments of Event Hubs for high-end streaming needs.
Previously updated : 09/20/2022 Last updated : 02/15/2024 # Overview of Event Hubs Premium
-The Event Hubs Premium (premium tier) is designed for high-end streaming scenarios that require elastic, superior performance with predictable latency. The performance is achieved by providing reserved compute, memory, and storage resources, which minimize cross-tenant interference in a managed multi-tenant PaaS environment.
+The Event Hubs Premium (premium tier) is designed for high-end streaming scenarios that require elastic, superior performance with predictable latency. The premium tier provides reserved compute, memory, and storage resources, which minimize cross-tenant interference in a managed multitenant PaaS environment.
It replicates events to three replicas, distributed across Azure availability zones where available. All replicas are synchronously flushed to the underlying fast storage before the send operation is reported as completed. Events that aren't read immediately or that need to be re-read later can be retained up to 90 days, transparently held in an availability-zone redundant storage tier.
In addition to these storage-related features and all capabilities and protocol
> - The premium tier isn't available in all regions. Try to create a namespace in the Azure portal and see supported regions in the **Location** drop-down list on the **Create Namespace** page.
-You can purchase 1, 2, 4, 8 and 16 processing units for each namespace. As the premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle as it is in the standard tier, but depends on the work you ask Event Hubs to do, similar to the dedicated tier. The effective ingest and stream throughput per PU will depend on various factors, including:
+You can purchase 1, 2, 4, 8 and 16 processing units (PU) for each namespace. As the premium tier is a capacity-based offering, the achievable throughput isn't set by a throttle as it is in the standard tier, but depends on the work you ask Event Hubs to do, similar to the dedicated tier. The effective ingest and stream throughput per PU depends on various factors, including:
* Number of producers and consumers * Payload size
The premium tier offers three compelling benefits for customers who require bett
The premium tier uses a new two-tier log storage engine that drastically improves the data ingress performance with substantially reduced overall latency without compromising the durability guarantees. ### Better isolation and predictability
-The premium tier offers an isolated compute and memory capacity to achieve more predictable latency and far reduced *noisy neighbor* impact risk in a multi-tenant deployment.
+The premium tier offers an isolated compute and memory capacity to achieve more predictable latency and far reduced *noisy neighbor* impact risk in a multitenant deployment.
It implements a *cluster in cluster* model in its multitenant clusters to provide predictability and performance while retaining all the benefits of a managed multitenant PaaS environment. ### Cost savings and scalability
-As the premium tier is a multitenant offering, it can dynamically scale more flexibly and very quickly. Capacity is allocated in processing units (PUs) that allocate isolated pods of CPU/memory inside the cluster. The number of those pods can be scaled up/down per namespace. Therefore, the premium tier is a low-cost option for messaging scenarios with the overall throughput range that is less than 120 MB/s but higher than what you can achieve with the standard SKU.
+As the premium tier is a multitenant offering, it can scale dynamically, flexibly, and quickly. Capacity is allocated in processing units (PUs) that allocate isolated pods of CPU and memory inside the cluster. The number of those pods can be scaled up or down per namespace. Therefore, the premium tier is a low-cost option for messaging scenarios with an overall throughput range that is less than 120 MB/s but higher than what you can achieve with the standard tier.
## Encryption of events
-Azure Event Hubs provides encryption of data at rest with Azure Storage Service Encryption (Azure SSE). The Event Hubs service uses Azure Storage to store the data. All the data that's stored with Azure Storage is encrypted using Microsoft-managed keys. If you use your own key (also referred to as Bring Your Own Key (BYOK) or customer-managed key), the data is still encrypted using the Microsoft-managed key, but in addition the Microsoft-managed key will be encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the BYOK feature is a one time setup process on your namespace. For more information, see [Configure customer-managed keys for encrypting Azure Event Hubs data at rest](configure-customer-managed-key.md).
+Azure Event Hubs provides encryption of data at rest with Azure Storage Service Encryption (Azure SSE). The Event Hubs service uses Azure Storage to store the data. All the data that's stored with Azure Storage is encrypted using Microsoft-managed keys. If you use your own key (also referred to as Bring Your Own Key (BYOK) or customer-managed key), the data is still encrypted using the Microsoft-managed key, but in addition the Microsoft-managed key is encrypted using the customer-managed key. This feature enables you to create, rotate, disable, and revoke access to customer-managed keys that are used for encrypting Microsoft-managed keys. Enabling the BYOK feature is a one time setup process on your namespace. For more information, see [Configure customer-managed keys for encrypting Azure Event Hubs data at rest](configure-customer-managed-key.md).
> [!NOTE] > All Event Hubs namespaces are enabled for the Apache Kafka RPC protocol by default and can be used by your existing Kafka-based applications. Having Kafka enabled on your cluster does not affect your non-Kafka use cases; there is no option or need to disable Kafka on a cluster.
Azure Event Hubs provides encryption of data at rest with Azure Storage Service
## Quotas and limits The premium tier offers all the features of the standard tier, but with better performance, isolation, and more generous quotas.
-For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md)
+For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md).
## High availability with availability zones Event Hubs standard, premium, and dedicated tiers offer [availability zones](../availability-zones/az-overview.md#availability-zones) support with no extra cost. Using availability zones, you can run event streaming workloads in physically separate locations within each Azure region that are tolerant to local failures.
Event Hubs standard, premium, and dedicated tiers offer [availability zones](../
## Premium vs. dedicated tiers In comparison to the dedicated offering, the premium tier provides the following benefits: -- Isolation inside a large multi-tenant environment that can shift resources quickly
+- Isolation inside a large multitenant environment that can shift resources quickly
- Scale far more elastically and quicker - PUs can be dynamically adjusted
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 01/26/2024 Last updated : 02/16/2024
The following table shows connectivity locations and the service providers for e
| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | Supported | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Epsilon Global Communications<br/>GTT<br/>Interxion<br/>IX Reach<br/>JISC<br/>Megaport<br/>NTT Global DataCenters EMEA<br/>Ooredoo Cloud Connect<br/>Orange<br/>SES<br/>Sohonet<br/>Telehouse - KDDI<br/>Zayo<br/>Vodafone | | **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | Supported | AT&T Dynamic Exchange<br/>CoreSite<br/>Cloudflare<br/>Equinix*<br/>Megaport<br/>Neutrona Networks<br/>NTT<br/>Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* | | **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | Supported | Equinix<br/>GTT<br/>PacketFabric |
-| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | Supported | DE-CIX<br/>InterCloud<br/>Interxion<br/>Megaport<br/>Telefonica |
+| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | n/a | Supported | DE-CIX<br/>InterCloud<br/>Interxion<br/>Megaport<br/>Telefonica |
+| **Madrid2** | [Equinix MD2](https://www.equinix.com/data-centers/europe-colocation/spain-colocation/madrid-data-centers/md2) | 1 | n/a | Supported | Equinix |
| **Marseille** | [Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt<br/>DE-CIX<br/>GEANT<br/>Interxion<br/>Jaguar Network<br/>Ooredoo Cloud Connect | | **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | Supported | AARNet<br/>Devoli<br/>Equinix<br/>Megaport<br/>NETSG<br/>NEXTDC<br/>Optus<br/>Orange<br/>Telstra Corporation<br/>TPG Telecom | | **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | Supported | AT&T Dynamic Exchange<br/>Claro<br/>C3ntro<br/>Equinix<br/>Megaport<br/>Neutrona Networks<br/>PitChile |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **du datamena** |Supported |Supported | Dubai2 | | **[eir evo](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin | | **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** | Supported | Supported | Hong Kong2<br/>London2<br/>Singapore<br/>Singapore2 |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Berlin<br/>Canberra2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>London2<br/>Los Angeles*<br/>Los Angeles2<br/>Melbourne<br/>Miami<br/>Milan<br/>New York<br/>Osaka<br/>Paris<br/>Paris2<br/>Perth<br/>Quebec City<br/>Rio de Janeiro<br/>Sao Paulo<br/>Seattle<br/>Seoul<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stockholm<br/>Sydney<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Washington DC<br/>Warsaw<br/>Zurich</br>Zurich2</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Berlin<br/>Canberra2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>London2<br/>Los Angeles*<br/>Los Angeles2<br/>Madrid2<br/>Melbourne<br/>Miami<br/>Milan<br/>New York<br/>Osaka<br/>Paris<br/>Paris2<br/>Perth<br/>Quebec City<br/>Rio de Janeiro<br/>Sao Paulo<br/>Seattle<br/>Seoul<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stockholm<br/>Sydney<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Washington DC<br/>Warsaw<br/>Zurich</br>Zurich2</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
| **Etisalat UAE** |Supported |Supported | Dubai | | **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>London<br/>Paris | | **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** | Supported | Supported | Taipei |
expressroute How To Expressroute Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/how-to-expressroute-direct-portal.md
The following steps help you create an ExpressRoute circuit from the ExpressRout
1. Select **Create** once validation passes. You'll see a message letting you know that your deployment is underway. A status will display on this page when your ExpressRoute circuit resource gets created.
-## Public Preview
-
-The following scenario is in public preview:
-
-ExpressRoute Direct and ExpressRoute circuit(s) in a different subscription or Microsoft Entra tenants. You'll create an authorization for your ExpressRoute Direct resource, and redeem the authorization to create an ExpressRoute circuit in a different subscription or Microsoft Entra tenant.
- ### Enable ExpressRoute Direct and circuits in a different subscription 1. To enroll in the preview, send an e-mail to ExpressRouteDirect@microsoft.com with the ExpressRoute Direct and target ExpressRoute circuit Azure subscription IDs. You'll receive an e-mail once the feature gets enabled for your subscriptions.
firewall-manager Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/policy-overview.md
With inheritance, any changes to the parent policy are automatically applied dow
## Built-in high availability High availability is built in, so there's nothing you need to configure.-
-Azure Firewall Policy is replicated to a paired Azure region. For example, if one Azure region goes down, Azure Firewall policy becomes active in the paired Azure region. The paired region is automatically selected based on the region where the policy is created. For more information, see [Cross-region replication in Azure: Business continuity and disaster recovery](../reliability/cross-region-replication-azure.md#azure-paired-regions).
+You can create an Azure Firewall Policy object in any region and link it globally to multiple Azure Firewall instances under the same Azure AD tenant. If the region where you create the policy goes down and has a paired region, the ARM object metadata automatically fails over to the secondary region. During the failover, or if a region without a pair remains in a failed state, you can't modify the Azure Firewall Policy object. However, the Azure Firewall instances linked to the Firewall Policy continue to operate. For more information, see [Cross-region replication in Azure: Business continuity and disaster recovery](../reliability/cross-region-replication-azure.md#azure-paired-regions).
## Pricing
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md
If a Firewall Policy is inherited from a parent policy, Rule Collection Groups i
Here's an example policy:
+Assume BaseRCG1 is a rule collection group with priority 200 that contains the rule collections DNATRC1, DNATRC3, and NetworkRC1.\
+BaseRCG2 is a rule collection group with priority 300 that contains the rule collections AppRC2 and NetworkRC2.\
+ChildRCG1 is a rule collection group with priority 200 that contains the rule collections ChNetRC1 and ChAppRC1.\
+ChildRCG2 is a rule collection group that contains the rule collections ChNetRC2, ChAppRC2, and ChDNATRC3.
+
+As shown in the following table:
|Name |Type |Priority |Rules |Inherited from |||||-|
Here's an example policy:
|ChAppRC2 | Application rule collection |2000 |7 |-| |ChDNATRC3 | DNAT rule collection | 3000 | 2 |-|
-The rule processing is in the following order: DNATRC1, DNATRC3, ChDNATRC3, NetworkRC1, NetworkRC2, ChNetRC1, ChNetRC2, AppRC2, ChAppRC1, ChAppRC2.
+Initial Processing:
+
+The process begins by examining the rule collection group (RCG) with the lowest number, which is BaseRCG1 with a priority of 200. Within this group, it searches for DNAT rule collections and evaluates them according to their priorities. In this case, DNATRC1 (priority 600) and DNATRC3 (priority 610) are found and processed accordingly.\
+Next, it moves to the next RCG, BaseRCG2 (priority 300), but finds no DNAT rule collection.\
+Following that, it proceeds to ChildRCG1 (priority 200), also without a DNAT rule collection.\
+Finally, it checks ChildRCG2 (priority 650) and finds the ChDNATRC3 rule collection (priority 3000).
+
+Iteration Within Rule Collection Groups:
+
+Returning to BaseRCG1, the iteration continues, this time for NETWORK rules. Only NetworkRC1 (priority 800) is found.\
+Then, it moves to BaseRCG2, where NetworkRC2 (priority 1300) is located.\
+Moving on to ChildRCG1, it discovers ChNetRC1 (priority 700) as the NETWORK rule.\
+Lastly, in ChildRCG2, it finds ChNetRC2 (priority 1100) as the NETWORK rule collection.
+
+Final Iteration for APPLICATION Rules:
+
+Returning to BaseRCG1, the process iterates for APPLICATION rules, but none are found.\
+In BaseRCG2, it identifies AppRC2 (priority 1200) as the APPLICATION rule.\
+In ChildRCG1, ChAppRC1 (priority 900) is found as the APPLICATION rule.\
+Finally, in ChildRCG2, it locates ChAppRC2 (priority 2000) as the APPLICATION rule.
+
+**In summary, the rule processing sequence is as follows: DNATRC1, DNATRC3, ChDNATRC3, NetworkRC1, NetworkRC2, ChNetRC1, ChNetRC2, AppRC2, ChAppRC1, ChAppRC2.**
+
+This process involves analyzing rule collection groups by priority, and within each group, ordering the rules according to their priorities for each rule type (DNAT, NETWORK, and APPLICATION).
+
+So, first all the DNAT rules are processed across all the rule collection groups, taking the rule collection groups in priority order and ordering the DNAT rules within each rule collection group by priority. Then the same process is applied to NETWORK rules, and finally to APPLICATION rules.
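To make the priority model concrete, here's a minimal Azure PowerShell sketch that creates a policy with two rule collection groups at different priorities. The policy name, resource group, and rule values are hypothetical placeholders; it only illustrates where the group and collection priorities from the example above are set.

```azurepowershell
# Sketch only: a firewall policy with two rule collection groups at priorities 200 and 300.
$policy = New-AzFirewallPolicy -Name 'DemoPolicy' -ResourceGroupName 'demo-rg' -Location 'eastus'

# A network rule inside a filter rule collection (priority 800), placed in the priority-200 group.
$netRule = New-AzFirewallPolicyNetworkRule -Name 'AllowWeb' -Protocol 'TCP' `
    -SourceAddress '10.0.0.0/24' -DestinationAddress '*' -DestinationPort '443'
$netRC = New-AzFirewallPolicyFilterRuleCollection -Name 'NetworkRC1' -Priority 800 `
    -ActionType 'Allow' -Rule $netRule

New-AzFirewallPolicyRuleCollectionGroup -Name 'BaseRCG1' -Priority 200 `
    -RuleCollection $netRC -FirewallPolicyObject $policy
New-AzFirewallPolicyRuleCollectionGroup -Name 'BaseRCG2' -Priority 300 `
    -FirewallPolicyObject $policy
```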
For more information about Firewall Policy rule sets, see [Azure Firewall Policy rule sets](policy-rule-sets.md).
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
Previously updated : 02/07/2023 Last updated : 02/15/2024 zone_pivot_groups: front-door-tiers
Azure Front Door offloads the TLS sessions at the edge and decrypts client reque
## Supported TLS versions
-Azure Front Door supports three versions of the TLS protocol: TLS versions 1.0, 1.1, and 1.2. All Azure Front Door profiles created after September 2019 use TLS 1.2 as the default minimum, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
+Azure Front Door supports four versions of the TLS protocol: TLS versions 1.0, 1.1, 1.2, and 1.3. All Azure Front Door profiles created after September 2019 use TLS 1.2 as the default minimum with TLS 1.3 enabled, but TLS 1.0 and TLS 1.1 are still supported for backward compatibility.
-Although Azure Front Door supports TLS 1.2, which introduced client/mutual authentication in RFC 5246, currently, Azure Front Door doesn't support client/mutual authentication.
+Although Azure Front Door supports TLS 1.2, which introduced client/mutual authentication in RFC 5246, Azure Front Door doesn't currently support client/mutual authentication (mTLS).
-You can configure the minimum TLS version in Azure Front Door in the custom domain HTTPS settings using the Azure portal or theΓÇ»[Azure REST API](/rest/api/frontdoorservice/frontdoor/frontdoors/createorupdate#minimumtlsversion). Currently, you can choose between 1.0 and 1.2. As such, specifying TLS 1.2 as the minimum version controls the minimum acceptable TLS version Azure Front Door will accept from a client. When Azure Front Door initiates TLS traffic to the origin, it will attempt to negotiate the best TLS version that the origin can reliably and consistently accept.
+You can configure the minimum TLS version in Azure Front Door in the custom domain HTTPS settings using the Azure portal or the [Azure REST API](/rest/api/frontdoorservice/frontdoor/frontdoors/createorupdate#minimumtlsversion). Currently, you can choose between 1.0 and 1.2. As such, specifying TLS 1.2 as the minimum version controls the minimum acceptable TLS version Azure Front Door will accept from a client. For a minimum TLS version of 1.2, the negotiation attempts to establish TLS 1.3 and then TLS 1.2, while for a minimum TLS version of 1.0, all four versions are attempted. When Azure Front Door initiates TLS traffic to the origin, it attempts to negotiate the best TLS version that the origin can reliably and consistently accept. Supported TLS versions for origin connections are TLS 1.0, TLS 1.1, TLS 1.2, and TLS 1.3.
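For Azure Front Door (classic), a hedged sketch of setting the minimum TLS version on a custom domain with Azure PowerShell follows. The resource group, profile, and frontend endpoint names are placeholders, and the example assumes a Front Door managed certificate.

```azurepowershell
# Sketch only: enable HTTPS on a custom domain and require TLS 1.2 as the minimum version.
# Resource names below are hypothetical.
Enable-AzFrontDoorCustomDomainHttps -ResourceGroupName 'contoso-rg' `
    -FrontDoorName 'contoso-frontdoor' `
    -FrontendEndpointName 'www-contoso-com' `
    -MinimumTlsVersion '1.2'
```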
## Supported certificates
For your own custom TLS/SSL certificate:
## Supported cipher suites
-For TLS 1.2 the following cipher suites are supported:
-
+For TLS 1.2/1.3 the following cipher suites are supported:
+* TLS_AES_256_GCM_SHA384 (TLS 1.3 only)
+* TLS_AES_128_GCM_SHA256 (TLS 1.3 only)
* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 * TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
For TLS 1.2 the following cipher suites are supported:
When using custom domains with TLS 1.0 and 1.1 enabled, the following cipher suites are supported:
+* TLS_AES_256_GCM_SHA384 (TLS 1.3 only)
+* TLS_AES_128_GCM_SHA256 (TLS 1.3 only)
* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 * TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
governance Assign Policy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-powershell.md
Title: "Quickstart: New policy assignment with PowerShell"
-description: In this quickstart, you use Azure PowerShell to create an Azure Policy assignment to identify non-compliant resources.
Previously updated : 08/17/2021
+ Title: "Quickstart: Create policy assignment using Azure PowerShell"
+description: In this quickstart, you create an Azure Policy assignment to identify non-compliant resources using Azure PowerShell.
Last updated : 02/15/2024 + # Quickstart: Create a policy assignment to identify non-compliant resources using Azure PowerShell
-The first step in understanding compliance in Azure is to identify the status of your resources. In
-this quickstart, you create a policy assignment to identify virtual machines that aren't using
-managed disks. When complete, you'll identify virtual machines that are _non-compliant_.
+The first step in understanding compliance in Azure is to identify the status of your resources. In this quickstart, you create a policy assignment to identify non-compliant resources using Azure PowerShell. This example evaluates virtual machines that don't use managed disks. After you create the policy assignment, you identify non-compliant virtual machines.
-The Azure PowerShell module is used to manage Azure resources from the command line or in scripts.
-This guide explains how to use Az module to create a policy assignment.
+The Azure PowerShell modules can be used to manage Azure resources from the command line or in scripts. This article explains how to use Azure PowerShell to create a policy assignment.
## Prerequisites -- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/)
- account before you begin.
+- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- [Azure PowerShell](/powershell/azure/install-az-ps).
+- [Visual Studio Code](https://code.visualstudio.com/).
+- `Microsoft.PolicyInsights` must be [registered](../../azure-resource-manager/management/resource-providers-and-types.md) in your Azure subscription. To register a resource provider, you must have permission to register resource providers. That permission is included in the Contributor and Owner roles.
+- A resource group with at least one virtual machine that doesn't use managed disks.
+
+## Connect to Azure
+
+From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID.
+
+```azurepowershell
+Connect-AzAccount
+
+# Run these commands if you have multiple subscriptions
+Get-AzSubscription
+Set-AzContext -Subscription <subscriptionID>
+```
-- Before you start, make sure that the latest version of Azure PowerShell is installed. See
- [Install Azure PowerShell module](/powershell/azure/install-azure-powershell) for detailed information.
+## Register resource provider
-- Register the Azure Policy Insights resource provider using Azure PowerShell. Registering the
- resource provider makes sure that your subscription works with it. To register a resource
- provider, you must have permission to the register resource provider operation. This operation is
- included in the Contributor and Owner roles. Run the following command to register the resource
- provider:
+When a resource provider is registered, it's available to use in your Azure subscription.
- ```azurepowershell-interactive
- # Register the resource provider if it's not already registered
- Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights'
- ```
+To verify if `Microsoft.PolicyInsights` is registered, run `Get-AzResourceProvider`. The resource provider contains several resource types. If the result is `NotRegistered`, run `Register-AzResourceProvider`:
- For more information about registering and viewing resource providers, see
- [Resource Providers and Types](../../azure-resource-manager/management/resource-providers-and-types.md).
+```azurepowershell
+ Get-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights' |
+ Select-Object -Property ResourceTypes, RegistrationState
+Register-AzResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights'
+```
-## Create a policy assignment
+## Create policy assignment
-In this quickstart, you create a policy assignment for the _Audit VMs without managed disks_
-definition. This policy definition identifies virtual machines not using managed disks.
+Use the following commands to create a new policy assignment for your resource group. This example uses an existing resource group that contains a virtual machine _without_ managed disks. The resource group is the scope for the policy assignment.
-Run the following commands to create a new policy assignment:
+Run the following commands and replace `<resourceGroupName>` with your resource group name:
-```azurepowershell-interactive
-# Get a reference to the resource group that is the scope of the assignment
+```azurepowershell
$rg = Get-AzResourceGroup -Name '<resourceGroupName>'
-# Get a reference to the built-in policy definition to assign
-$definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq 'Audit VMs that do not use managed disks' }
+$definition = Get-AzPolicyDefinition |
+ Where-Object { $_.Properties.DisplayName -eq 'Audit VMs that do not use managed disks' }
+```
+
+The `$rg` variable stores properties for the resource group and the `$definition` variable stores the policy definition's properties. The properties are used in subsequent commands.
+
+Run the following command to create the policy assignment:
-# Create the policy assignment with the built-in definition against your resource group
-New-AzPolicyAssignment -Name 'audit-vm-manageddisks' -DisplayName 'Audit VMs without managed disks Assignment' -Scope $rg.ResourceId -PolicyDefinition $definition
+```azurepowershell
+$policyparms = @{
+    Name = 'audit-vm-managed-disks'
+    DisplayName = 'Audit VMs without managed disks Assignment'
+    Scope = $rg.ResourceId
+    PolicyDefinition = $definition
+    Description = 'Az PowerShell policy assignment to resource group'
+}
+
+New-AzPolicyAssignment @policyparms
```
-The preceding commands use the following information:
+The `$policyparms` variable uses [splatting](/powershell/module/microsoft.powershell.core/about/about_splatting) to create parameter values and improve readability. The `New-AzPolicyAssignment` command uses the parameter values defined in the `$policyparms` variable.
+
+- `Name` creates the policy assignment name used in the assignment's `ResourceId`.
+- `DisplayName` is the name for the policy assignment and is visible in Azure portal.
+- `Scope` uses the `$rg.ResourceId` property to assign the policy to the resource group.
+- `PolicyDefinition` assigns the policy definition stored in the `$definition` variable.
+- `Description` can be used to add context about the policy assignment.
+
+The results of the policy assignment resemble the following example:
-- **Name** - The actual name of the assignment. For this example, _audit-vm-manageddisks_ was used.-- **DisplayName** - Display name for the policy assignment. In this case, you're using _Audit VMs
- without managed disks Assignment_.
-- **Definition** - The policy definition, based on which you're using to create the assignment. In
- this case, it's the ID of policy definition _Audit VMs that do not use managed disks_.
-- **Scope** - A scope determines what resources or grouping of resources the policy assignment gets
- enforced on. It could range from a subscription to resource groups. Be sure to replace
- &lt;scope&gt; with the name of your resource group.
+```output
+Name : audit-vm-managed-disks
+ResourceId : /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/policyAssignments/audit-vm-managed-disks
+ResourceName : audit-vm-managed-disks
+ResourceGroupName : {resourceGroupName}
+ResourceType : Microsoft.Authorization/policyAssignments
+SubscriptionId : {subscriptionId}
+PolicyAssignmentId : /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Authorization/policyAssignments/audit-vm-managed-disks
+Properties : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.Policy.PsPolicyAssignmentProperties
+```
-You're now ready to identify non-compliant resources to understand the compliance state of your
-environment.
+For more information, go to [New-AzPolicyAssignment](/powershell/module/az.resources/new-azpolicyassignment).
## Identify non-compliant resources
-Use the following information to identify resources that aren't compliant with the policy assignment
-you created. Run the following commands:
+The compliance state for a new policy assignment takes a few minutes to become active and provide results about the policy's state.
-```azurepowershell-interactive
-# Get the resources in your resource group that are non-compliant to the policy assignment
-Get-AzPolicyState -ResourceGroupName $rg.ResourceGroupName -PolicyAssignmentName 'audit-vm-manageddisks' -Filter 'IsCompliant eq false'
+Use the following command to identify resources that aren't compliant with the policy assignment
+you created:
+
+```azurepowershell
+$complianceparms = @{
+    ResourceGroupName = $rg.ResourceGroupName
+    PolicyAssignmentName = 'audit-vm-managed-disks'
+    Filter = 'IsCompliant eq false'
+}
+
+Get-AzPolicyState @complianceparms
```
-For more information about getting policy state, see
-[Get-AzPolicyState](/powershell/module/az.policyinsights/Get-AzPolicyState).
+The `$complianceparms` variable uses splatting to create parameter values used in the `Get-AzPolicyState` command.
+
+- `ResourceGroupName` gets the resource group name from the `$rg.ResourceGroupName` property.
+- `PolicyAssignmentName` specifies the name used when the policy assignment was created.
+- `Filter` uses an expression to find resources that aren't compliant with the policy assignment.
-Your results resemble the following example:
+For more information, go to [Get-AzPolicyState](/powershell/module/az.policyinsights/Get-AzPolicyState).
+
+Your results resemble the following example and `ComplianceState` shows `NonCompliant`:
```output
-Timestamp : 3/9/19 9:21:29 PM
-ResourceId : /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachines/{vmId}
-PolicyAssignmentId : /subscriptions/{subscriptionId}/providers/microsoft.authorization/policyassignments/audit-vm-manageddisks
-PolicyDefinitionId : /providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d
-IsCompliant : False
-SubscriptionId : {subscriptionId}
-ResourceType : /Microsoft.Compute/virtualMachines
-ResourceTags : tbd
-PolicyAssignmentName : audit-vm-manageddisks
-PolicyAssignmentOwner : tbd
-PolicyAssignmentScope : /subscriptions/{subscriptionId}
-PolicyDefinitionName : 06a78e20-9358-41c9-923c-fb736d382a4d
-PolicyDefinitionAction : audit
-PolicyDefinitionCategory : Compute
-ManagementGroupIds : {managementGroupId}
+Timestamp : 2/14/2024 18:25:37
+ResourceId : /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.compute/virtualmachines/{vmId}
+PolicyAssignmentId : /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.authorization/policyassignments/audit-vm-managed-disks
+PolicyDefinitionId : /providers/microsoft.authorization/policydefinitions/06a78e20-9358-41c9-923c-fb736d382a4d
+IsCompliant : False
+SubscriptionId : {subscriptionId}
+ResourceType : Microsoft.Compute/virtualMachines
+ResourceLocation : {location}
+ResourceGroup : {resourceGroupName}
+ResourceTags : tbd
+PolicyAssignmentName : audit-vm-managed-disks
+PolicyAssignmentOwner : tbd
+PolicyAssignmentScope : /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}
+PolicyDefinitionName : 06a78e20-9358-41c9-923c-fb736d382a4d
+PolicyDefinitionAction : audit
+PolicyDefinitionCategory : tbd
+ManagementGroupIds : {managementGroupId}
+ComplianceState : NonCompliant
+AdditionalProperties : {[complianceReasonCode, ]}
```
-The results match what you see in the **Resource compliance** tab of a policy assignment in the
-Azure portal view.
- ## Clean up resources
-To remove the assignment created, use the following command:
+To remove the policy assignment, use the following command:
-```azurepowershell-interactive
-# Removes the policy assignment
-Remove-AzPolicyAssignment -Name 'audit-vm-manageddisks' -Scope '/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>'
+```azurepowershell
+Remove-AzPolicyAssignment -Name 'audit-vm-managed-disks' -Scope $rg.ResourceId
``` ## Next steps
Remove-AzPolicyAssignment -Name 'audit-vm-manageddisks' -Scope '/subscriptions/<
In this quickstart, you assigned a policy definition to identify non-compliant resources in your Azure environment.
-To learn more about assigning policies to validate that new resources are compliant, continue to the
-tutorial for:
+To learn more about how to assign policies that validate whether new resources are compliant, continue to the
+tutorial.
> [!div class="nextstepaction"]
-> [Creating and managing policies](./tutorials/create-and-manage.md)
+> [Tutorial: Create and manage policies to enforce compliance](./tutorials/create-and-manage.md)
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
always stay the same, however their values change based on the individual fillin
Parameters work the same way when building policies. By including parameters in a policy definition, you can reuse that policy for different scenarios by using different values.
+### Adding or removing parameters
+ Parameters may be added to an existing and assigned definition. The new parameter must include the **defaultValue** property. This prevents existing assignments of the policy or initiative from indirectly being made invalid.
-Parameters can't be removed from a policy definition because there may be an assignment that sets the parameter value, and that reference would become broken. Instead of removing, you can classify the parameter as deprecated in the parameter metadata.
+Parameters can't be removed from a policy definition because there may be an assignment that sets the parameter value, and that reference would become broken. Some built-in policy definitions have deprecated parameters by using the metadata `"deprecated": true`, which hides the parameter when assigning the definition in the Azure portal. While this approach isn't supported for custom policy definitions, another option is to duplicate and create a new custom policy definition without the parameter.
### Parameter properties
A parameter has the following properties that are used in the policy definition:
the assignment scope. There's one role assignment per role definition in the policy (or per role definition in all of the policies in the initiative). The parameter value must be a valid resource or scope.
+ - `deprecated`: A boolean flag to indicate whether a parameter is deprecated in a built-in definition.
- `defaultValue`: (Optional) Sets the value of the parameter in an assignment if no value is given. Required when updating an existing policy definition that is assigned. For oject-type parameters, the value must match the appropriate schema. - `allowedValues`: (Optional) Provides an array of values that the parameter accepts during assignment.
governance Disallowed Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/disallowed-resources.md
Now that you assigned a built-in policy definition, go to [All Services](https:/
Now suppose that one subscope should be allowed to have the resource types disabled by this policy. Let's create an exemption on this scope so that otherwise restricted resources can be deployed there. > [!WARNING]
-> If you assign this policy definition to your root management group scope, Azure portal is unable to detect exemptions at lower level scopes. Resources disallowed by the policy assignment will show as disabled from the **All Services** list and the **Create** option is unavailable.
+> If you assign this policy definition to your root management group scope, the Azure portal is unable to detect exemptions at lower-level scopes. Resources disallowed by the policy assignment will show as disabled in the **All Services** list, and the **Create** option is unavailable. However, you can still create resources in the exempted scope with clients like Azure CLI, Azure PowerShell, or Azure Resource Manager templates, as shown in the sketch that follows.
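A minimal sketch of creating such an exemption with Azure PowerShell follows; the assignment name, scopes, and exemption name are hypothetical placeholders.

```azurepowershell
# Sketch only: exempt one resource group from a 'Not allowed resource types' assignment.
# Assignment name, management group, subscription, and resource group names are placeholders.
$assignment = Get-AzPolicyAssignment -Name 'not-allowed-resource-types' `
    -Scope '/providers/Microsoft.Management/managementGroups/contoso-mg'

New-AzPolicyExemption -Name 'exempt-subscope' `
    -PolicyAssignment $assignment `
    -Scope '/subscriptions/<subscriptionID>/resourceGroups/exempted-rg' `
    -ExemptionCategory 'Waiver'
```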
1. Select **Assignments** under **Authoring** in the left side of the Azure Policy page.
hdinsight Hdinsight Overview Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-overview-versioning.md
Last updated 04/03/2023
# How versioning works in HDInsight
-HDInsight service has two main components: a Resource provider and open-source software (OSS) componentscomponents that are deployed on a cluster.
+HDInsight service has two main components: a Resource provider and open-source software (OSS) components that are deployed on a cluster.
## HDInsight Resource provider
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 01/18/2024 Last updated : 02/16/2024 # Archived release notes
Last updated 01/18/2024
Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe on release notes, watch releases on [this GitHub repository](https://github.com/Azure/HDInsight/releases).
+### Release date: January 10, 2024
+
+This hotfix release applies to HDInsight 4.x and 5.x versions. The HDInsight release will be available to all regions over several days. This release is applicable for image number **2401030422**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
+
+**OS versions**
+
+* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 5.1: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+
+> [!NOTE]
+> Ubuntu 18.04 is supported under [Extended Security Maintenance(ESM)](https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/canonical-ubuntu-18-04-lts-reaching-end-of-standard-support/ba-p/3822623) by the Azure Linux team for [Azure HDInsight July 2023](/azure/hdinsight/hdinsight-release-notes-archive#release-date-july-25-2023), release onwards.
+
+For workload specific versions, see
+
+* [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
+* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
+
+## Fixed issues
+
+- Security fixes from Ambari and Oozie components
++
+## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
+
+* Basic and Standard A-series VMs Retirement.
+ * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs).
+ * To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
+
+If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](/answers/tags/168/azure-hdinsight)
+
+We are listening: You're welcome to add more ideas and other topics here and vote for them - [HDInsight Ideas](https://feedback.azure.com/d365community/search/?q=HDInsight) and follow us for more updates on [AzureHDInsight Community](https://www.linkedin.com/groups/14313521/)
+
+> [!NOTE]
+> We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open source updates, Azure updates, and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
++ ## Release date: October 26, 2023 This release applies to HDInsight 4.x and 5.x HDInsight release will be available to all regions over several days. This release is applicable for image number **2310140056**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 01/18/2024 Last updated : 02/15/2024 # Azure HDInsight release notes
To subscribe, click the "watch" button in the banner and watch out for [HDIn
## Release Information
-### Release date: January 10, 2024
+### Release date: February 15, 2024
-This hotfix release applies to HDInsight 4.x and 5.x versions. HDInsight release will be available to all regions over several days. This release is applicable for image number **2401030422**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+This release applies to HDInsight 4.x and 5.x versions. The HDInsight release will be available to all regions over several days. This release is applicable for image number **2401250802**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
-HDInsight uses safe deployment practices, which involve gradual region deployment. it might take up to 10 business days for a new release or a new version to be available in all regions.
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
**OS versions**
For workload specific versions, see
* [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md) * [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
+## New features
+
+- Apache Ranger support for Spark SQL in Spark 3.3.0 (HDInsight version 5.1) with Enterprise security package. Learn more about it [here](./spark/ranger-policies-for-spark.md).
+
## Fixed issues - Security fixes from Ambari and Oozie components
For workload specific versions, see
* Basic and Standard A-series VMs Retirement. * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). * To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
+
+## Known issues
+
+- HMS secrets failure v1.1 HTTP 401
If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
hdinsight Ranger Policies For Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/ranger-policies-for-spark.md
+
+ Title: Configure Apache Ranger policies for Spark SQL in HDInsight with Enterprise security package.
+description: This article describes how to configure Ranger policies for Spark SQL with Enterprise security package.
++ Last updated : 02/12/2024++
+# Configure Apache Ranger policies for Spark SQL in HDInsight with Enterprise security package
+
+This article describes how to configure Ranger policies for Spark SQL with Enterprise security package in HDInsight.
+
+In this tutorial, you'll learn how to:
+- Create Apache Ranger policies
+- Verify the applied Ranger policies
+- Follow the guidelines for setting up Apache Ranger for Spark SQL
+
+## Prerequisites
+
+An Apache Spark cluster in HDInsight version 5.1 with [Enterprise security package](../domain-joined/apache-domain-joined-configure-using-azure-adds.md).
+
+## Connect to Apache Ranger Admin UI
+
+1. From a browser, connect to the Ranger Admin user interface using the URL `https://ClusterName.azurehdinsight.net/Ranger/`.
+
+ Remember to change `ClusterName` to the name of your Spark cluster.
+
+1. Sign in using your Microsoft Entra admin credentials. The Microsoft Entra admin credentials aren't the same as HDInsight cluster credentials or Linux HDInsight node SSH credentials.
+
+ :::image type="content" source="./media/ranger-policies-for-spark/ranger-spark.png" alt-text="Screenshot shows the create alert notification dialog box." lightbox="./media/ranger-policies-for-spark/ranger-spark.png":::
+
+## Create Domain users
+
+See [Create an HDInsight cluster with ESP](../domain-joined/apache-domain-joined-configure-using-azure-adds.md#create-an-hdinsight-cluster-with-esp), for information on how to create **sparkuser** domain users. In a production scenario, domain users come from your Active Directory tenant.
+
+## Create Ranger policy
+
+In this section, you create two Ranger policies:
+
+- [Access policy for accessing "hivesampletable" from spark-sql](./ranger-policies-for-spark.md#create-ranger-access-policies)
+- [Masking policy for obfuscating the columns in hivesampletable](./ranger-policies-for-spark.md#create-ranger-masking-policy)
+
+### Create Ranger Access policies
+
+1. Open Ranger Admin UI.
+
+1. Select **hive_and_spark**, under **Hadoop SQL**.
+
+ :::image type="content" source="./media/ranger-policies-for-spark/ranger-spark.png" alt-text="Screenshot shows select hive and spark." lightbox="./media/ranger-policies-for-spark/ranger-spark.png":::
+
+1. Select **Add New Policy** under **Access** tab, and then enter the following values:
+
+ :::image type="content" source="./media/ranger-policies-for-spark/add-new-policy-screenshot.png" alt-text="Screenshot shows select hive." lightbox="./media/ranger-policies-for-spark/add-new-policy-screenshot.png":::
+
+ | Property | Value |
+ |||
+ | Policy Name | read-hivesampletable-all |
+ | Hive Database | default |
+ | table | hivesampletable |
+ | Hive Column | * |
+ | Select User | sparkuser |
+ | Permissions | select |
+
+ :::image type="content" source="./media/ranger-policies-for-spark/sample-policy-details.png" alt-text="Screenshot shows sample policy details." lightbox="./media/ranger-policies-for-spark/sample-policy-details.png":::
+
+ Wait a few moments for Ranger to sync with Microsoft Entra ID if a domain user is not automatically populated for Select User.
+
+1. Select **Add** to save the policy.
+
+1. Open Zeppelin notebook and run the following command to verify the policy.
+
+ ```
+ %sql
+ select * from hivesampletable limit 10;
+ ```
+
+ Result before policy was saved:
+
+ :::image type="content" source="./media/ranger-policies-for-spark/result-before-access-policy.png" alt-text="Screenshot shows result before access policy." lightbox="./media/ranger-policies-for-spark/result-before-access-policy.png":::
+
+ Result after policy is applied:
+
+ :::image type="content" source="./media/ranger-policies-for-spark/result-after-access-policy.png" alt-text="Screenshot shows result after access policy." lightbox="./media/ranger-policies-for-spark/result-after-access-policy.png":::
+
+### Create Ranger masking policy
+
+
+The following example explains how to create a policy to mask a column.
+
+1. Create another policy under **Masking** tab with the following properties using Ranger Admin UI
+
+ :::image type="content" source="./media/ranger-policies-for-spark/add-new-masking-policy-screenshot.png" alt-text="Screenshot shows add new masking policy screenshot." lightbox="./media/ranger-policies-for-spark/add-new-masking-policy-screenshot.png":::
+
+
+ |Property |Value |
+ |||
+ |Policy Name| mask-hivesampletable |
+ |Hive Database|default|
+ |Hive table| hivesampletable|
+ |Hive column|devicemake|
+ |Select User|sparkuser|
+ |Permissions|select|
+ |Masking options|hash|
+
+
+
+ :::image type="content" source="./media/ranger-policies-for-spark/masking-policy-details.png" alt-text="Screenshot shows masking policy details." lightbox="./media/ranger-policies-for-spark/masking-policy-details.png":::
+
+
+1. Select **Save** to save the policy.
+
+1. Open Zeppelin notebook and run the following command to verify the policy.
+
+ ```
+ %sql
+ select clientId, deviceMake from hivesampletable;
+ ```
+ :::image type="content" source="./media/ranger-policies-for-spark/open-zipline-notebook.png" alt-text="Screenshot shows open zeppelin notebook." lightbox="./media/ranger-policies-for-spark/open-zipline-notebook.png":::
+
+
+
+> [!NOTE]
+> By default, the policies for Hive and Spark SQL are shared in Ranger.
++
+
+## Guideline for setting up Apache Ranger for Spark-sql
+
+**Scenario 1**: Using a new Ranger database while creating an HDInsight 5.1 Spark cluster.
+
+When the cluster is created, the relevant Ranger repo containing the Hive and Spark Ranger policies is created under the name <hive_and_spark> in the Hadoop SQL service on the Ranger DB.
+
+
+
+
+You can edit the policies, and these policies get applied to both Hive and Spark.
+
+Points to consider:
+
+1. Suppose you have two metastore databases with the same name (for example, DB1) used for both the Hive and Spark catalogs.
+ - If Spark uses the Spark catalog (`metastore.catalog.default=spark`), the policy applies to DB1 of the Spark catalog.
+ - If Spark uses the Hive catalog (`metastore.catalog.default=hive`), the policies get applied to DB1 of the Hive catalog.
+
+ There is no way of differentiating between DB1 of the Hive catalog and DB1 of the Spark catalog from the perspective of Ranger.
+
+
+ In such cases, it is recommended to either use the 'hive' catalog for both Hive and Spark, or maintain different database, table, and column names for the Hive and Spark catalogs so that the policies are not applied to databases across catalogs.
+
+
+1. Suppose you use the 'hive' catalog for both Hive and Spark.
+
+ Let's say you create a table **table1** through Hive as the 'xyz' user. It creates an HDFS file called **table1.db** whose owner is the 'xyz' user.
+
+ - Now consider that the user 'abc' launches a Spark SQL session. In this session for user 'abc', if you try to write anything to **table1**, it is bound to fail because the table owner is 'xyz'.
+ - In such a case, it is recommended to use the same user in Hive and Spark SQL for updating the table, and that user should have sufficient privileges to perform update operations.
+
+**Scenario 2**: Using an existing Ranger database (with existing policies) while creating an HDInsight 5.1 Spark cluster.
+
+ - In this case, when the HDInsight 5.1 cluster is created using an existing Ranger database, a new Ranger repo gets created again on this database with the name of the new cluster in this format - <hive_and_spark>.
++
+ :::image type="content" source="./media/ranger-policies-for-spark/new-repo-old-ranger-database.png" alt-text="Screenshot shows new repo old ranger database." lightbox="./media/ranger-policies-for-spark/new-repo-old-ranger-database.png":::
+
+ Let's say you already have the policies defined in the Ranger repo under the name <oldclustername_hive> on the existing Ranger database inside the Hadoop SQL service, and you want to share the same policies with the new HDInsight 5.1 Spark cluster. To achieve this, follow these steps:
+
+> [!NOTE]
+> Config updates can be performed by the user with Ambari admin privileges.
+
+1. Open Ambari UI from your new HDInsight 5.1 cluster.
+
+1. Go to Spark 3 service -> Configs.
+
+1. Open the "ranger-spark-security" security config.
+
+
+
+ Or open the "ranger-spark-security" security config in /etc/spark3/conf using SSH.
+
+ :::image type="content" source="./media/ranger-policies-for-spark/ambari-config-ranger-security.png" alt-text="Screenshot shows Ambari config ranger security." lightbox="./media/ranger-policies-for-spark/ambari-config-ranger-security.png":::
+
+
+
+1. Edit the two configurations "ranger.plugin.spark.service.name" and "ranger.plugin.spark.policy.cache.dir" to point to the old policy repo "oldclustername_hive", and then save the configurations.
+
+ Ambari:
+
+ :::image type="content" source="./media/ranger-policies-for-spark/config-update-service-name-ambari.png" alt-text="Screenshot shows config update service name Ambari." lightbox="./media/ranger-policies-for-spark/config-update-service-name-ambari.png":::
+
+ XML file:
+
+ :::image type="content" source="./media/ranger-policies-for-spark/config-update-xml.png" alt-text="Screenshot shows config update xml." lightbox="./media/ranger-policies-for-spark/config-update-xml.png":::
+
+
+
+1. Restart Ranger and Spark services from Ambari.
+
+ The policies get applied to databases in the Spark catalog. If you want to access the databases under the Hive catalog, go to Ambari -> SPARK3 -> Configs and change "metastore.catalog.default" from spark to hive.
+
+ :::image type="content" source="./media/ranger-policies-for-spark/change-metastore-config.png" alt-text="Screenshot shows change metastore config." lightbox="./media/ranger-policies-for-spark/change-metastore-config.png":::
++
+### Known issues
+
+- Apache Ranger integration with Spark SQL doesn't work if the Ranger admin service is down.
+- The Ranger DB could be overloaded if more than 20 Spark sessions are launched concurrently, because of continuous policy pulls.
+- In Ranger audit logs, the "Resource" column doesn't show the entire query that was executed when you hover over it.
+
+
+
+
healthcare-apis Configure Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-identity-providers.md
# Configure multiple service identity providers
-In addition to [Microsoft Entra ID](/entra/fundamentals/whatis), you can configure up to two additional identity providers for a FHIR service, whether the service already exists or is newly created. Identity providers must support OpenID Connect (OIDC), and must be able to issue JSON Web Tokens (JWT) with a `fhirUser` claim, a `azp` or `appId` claim, and an `scp` claim with [SMART on FHIR v1 Scopes](https://www.hl7.org/fhir/smart-app-launch/1.0.0/scopes-and-launch-context/https://docsupdatetracker.net/index.html#scopes-for-requesting-clinical-data).
+In addition to [Microsoft Entra ID](/entra/fundamentals/whatis), you can configure up to two additional identity providers for a FHIR service, whether the service already exists or is newly created.
+
+## Identity providers prerequisite
+Identity providers must support OpenID Connect (OIDC), and must be able to issue JSON Web Tokens (JWT) with a `fhirUser` claim, an `azp` or `appId` claim, and a `scp` claim with [SMART on FHIR v1 Scopes](https://www.hl7.org/fhir/smart-app-launch/1.0.0/scopes-and-launch-context/index.html#scopes-for-requesting-clinical-data).
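To check whether a token carries these claims, a small hedged sketch in PowerShell follows; it decodes the payload of a token string held in a placeholder variable `$token` and lists the claims of interest.

```azurepowershell
# Sketch only: decode the payload segment of a JWT ($token is a placeholder variable)
# and display the claims the FHIR service looks for.
$payload = $token.Split('.')[1].Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) {
    2 { $payload += '==' }
    3 { $payload += '=' }
}
$claims = [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) |
    ConvertFrom-Json
$claims | Select-Object fhirUser, azp, appId, scp
```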
## Enable additional identity providers with Azure Resource Manager (ARM)
hpc-cache Cache Usage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/cache-usage-models.md
Title: Azure HPC Cache usage models description: Describes the different cache usage models and how to choose among them to set read-only or read/write caching and control other caching settings-+ Previously updated : 06/29/2022 Last updated : 02/16/2024 <!-- filename is referenced from GUI in aka.ms/hpc-cache-usagemodel -->
Cache usage models let you customize how your Azure HPC Cache stores files to sp
File caching is how Azure HPC Cache expedites client requests. It uses these basic practices:
-* **Read caching** - Azure HPC Cache keeps a copy of files that clients request from the storage system. The next time a client requests the same file, HPC Cache can provide the version in its cache instead of having to fetch the file from the back-end storage system again.
+* **Read caching** - Azure HPC Cache keeps a copy of files that clients request from the storage system. The next time a client requests the same file, HPC Cache can provide the version in its cache instead of having to fetch the file from the back-end storage system again. Write requests are passed to the back-end storage system.
-* **Write caching** - Optionally, Azure HPC Cache can store a copy of any changed files sent from the client machines. If multiple clients make changes to the same file over a short period, the cache can collect all the changes in the cache instead of having to write each change individually to the back-end storage system.
+* **Write caching** - Optionally, Azure HPC Cache can store a copy of any changed files sent from the client machines. If multiple clients make changes to the same file over a short period, the cache can collect all the changes in the cache instead of having to write each change individually to the back-end storage system. After a specified amount of time with no changes, the cache moves the file to the long-term storage system.
- After a specified amount of time with no changes, the cache moves the file to the long-term storage system.
+* **Verification timer** - The verification timer setting determines how frequently the cache compares its local copy of a file with the remote version on the back-end storage system. If the back-end copy is newer than the cached copy, the cache fetches the remote copy and stores it for future requests.
- If write caching is disabled, the cache doesn't store the changed file and immediately writes it to the back-end storage system.
+ The verification timer setting shows when the cache *automatically* compares its files with source files in remote storage. However, you can force Azure HPC Cache to compare files by performing a directory operation that includes a readdirplus request. Readdirplus is a standard NFS API (also called extended read) that returns directory metadata, which causes the cache to compare and update files.
-* **Write-back delay** - For a cache with write caching turned on, write-back delay is the amount of time the cache waits for additional file changes before copying the file to the back-end storage system.
-
-* **Back-end verification** - The back-end verification setting determines how frequently the cache compares its local copy of a file with the remote version on the back-end storage system. If the back-end copy is newer than the cached copy, the cache fetches the remote copy and stores it for future requests.
-
- The back-end verification setting shows when the cache *automatically* compares its files with source files in remote storage. However, you can force Azure HPC Cache to compare files by performing a directory operation that includes a readdirplus request. Readdirplus is a standard NFS API (also called extended read) that returns directory metadata, which causes the cache to compare and update files.
+* **Write-back timer** - For a cache with read-write caching, the write-back timer is the maximum amount of time, in seconds, that the cache waits before copying a changed file to the back-end storage system.
The usage models built into Azure HPC Cache have different values for these settings so that you can choose the best combination for your situation.
The usage models built into Azure HPC Cache have different values for these sett
You must choose a usage model for each NFS-protocol storage target that you use. Azure Blob storage targets have a built-in usage model that can't be customized.
-HPC Cache usage models let you choose how to balance fast response with the risk of getting stale data. If you want to optimize speed for reading files, you might not care whether the files in the cache are checked against the back-end files. On the other hand, if you want to make sure your files are always up to date with the remote storage, choose a model that checks frequently.
+HPC Cache usage models let you choose how to balance fast response with the risk of getting stale data. If you want to optimize speed for reading files, you might not care whether the files in the cache are checked against the back-end files. On the other hand, if you want to make sure your files are always up to date with the remote storage, choose a model and set the verification timer to a low number to check frequently.
These are the usage model options:
-* **Read heavy, infrequent writes** - Use this option if you want to speed up read access to files that are static or rarely changed.
+* **Read-only caching** - Use this option if you want to speed up read access to files. Choose this option when your workflow involves minimal write operations like 0% to 5%.
- This option caches client reads but doesn't cache writes. It passes writes through to the back-end storage immediately.
+ This option caches client reads but doesn't cache writes. Writes pass through to the back-end storage.
- Files stored in the cache are not automatically compared to the files on the NFS storage volume. (Read the description of back-end verification above to learn how to compare them manually.)
-
- Do not use this option if there is a risk that a file might be modified directly on the storage system without first writing it to the cache. If that happens, the cached version of the file will be out of sync with the back-end file.
-
-* **Greater than 15% writes** - This option speeds up both read and write performance. When using this option, all clients must access files through the Azure HPC Cache instead of mounting the back-end storage directly. The cached files will have recent changes that have not yet been copied to the back end.
-
- In this usage model, files in the cache are only checked against the files on back-end storage every eight hours. The cached version of the file is assumed to be more current. A modified file in the cache is written to the back-end storage system after it has been in the cache for an hour with no additional changes.
-
-* **Clients write to the NFS target, bypassing the cache** - Choose this option if any clients in your workflow write data directly to the storage system without first writing to the cache, or if you want to optimize data consistency. Files that clients request are cached (reads), but any changes to those files from the client (writes) are not cached. They are passed through directly to the back-end storage system.
-
- With this usage model, the files in the cache are frequently checked against the back-end versions for updates - every 30 seconds. This verification allows files to be changed outside of the cache while maintaining data consistency.
-
- > [!TIP]
- > Those first three basic usage models can be used to handle the majority of Azure HPC Cache workflows. The next options are for less common scenarios.
+ Files stored in the cache are not automatically compared to the files on the NFS storage volume. (Read the description of verification timer above to learn how to compare them manually.)
-* **Greater than 15% writes, checking the backing server for changes every 30 seconds** and **Greater than 15% writes, checking the backing server for changes every 60 seconds** - These options are designed for workflows where you want to speed up both reads and writes, but there's a chance that another user will write directly to the back-end storage system. For example, if multiple sets of clients are working on the same files from different locations, these usage models might make sense to balance the need for quick file access with low tolerance for stale content from the source.
+ When choosing the **Read-only caching** option, you may change the Verification timer. The default value is 30 seconds. The value must be an integer (no decimals) between 1 and 31536000 seconds (1 year) inclusive.
-* **Greater than 15% writes, write back to the server every 30 seconds** - This option is designed for the scenario where multiple clients are actively modifying the same files, or if some clients access the back-end storage directly instead of mounting the cache.
+* **Read-write caching** - This option caches both read and write operations. When using this option, most clients are expected to access files through the Azure HPC Cache instead of mounting the back-end storage directly. The cached files will have recent changes that have not yet been copied to the back end.
- The frequent back-end writes affect cache performance, so you should consider using the **Greater than 15% writes** usage model if there's a low risk of file conflict - for example, if you know that different clients are working in different areas of the same file set.
+ In this usage model, files in the cache are only checked against the files on back-end storage every eight hours by default. The cached version of the file is assumed to be more current. A modified file in the cache is written to the back-end storage system after it has been in the cache for an hour by default.
-* **Read heavy, checking the backing server every 3 hours** - This option prioritizes fast reads on the client side, but also refreshes cached files from the back-end storage system regularly, unlike the **Read heavy, infrequent writes** usage model.
+ When choosing the **Read-write caching** option, you may change both the Verification timer and the Write-back timer. The Verification timer default value is 28,800 seconds (8 hours). The value must be an integer (no decimals) between 1 and 31536000 seconds (1 year) inclusive. The Write-back timer default value is 3600 seconds (1 hour). The value must be an integer (no decimals) between 1 and 31536000 seconds (1 year) inclusive.
This table summarizes the usage model differences: [!INCLUDE [usage-models-table.md](includes/usage-models-table.md)]
+> [!WARNING]
+> **Changing usage models causes a service disruption.** HPC Cache clients will not receive responses while the usage model is transitioning. If you must change usage models, it is recommended that the change is made during a scheduled maintenance window to prevent client disruption.
+ If you have questions about the best usage model for your Azure HPC Cache workflow, talk to your Azure representative or open a support request for help. > [!TIP] > A utility is available to write specific individual files back to a storage target without writing the entire cache contents. Learn more about the flush_file.py script in [Customize file write-back in Azure HPC Cache](custom-flush-script.md).
-## Change usage models
-
-You can change usage models by editing the storage target, but some changes are not allowed because they create a small risk of file version conflict.
-
-You can't change **to** or **from** the model named **Read heavy, infrequent writes**. To change a storage target to this usage model, or to change it from this model to a different usage model, you have to delete the original storage target and create a new one.
-
-This restriction also applies to the usage model **Read heavy, checking the backing server every 3 hours**, which is less commonly used. Also, you can change between the two "read heavy..." usage models, but not into or out of a different usage model style.
-
-This restriction is needed because of the way different usage models handle Network Lock Manager (NLM) requests. Azure HPC Cache sits between clients and the back-end storage system. Usually, the cache passes NLM requests through to the back-end storage system, but in some situations, the cache itself acknowledges the NLM request and returns a value to the client. In Azure HPC Cache, this only happens when you use the usage models **Read heavy, infrequent writes** or **Read heavy, checking the backing server every 3 hours**, or with a standard blob storage target, which doesn't have configurable usage models.
-
-If you change between **Read heavy, infrequent writes** and a different usage model, there's no way to transfer the current NLM state from the cache to the storage system or vice versa. So the client's lock status is inaccurate.
-
-> [!NOTE]
-> ADLS-NFS does not support NLM. You should disable NLM when clients mount the cluster to access an ADLS-NFS storage target.
->
-> Use the option ``-o nolock`` in the ``mount`` command. Check your client operating system's mount documentation (man 5 nfs) to learn the exact behavior of the ``nolock`` option for your clients.
## Next steps
hpc-cache Hpc Cache Edit Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-edit-storage.md
az hpc-cache nfs-storage-target update --cache-name mycache \
The usage model influences how the cache retains data. Read [Understand cache usage models](cache-usage-models.md) to learn more. > [!NOTE]
-> You can't change between **Read heavy, infrequent writes** and other usage models. Read [Understand cache usage models](cache-usage-models.md#change-usage-models) for details.
+> Changing usage models causes a service disruption to clients. Read [Choose the right usage model](cache-usage-models.md#choose-the-right-usage-model-for-your-workflow) for details.
To change the usage model for an NFS storage target, use one of these methods.
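For the Azure CLI method, the following is a minimal sketch only: it assumes the `az hpc-cache nfs-storage-target update` command shown earlier in this article and its `--nfs3-usage-model` parameter, and it uses placeholder resource names and a placeholder usage model value. Confirm the model identifiers and any timer-related options that your installed `hpc-cache` CLI extension accepts before running it.

```bash
# Sketch: switch an existing NFS storage target to a different usage model.
# <usage-model-name> is a placeholder - list the values your CLI version accepts
# (for example, with `az hpc-cache usage-model list`, if available).
# Remember that changing the usage model briefly disrupts client responses.
az hpc-cache nfs-storage-target update \
  --resource-group myResourceGroup \
  --cache-name mycache \
  --name nfs-target-1 \
  --nfs3-usage-model <usage-model-name>
```

Because the change interrupts client responses, consider scheduling it during a maintenance window, as noted above.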
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
For more about Azure Key Vault management guidelines, see:
| Key Vault Administrator| Perform all data plane operations on a key vault and all objects in it, including certificates, keys, and secrets. Cannot manage key vault resources or manage role assignments. Only works for key vaults that use the 'Azure role-based access control' permission model. | 00482a5a-887f-4fb3-b363-3b7fe8e74483 | | Key Vault Reader | Read metadata of key vaults and its certificates, keys, and secrets. Cannot read sensitive values such as secret contents or key material. Only works for key vaults that use the 'Azure role-based access control' permission model. | 21090545-7ca7-4776-b22c-e363652d74d2 | | Key Vault Certificates Officer | Perform any action on the certificates of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | a4417e6f-fecd-4de8-b567-7b0420556985 |
-| Key Vault Certificates User | Read entire certificate contents including secret and key portion. Only works for key vaults that use the 'Azure role-based access control' permission model. | db79e9a7-68ee-4b58-9aeb-b90e7c24fcba |
+| Key Vault Certificate User | Read entire certificate contents including secret and key portion. Only works for key vaults that use the 'Azure role-based access control' permission model. | db79e9a7-68ee-4b58-9aeb-b90e7c24fcba |
| Key Vault Crypto Officer | Perform any action on the keys of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | 14b46e9e-c2b7-41b4-b07b-48a6ebf60603 | | Key Vault Crypto Service Encryption User | Read metadata of keys and perform wrap/unwrap operations. Only works for key vaults that use the 'Azure role-based access control' permission model. | e147488a-f6f5-4113-8e2d-b22465e65bf6 | | Key Vault Crypto User | Perform cryptographic operations using keys. Only works for key vaults that use the 'Azure role-based access control' permission model. | 12338af0-0e69-4776-bea7-57ae8d297424 |
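As a companion to the role table above, here's a small sketch of assigning one of these built-in roles with the Azure CLI. The vault name, resource group, subscription ID, and user are placeholders; if the role display name differs in your tenant, you can use the role ID listed in the table instead.

```bash
# Sketch: grant a user the Key Vault Certificate User role on a single vault that
# uses the Azure role-based access control permission model.
# All names below are placeholders.
az role assignment create \
  --role "Key Vault Certificate User" \
  --assignee "user@contoso.com" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"

# Equivalent, using the role ID from the table instead of the display name:
#   --role "db79e9a7-68ee-4b58-9aeb-b90e7c24fcba"
```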
lab-services Troubleshoot Access Lab Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/troubleshoot-access-lab-vm.md
Last updated 12/05/2022
# Troubleshoot accessing lab VMs in Azure Lab Services
-In this article, you learn about the different approaches for troubleshooting lab VMs. Understand how each approach affects your lab environment and user data on the lab VM. There can be different reasons why you're unable to connect to a lab VM in Azure Lab Services, or why you're stuck to complete a course. For example, the underlying VM is experiencing issues, your organization's firewall settings have changed, or a software change in the lab VM operating system.
+In this article, you learn about the different approaches for troubleshooting lab VMs. Understand how each approach affects your lab environment and user data on the lab VM. There can be different reasons why you're unable to access a lab VM in Azure Lab Services, or why you're stuck and can't complete a course. For example, the underlying VM is experiencing issues, your organization's firewall settings changed, or a software change was made in the lab VM operating system.
## Prerequisites
In this article, you learn about the different approaches for troubleshooting la
To use and access a lab VM, you connect to it by using Remote Desktop (RDP) or Secure Shell (SSH). You may experience difficulties accessing your lab VM: -- You're unable to connect to the lab VM from your computer by using RDP or SSH. There might be a problem with the underlying VM, or a network or firewall configuration might prevent you from connecting.
+- You're unable to start the lab VM
-- You're unable to login to the lab VM.
+- You're unable to connect to the lab VM from your computer by using RDP or SSH
-- After connecting to the lab VM, the VM is not working correctly.
+- You're unable to log in to the lab VM
+
+- After connecting to the lab VM, the VM isn't working correctly
## Troubleshooting steps
+### Unable to start lab VM
+When our service is unable to maintain a connection with a lab VM, a warning appears next to the lab VM name.
+If you see a lab VM alert next to its name, then hover over it to read the error message. This message might state that _"Our cloud virtual machine has become non-responsive and will be stopped automatically to avoid overbilling."_
+Scenarios might include:
+
+|Scenario|Cause|Resolution|
+|-|-|-|
+|Shutdown lab at OS level |Ending a lab VM session through OS level shutdown |Lab users may start the lab VM at any time without affecting lab connectivity |
+|Network configuration |• Installing a firewall that has an outbound rule blocking port 443 <br> • Changing DNS settings or using a custom DNS solution that can't find our DNS endpoint <br> • Changing DHCP settings or the IP address in the VM |Learn more about [supported networking scenarios and topologies for advanced networking](./concept-lab-services-supported-networking-scenarios.md) and review [troubleshooting lab VM connection](./troubleshoot-connect-lab-vm.md) |
+|OS disk full |• Limited disk space prevents the lab VM from starting <br> • A nested virtualization template with a full host disk prevents the lab from publishing|Ensure at least 1 GB of space is available on the primary disk |
+|Lab Services Agent |Disabling the Lab Services agent on the lab VM in any form, including: <br> • Changing system files or folders under C:\WindowsAzure <br> • Modifying services by either starting or stopping the Azure agent |• Check if the idle agent service started, which should be set as a 'Manual' startup task for the VM Agent service to start <br> • If the LabServicesIdleAgent service isn't already running, run a Windows startup task to start it <br> • Students should avoid making changes to any files/folders under C:\WindowsAzure |
+
+If you have questions or need help, review the [Advanced troubleshooting](#advanced-troubleshooting) section.
+ ### Unable to connect to the lab VM with Remote Desktop (RDP) or Secure Shell (SSH)
-1. [Redeploy your lab VM](./how-to-reset-and-redeploy-vm.md#redeploy-a-lab-vm) to another infrastructure node, while maintaining the user data.
+1. [Redeploy your lab VM](./how-to-reset-and-redeploy-vm.md#redeploy-a-lab-vm) to another infrastructure node, while maintaining the user data
This approach might help resolve issues with the underlying virtual machine. Learn more about [redeploying versus reimaging a lab VM](#redeploy-versus-reimage-a-lab-vm) and how they affect your user data.
-1. [Verify your organization's firewall settings for your lab](./how-to-configure-firewall-settings.md) with the educator and IT admin.
+1. [Verify your organization's firewall settings for your lab](./how-to-configure-firewall-settings.md) with the educator and IT admin
A change in the organization's firewall or network settings might prevent your computer from connecting to the lab VM.
-1. If you still can't connect to the lab VM, [reimage the lab VM](./how-to-reset-and-redeploy-vm.md#reimage-a-lab-vm).
+1. If you still can't connect to the lab VM, [reimage the lab VM](./how-to-reset-and-redeploy-vm.md#reimage-a-lab-vm)
> [!IMPORTANT] > Reimaging a lab VM deletes the user data in the VM. Make sure to [store the user data outside the lab VM](#store-user-data-outside-the-lab-vm).
When you create a new lab from an exported lab VM image, perform the following s
1. After the lab creation finishes, you can [reset the password](./how-to-set-virtual-machine-passwords.md).
-### After logging in, the lab VM is not working correctly
+### After logging in, the lab VM isn't working correctly
The lab VM might be malfunctioning as a result of installing a software component, or making a change to the operating system configuration.
load-balancer Load Balancer Common Deployment Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-common-deployment-errors.md
Title: Troubleshoot common deployment errors
description: Describes how to resolve common errors when you deploy Azure Load Balancers.
-tags: top-support-issue
load-balancer Load Balancer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-overview.md
An **[internal (or private) load balancer](./components.md#frontend-ip-configura
For more information on the individual load balancer components, see [Azure Load Balancer components](./components.md).
->[!NOTE]
-> Azure provides a suite of fully managed load-balancing solutions for your scenarios.
-> * If you are looking to do DNS based global routing and do **not** have requirements for Transport Layer Security (TLS) protocol termination ("SSL offload"), per-HTTP/HTTPS request or application-layer processing, review [Traffic Manager](../traffic-manager/traffic-manager-overview.md).
-> * If you want to load balance between your servers in a region at the application layer, review [Application Gateway](../application-gateway/overview.md).
-> * If you need to optimize global routing of your web traffic and optimize top-tier end-user performance and reliability through quick global failover, see [Front Door](../frontdoor/front-door-overview.md).
->
-> Your end-to-end scenarios may benefit from combining these solutions as needed.
-> For an Azure load-balancing options comparison, see [Overview of load-balancing options in Azure](/azure/architecture/guide/technology-choices/load-balancing-overview).
-- ## Why use Azure Load Balancer? With Azure Load Balancer, you can scale your applications and create highly available services. Load balancer supports both inbound and outbound scenarios. Load balancer provides low latency and high throughput, and scales up to millions of flows for all TCP and UDP applications.
logic-apps Biztalk Server Azure Integration Services Migration Approaches https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/biztalk-server-azure-integration-services-migration-approaches.md
Again, having a naming convention is critical, although the format isn't overly
`CN-<*connector-name*>-<*logic-app-or-workflow-name*>`
-As a concrete example, you might rename a Service Bus connection in an **OrderQueue** logic app or workflow with **CN-ServiceBus-OrderQueue** as the new name. For more information, see the Serverless360 blog post [Logic app best practices, tips, and tricks: #11 connectors naming convention](https://www.serverless360.com/blog/logic-app-best-practices-tips-and-tricks-11-connectors-naming-convention).
+As a concrete example, you might rename a Service Bus connection in an **OrderQueue** logic app or workflow with **CN-ServiceBus-OrderQueue** as the new name. For more information, see the Turbo360 (formerly Serverless360) blog post [Logic app best practices, tips, and tricks: #11 connectors naming convention](https://www.turbo360.com/blog/logic-app-best-practices-tips-and-tricks-11-connectors-naming-convention).
### Handle exceptions with scopes and "Run after" options
You've now learned more about available migration approaches, planning considera
> [!div class="nextstepaction"] >
-> [Give feedback about migration guidance for BizTalk Server to Azure Integration Services](https://aka.ms/BizTalkMigrationGuidance)
+> [Give feedback about migration guidance for BizTalk Server to Azure Integration Services](https://aka.ms/BizTalkMigrationGuidance)
logic-apps Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connectors/sap.md
Last updated 02/10/2024
-tags: connectors
# Connect to SAP from workflows in Azure Logic Apps
logic-apps Logic Apps Enterprise Integration Flatfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-flatfile.md
Last updated 01/10/2024
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-Before you send XML content to a business partner in a business-to-business (B2B) scenario, you might want to encode that content first. If you receive encoded XML content, you'll need to decode that content first. When you're building a logic app workflow in Azure Logic Apps, you can encode and decode flat files by using the **Flat File** built-in connector actions and a flat file schema for encoding and decoding. You can use **Flat File** actions in multi-tenant Consumption logic app workflows and single-tenant Standard logic app workflows.
+Before you send XML content to a business partner in a business-to-business (B2B) scenario, you might want to encode that content first. If you receive encoded XML content, you'll need to decode that content first. When you're building a logic app workflow in Azure Logic Apps, you can encode and decode flat files by using the **Flat File** built-in connector actions and a flat file schema for encoding and decoding. You can use **Flat File** actions in multitenant Consumption logic app workflows and single-tenant Standard logic app workflows.
While no **Flat File** triggers are available, you can use any trigger or action to feed the source XML content into your workflow. For example, you can use a built-in connector trigger, a managed or Azure-hosted connector trigger available for Azure Logic Apps, or even another app.
This article shows how to add the **Flat File** encoding and decoding actions to
For more information, review the following documentation: * [Consumption versus Standard logic apps](logic-apps-overview.md#resource-environment-differences)
-* [Integration account built-in connectors](../connectors/built-in.md#integration-account-built-in)
+* [Integration account built-in connectors](../connectors/built-in.md#b2b-built-in-operations)
* [Built-in connectors overview for Azure Logic Apps](../connectors/built-in.md) * [Managed or Azure-hosted connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
For more information, review the following documentation:
* Your logic app resource and workflow. Flat file operations don't have any triggers available, so your workflow has to minimally include a trigger. For more information, see the following documentation:
- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
+ * [Create an example Consumption logic app workflow in multitenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
* [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
logic-apps Logic Apps Enterprise Integration Liquid Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-liquid-transform.md
Last updated 01/04/2024
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-When you want to perform basic JSON transformations in your logic app workflows, you can use built-in data operations, such as the **Compose** action or **Parse JSON** action. However, some scenarios might require advanced and complex transformations that include elements such as iterations, control flows, and variables. For transformations between JSON to JSON, JSON to text, XML to JSON, or XML to text, you can create a template that describes the required mapping or transformation using the Liquid open-source template language. You can select this template when you add a **Liquid** built-in action to your workflow. You can use **Liquid** actions in multi-tenant Consumption logic app workflows and single-tenant Standard logic app workflows.
+When you want to perform basic JSON transformations in your logic app workflows, you can use built-in data operations, such as the **Compose** action or **Parse JSON** action. However, some scenarios might require advanced and complex transformations that include elements such as iterations, control flows, and variables. For transformations between JSON to JSON, JSON to text, XML to JSON, or XML to text, you can create a template that describes the required mapping or transformation using the Liquid open-source template language. You can select this template when you add a **Liquid** built-in action to your workflow. You can use **Liquid** actions in multitenant Consumption logic app workflows and single-tenant Standard logic app workflows.
While no **Liquid** triggers are available, you can use any trigger or action to feed the source JSON or XML content into your workflow. For example, you can use a built-in connector trigger, a managed or Azure-hosted connector trigger available for Azure Logic Apps, or even another app.
For more information, review the following documentation:
* [Perform data operations in Azure Logic Apps](logic-apps-perform-data-operations.md) * [Liquid open-source template language](https://shopify.github.io/liquid/) * [Consumption versus Standard logic apps](logic-apps-overview.md#resource-environment-differences)
-* [Integration account built-in connectors](../connectors/built-in.md#integration-account-built-in)
+* [Integration account built-in connectors](../connectors/built-in.md#b2b-built-in-operations)
* [Built-in connectors overview for Azure Logic Apps](../connectors/built-in.md) * [Managed or Azure-hosted connectors overview for Azure Logic Apps](../connectors/managed.md) and [Managed or Azure-hosted connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
For more information, review the following documentation:
* Your logic app resource and workflow. Liquid operations don't have any triggers available, so your workflow has to minimally include a trigger. For more information, see the following documentation:
- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
+ * [Create an example Consumption logic app workflow in multitenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
* [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
Title: Single-tenant versus multi-tenant Azure Logic Apps
-description: Learn the differences between single-tenant, multi-tenant, and integration service environment (ISE) for Azure Logic Apps.
+ Title: Differences between Standard and Consumption logic apps
+description: Learn the differences between Standard workflows (single-tenant) and Consumption workflows (multitenant) in Azure Logic Apps.
ms.suite: integration Previously updated : 10/30/2023 Last updated : 02/15/2024
-# Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps
+# Differences between Standard single-tenant logic apps versus Consumption multitenant logic apps
-Azure Logic Apps is a cloud-based platform for creating and running automated *logic app workflows* that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. When you create a logic app resource, you select either the **Consumption** workflow type or **Standard** workflow type. A Consumption logic app can have only one workflow that runs in *multi-tenant* Azure Logic Apps or an *integration service environment*. A Standard logic app can have one or multiple workflows that run in *single-tenant* Azure Logic Apps or an App Service Environment (ASE).
+Azure Logic Apps is a cloud-based platform for creating and running automated *logic app workflows* that integrate your apps, data, services, and systems. With this platform, you can quickly develop highly scalable integration solutions for your enterprise and business-to-business (B2B) scenarios. When you create a logic app resource, you select either the **Consumption** workflow type or **Standard** workflow type. A Consumption logic app can have only one workflow that runs in *multitenant* Azure Logic Apps or an *integration service environment*. A Standard logic app can have one or multiple workflows that run in *single-tenant* Azure Logic Apps or an App Service Environment v3 (ASE v3).
-Before you choose which logic app resource to create, review the following guide to learn how the logic app workflow types and service environments compare with each other. You can then make a better choice about which logic app workflow and environment best suits your scenario, solution requirements, and the destination where you want to deploy and run your workflows.
+Before you choose which logic app resource to create, review the following guide to learn how the logic app workflow types compare with each other. You can then make a better choice about which logic app workflow and environment best suits your scenario, solution requirements, and the destination where you want to deploy and run your workflows.
If you're new to Azure Logic Apps, review [What is Azure Logic Apps?](logic-apps-overview.md) and [What is a *logic app workflow*?](logic-apps-overview.md#logic-app-concepts).
If you're new to Azure Logic Apps, review [What is Azure Logic Apps?](logic-apps
## Logic app workflow types and environments
-The following table summarizes the differences between a Consumption logic app workflow and Standard logic app workflow. You also learn how the *single-tenant* environment differs from the *multi-tenant* environment and *integration service environment (ISE)* for deploying, hosting, and running your workflows.
+The following table summarizes the differences between a **Consumption** logic app workflow and **Standard** logic app workflow. You also learn how the single-tenant environment differs from the multitenant environment and an integration service environment (ISE) for deploying, hosting, and running your workflows.
[!INCLUDE [Logic app workflow and environment differences](../../includes/logic-apps-resource-environment-differences-table.md)]
The following table summarizes the differences between a Consumption logic app w
The **Standard** logic app and workflow is powered by the redesigned single-tenant Azure Logic Apps runtime. This runtime uses the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and is hosted as an extension on the Azure Functions runtime. This design provides portability, flexibility, and more performance for your logic app workflows plus other capabilities and benefits inherited from the Azure Functions platform and the Azure App Service ecosystem. For example, you can create, deploy, and run single-tenant based logic apps and their workflows in [Azure App Service Environment v3 (Windows plans only)](../app-service/environment/overview.md).
-The Standard logic app introduces a resource structure that can host multiple workflows, similar to how an Azure function app can host multiple functions. With a 1-to-many mapping, workflows in the same logic app and tenant share compute and processing resources, providing better performance due to their proximity. This structure differs from the **Consumption** logic app resource where you have a 1-to-1 mapping between the logic app resource and a workflow.
+The **Standard** logic app introduces a resource structure that can host multiple workflows, similar to how an Azure function app can host multiple functions. With a 1-to-many mapping, workflows in the same logic app and tenant share compute and processing resources, providing better performance due to their proximity. This structure differs from the **Consumption** logic app resource where you have a 1-to-1 mapping between the logic app resource and a workflow.
To learn more about portability, flexibility, and performance improvements, continue reviewing the following sections. For more information about the single-tenant Azure Logic Apps runtime and Azure Functions extensibility, review the following documentation:
To learn more about portability, flexibility, and performance improvements, cont
When you create a **Standard** logic app and workflow, you can deploy and run your workflow in other environments, such as [Azure App Service Environment v3 (Windows plans only)](../app-service/environment/overview.md). If you use Visual Studio Code with the **Azure Logic Apps (Standard)** extension, you can *locally* develop, build, and run your workflow in your development environment without having to deploy to Azure. If your scenario requires containers, you can [create single tenant logic apps using Azure Arc-enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md). For more information, see [What is Azure Arc enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md)
-These capabilities provide major improvements and substantial benefits compared to the multi-tenant model, which requires you to develop against an existing running resource in Azure. The multi-tenant model for automating **Consumption** logic app resource deployment is based on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both apps and infrastructure.
+These capabilities provide major improvements and substantial benefits compared to the multitenant model, which requires you to develop against an existing running resource in Azure. The multitenant model for automating **Consumption** logic app resource deployment is based on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both apps and infrastructure.
With the **Standard** logic app resource, deployment becomes easier because you can separate app deployment from infrastructure deployment. You can package the single-tenant Azure Logic Apps runtime and your workflows together as part of your logic app resource or project. You can use generic steps or tasks that build, assemble, and zip your logic app resources into ready-to-deploy artifacts. To deploy your infrastructure, you can still use ARM templates to separately provision those resources along with other processes and pipelines that you use for those purposes.
When you use the new built-in connector operations, you create connections calle
Workflows that run in either single-tenant Azure Logic Apps or in an *integration service environment* (ISE) can directly access secured resources such as virtual machines (VMs), other services, and systems that exist in an [Azure virtual network](../virtual-network/virtual-networks-overview.md).
-Both single-tenant Azure Logic Apps and an ISE are dedicated instances of the Azure Logic Apps service, use dedicated resources, and run separately from multi-tenant Azure Logic Apps. Running workflows in a dedicated instance helps reduce the impact that other Azure tenants might have on app performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors).
+Both single-tenant Azure Logic Apps and an ISE are dedicated instances of the Azure Logic Apps service, use dedicated resources, and run separately from multitenant Azure Logic Apps. Running workflows in a dedicated instance helps reduce the impact that other Azure tenants might have on app performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors).
Single-tenant Azure Logic Apps and an ISE also provide the following benefits:
-* Your own static IP addresses, which are separate from the static IP addresses that are shared by the logic apps in the multi-tenant Azure Logic Apps. You can also set up a single public, static, and predictable outbound IP address to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE.
+* Your own static IP addresses, which are separate from the static IP addresses that are shared by the logic apps in the multitenant Azure Logic Apps. You can also set up a single public, static, and predictable outbound IP address to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE.
* Increased limits on run duration, storage retention, throughput, HTTP request and response timeouts, message sizes, and custom connector requests. For more information, review [Limits and configuration for Azure Logic Apps](logic-apps-limits-and-config.md).
To create a logic app resource based on the environment that you want, you have
| Azure Arc-enabled Logic Apps | [Azure Arc-enabled Logic Apps sample](https://github.com/Azure/logicapps/tree/master/arc-enabled-logic-app-sample) | - [What is Azure Arc-enabled Logic Apps?](azure-arc-enabled-logic-apps-overview.md) <br><br>- [Create and deploy single-tenant based logic app workflows with Azure Arc-enabled Logic Apps](azure-arc-enabled-logic-apps-create-deploy-workflows.md) | | Azure REST API | [Azure App Service REST API](/rest/api/appservice/workflows)* <br><br>**Note**: The Standard logic app REST API is included with the Azure App Service REST API. | [Get started with Azure REST API reference](/rest/api/azure) |
-**Multi-tenant environment**
+**Multitenant environment**
| Option | Resources and tools | More information | |--|||
-| Azure portal | **Consumption** logic app | [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps - Azure portal](quickstart-create-example-consumption-workflow.md) |
-| Visual Studio Code | [**Azure Logic Apps (Consumption)** extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-logicapps) | [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps - Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md)
-| Azure CLI | [**Logic Apps Azure CLI** extension](https://github.com/Azure/azure-cli-extensions/tree/master/src/logic) | - [Quickstart: Create and manage Consumption logic app workflows in multi-tenant Azure Logic Apps - Azure CLI](quickstart-logic-apps-azure-cli.md) <br><br>- [az logic](/cli/azure/logic) |
-| Azure Resource Manager | [**Create a logic app** ARM template](https://azure.microsoft.com/resources/templates/logic-app-create/) | [Quickstart: Create and deploy Consumption logic app workflows in multi-tenant Azure Logic Apps - ARM template](quickstart-create-deploy-azure-resource-manager-template.md) |
+| Azure portal | **Consumption** logic app | [Quickstart: Create an example Consumption logic app workflow in multitenant Azure Logic Apps - Azure portal](quickstart-create-example-consumption-workflow.md) |
+| Visual Studio Code | [**Azure Logic Apps (Consumption)** extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-logicapps) | [Quickstart: Create an example Consumption logic app workflow in multitenant Azure Logic Apps - Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md)
+| Azure CLI | [**Logic Apps Azure CLI** extension](https://github.com/Azure/azure-cli-extensions/tree/master/src/logic) | - [Quickstart: Create and manage Consumption logic app workflows in multitenant Azure Logic Apps - Azure CLI](quickstart-logic-apps-azure-cli.md) <br><br>- [az logic](/cli/azure/logic) |
+| Azure Resource Manager | [**Create a logic app** ARM template](https://azure.microsoft.com/resources/templates/logic-app-create/) | [Quickstart: Create and deploy Consumption logic app workflows in multitenant Azure Logic Apps - ARM template](quickstart-create-deploy-azure-resource-manager-template.md) |
| Azure PowerShell | [Az.LogicApp module](/powershell/module/az.logicapp) | [Get started with Azure PowerShell](/powershell/azure/get-started-azureps) | | Azure REST API | [Azure Logic Apps REST API](/rest/api/logic) | [Get started with Azure REST API reference](/rest/api/azure) |
To create a logic app resource based on the environment that you want, you have
| Option | Resources and tools | More information | |--|||
-| Azure portal | **Consumption** logic app deployed to an existing ISE resource | Same as [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps - Azure portal](quickstart-create-example-consumption-workflow.md), but select an ISE, not a multi-tenant region. |
+| Azure portal | **Consumption** logic app deployed to an existing ISE resource | Same as [Quickstart: Create an example Consumption logic app workflow in multitenant Azure Logic Apps - Azure portal](quickstart-create-example-consumption-workflow.md), but select an ISE, not a multitenant region. |
Although your development experiences differ based on whether you create **Consumption** or **Standard** logic app resources, you can find and access all your deployed logic apps under your Azure subscription.
Within a **Standard** logic app, you can create the following workflow types:
Create a stateful workflow when you need to keep, review, or reference data from previous events. These workflows save all the operations' inputs, outputs, and states to external storage. This information makes reviewing the workflow run details and history possible after each run finishes. Stateful workflows provide high resiliency if outages happen. After services and systems are restored, you can reconstruct interrupted runs from the saved state and rerun the workflows to completion. Stateful workflows can continue running for much longer than stateless workflows.
- By default, stateful workflows in both multi-tenant and single-tenant Azure Logic Apps run asynchronously. All HTTP-based actions follow the standard [asynchronous operation pattern](/azure/architecture/patterns/async-request-reply). After an HTTP action calls or sends a request to an endpoint, service, system, or API, the request receiver immediately returns a ["202 ACCEPTED"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3) response. This code confirms that the receiver accepted the request but hasn't finished processing. The response can include a `location` header that specifies the URI and a refresh ID that the caller can use to poll or check the status for the asynchronous request until the receiver stops processing and returns a ["200 OK"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) success response or other non-202 response. However, the caller doesn't have to wait for the request to finish processing and can continue to run the next action. For more information, see [Asynchronous microservice integration enforces microservice autonomy](/azure/architecture/microservices/design/interservice-communication#synchronous-versus-asynchronous-messaging).
+ By default, stateful workflows in both multitenant and single-tenant Azure Logic Apps run asynchronously. All HTTP-based actions follow the standard [asynchronous operation pattern](/azure/architecture/patterns/async-request-reply). After an HTTP action calls or sends a request to an endpoint, service, system, or API, the request receiver immediately returns a ["202 ACCEPTED"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3) response. This code confirms that the receiver accepted the request but hasn't finished processing. The response can include a `location` header that specifies the URI and a refresh ID that the caller can use to poll or check the status for the asynchronous request until the receiver stops processing and returns a ["200 OK"](https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.1) success response or other non-202 response. However, the caller doesn't have to wait for the request to finish processing and can continue to run the next action. For more information, see [Asynchronous microservice integration enforces microservice autonomy](/azure/architecture/microservices/design/interservice-communication#synchronous-versus-asynchronous-messaging).
* *Stateless*
The single-tenant model and **Standard** logic app include many current and new
A **Standard** workflow can use many of the same built-in connectors as a Consumption workflow, but not all. Vice versa, a Standard workflow has many built-in connectors that aren't available in a Consumption workflow.
-For example, a Standard workflow has both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, SQL Server, and others. Although a Consumption workflow doesn't have these same built-in connector versions, other built-in connectors such as Azure API Management, Azure App Services, and Batch, are available.
+For example, a Standard workflow has both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, SQL Server, and others. Although a Consumption workflow doesn't have these same built-in connector versions, other built-in connectors such as Azure API Management and Azure App Services are available.
In single-tenant Azure Logic Apps, [built-in connectors with specific attributes are informally known as *service providers*](../connectors/built-in.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Microsoft Entra ID, or a managed identity. All built-in connectors run in the same process as the redesigned Azure Logic Apps runtime. For more information, review the [built-in connector list for Standard logic app workflows](../connectors/built-in.md#built-in-connectors).
In single-tenant Azure Logic Apps, [built-in connectors with specific attributes
For the **Standard** logic app workflow, these capabilities have changed, or they're currently limited, unavailable, or unsupported:
-* **Triggers and actions**: [Built-in triggers and actions](../connectors/built-in.md) run natively in Azure Logic Apps, while managed connectors are hosted and run using shared resources in Azure. For Standard workflows, some built-in triggers and actions are currently unavailable, such as Sliding Window, Batch, Azure App Service, and Azure API Management. To start a stateful or stateless workflow, use a built-in trigger such as the Request, Event Hubs, or Service Bus trigger. The Recurrence trigger is available for stateful workflows, but not stateless workflows. In the designer, built-in triggers and actions appear with the **In-App** label, while [managed connector triggers and actions](../connectors/managed.md) appear with the **Shared** label.
+* **Triggers and actions**: [Built-in triggers and actions](../connectors/built-in.md) run natively in Azure Logic Apps, while managed connectors are hosted and run using shared resources in Azure. For Standard workflows, some built-in triggers and actions are currently unavailable, such as Sliding Window, Azure App Service, and Azure API Management. To start a stateful or stateless workflow, use a built-in trigger such as the Request, Event Hubs, or Service Bus trigger. The Recurrence trigger is available for stateful workflows, but not stateless workflows. In the designer, built-in triggers and actions appear with the **In-App** label, while [managed connector triggers and actions](../connectors/managed.md) appear with the **Shared** label.
For *stateless* workflows, *managed connector actions* are available, but *managed connector triggers* are unavailable. Although you can enable managed connectors for stateless workflows, the designer doesn't show any managed connector triggers for you to add.
For the **Standard** logic app workflow, these capabilities have changed, or the
* The following triggers and actions have either changed or are currently limited, unsupported, or unavailable:
- * The built-in action, [Azure Functions - Choose an Azure function](logic-apps-azure-functions.md) is now **Azure Function Operations - Call an Azure function**. This action currently works only for functions that are created from the **HTTP Trigger** template.
+ * The built-in action, [Azure Functions - Choose an Azure function](logic-apps-azure-functions.md) is now **Azure Functions Operations - Call an Azure function**. This action currently works only for functions that are created from the **HTTP Trigger** template.
In the Azure portal, you can select an HTTP trigger function that you can access by creating a connection through the user experience. If you inspect the function action's JSON definition in code view or the **workflow.json** file using Visual Studio Code, the action refers to the function by using a `connectionName` reference. This version abstracts the function's information as a connection, which you can find in your logic app project's **connections.json** file, which is available after you create a connection in Visual Studio Code.
For the **Standard** logic app workflow, these capabilities have changed, or the
> Azure Logic Apps gets the default key from the function when making the connection, > stores that key in your app's settings, and uses the key for authentication when calling the function. >
- > As in the multi-tenant model, if you renew this key, for example, through the Azure Functions experience
+ > As in the multitenant model, if you renew this key, for example, through the Azure Functions experience
> in the portal, the function action no longer works due to the invalid key. To fix this problem, you need > to recreate the connection to the function that you want to call or update your app's settings with the new key.
For the **Standard** logic app workflow, these capabilities have changed, or the
* Managed identity authentication: Both system-assigned and user-assigned managed identity support is available. By default, the system-assigned managed identity is automatically enabled. However, most [built-in, service provider-based connectors](/azure/logic-apps/connectors/built-in/reference/) don't currently support selecting user-assigned managed identities for authentication.
-* **XML transformation**: Support for referencing assemblies from maps is currently unavailable. Also, only XSLT 1.0 is currently supported.
+* **XML transformation**: Only XSLT 1.0 is currently supported.
* **Breakpoint debugging in Visual Studio Code**: Although you can add and use breakpoints inside the **workflow.json** file for a workflow, breakpoints are supported only for actions at this time, not triggers. For more information, see [Create single-tenant based workflows in Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#manage-breakpoints). * **Trigger history and run history**: For a **Standard** logic app, trigger history and run history in the Azure portal appears at the workflow level, not the logic app resource level. For more information, review [Create single-tenant based workflows using the Azure portal](create-single-tenant-workflows-azure-portal.md).
-* **Back up and restore for workflow run history**: **Standard** logic apps currently don't support back up and restore for workflow run history.
-
-* **Zoom control**: The zoom control is currently unavailable on the designer.
+* **Backup and restore for workflow run history**: **Standard** logic apps currently don't support backup and restore for workflow run history.
* **Deployment targets**: You can't deploy a **Standard** logic app resource to an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) nor to Azure deployment slots.
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
You can use IP network rules to allow access to your workspace and endpoint from
> [!WARNING] > * Enable your endpoint's [public network access flag](concept-secure-online-endpoint.md#secure-inbound-scoring-requests) if you want to allow access to your endpoint from specific public internet IP address ranges. > * When you enable this feature, this has an impact to all existing public endpoints associated with your workspace. This may limit access to new or existing endpoints. If you access any endpoints from a non-allowed IP, you get a 403 error.
+> * To use this feature with Azure Machine Learning managed virtual network, see [Azure Machine Learning managed virtual network](how-to-managed-network.md#scenario-enable-access-from-selected-ip-addresses).
# [Azure CLI](#tab/cli) [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
machine-learning How To Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-llama.md
If you need to deploy a different model, [deploy it to real-time endpoints](#dep
### Prerequisites -- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure Machine Learning account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
- An Azure Machine Learning workspace and a compute instance. If you don't have these, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create them. > [!IMPORTANT]
To create a deployment:
1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering (for example, Llama-2-70b) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. > [!NOTE]
- > Subscribing a project to a particular Azure Marketplace offering (in this case, Llama-2-70b) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
+ > Subscribing a workspace to a particular Azure Marketplace offering (in this case, Llama-2-70b) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
:::image type="content" source="media/how-to-deploy-models-llama/deploy-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="media/how-to-deploy-models-llama/deploy-marketplace-terms.png":::
Follow these steps to deploy a model such as `Llama-2-7b-chat` to a real-time en
> If you don't have enough quota available in the selected project, you can use the option **I want to use shared quota and I acknowledge that this endpoint will be deleted in 168 hours**. 1. Select the **Virtual machine** and the **Instance count** that you want to assign to the deployment.
-1. Select if you want to create this deployment as part of a new endpoint or an existing one. Endpoints can host multiple deployments while keeping resources configuration exclusive for each of them. Deployments under the same endpoint share the endpoint URI and its access keys.
+1. Select if you want to create this deployment as part of a new endpoint or an existing one. Endpoints can host multiple deployments while keeping resource configuration exclusive for each of them. Deployments under the same endpoint share the endpoint URI and its access keys.
1. Indicate if you want to enable **Inferencing data collection (preview)**. 1. Indicate if you want to enable **Package Model (preview)**. 1. Select **Deploy**. After a few moments, the endpoint's **Details** page opens up.
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-image-processing-batch.md
Last updated 10/10/2022 -+ # Image processing with batch model deployments
machine-learning How To Log Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md
Title: Logging MLflow models
-description: Learn how to start logging MLflow models instead of artifacts using MLflow SDK in Azure Machine Learning.
+description: Logging MLflow models, instead of artifacts, with MLflow SDK in Azure Machine Learning
-+ Previously updated : 07/8/2022 Last updated : 02/16/2024 # Logging MLflow models
-The following article explains how to start logging your trained models (or artifacts) as MLflow models. It explores the different methods to customize the way MLflow packages your models and hence how it runs them.
+This article describes how to log your trained models (or artifacts) as MLflow models. It explores the different ways to customize how MLflow packages your models, and how it runs those models.
## Why log models instead of artifacts?
-If you are not familiar with MLflow, you may not be aware of the difference between logging artifacts or files vs. logging MLflow models. We recommend reading the article [From artifacts to models in MLflow](concept-mlflow-models.md) for an introduction to the topic.
+[From artifacts to models in MLflow](concept-mlflow-models.md) describes the difference between logging artifacts or files, as compared to logging MLflow models.
-A model in MLflow is also an artifact, but with a specific structure that serves as a contract between the person that created the model and the person that intends to use it. Such contract helps build the bridge about the artifacts themselves and what they mean.
+An MLflow model is also an artifact. However, that model has a specific structure that serves as a contract between the person that created the model and the person that intends to use it. This contract helps build a bridge between the artifacts themselves and their meanings.
-Logging models has the following advantages:
+Model logging has these advantages:
> [!div class="checklist"]
-> * Models can be directly loaded for inference using `mlflow.<flavor>.load_model` and use the `predict` function.
-> * Models can be used as pipelines inputs directly.
-> * Models can be deployed without indicating a scoring script nor an environment.
-> * Swagger is enabled in deployed endpoints automatically and the __Test__ feature can be used in Azure Machine Learning studio.
-> * You can use the Responsible AI dashboard.
+> * You can directly load models, for inference, with `mlflow.<flavor>.load_model`, and you can use the `predict` function
+> * Pipeline inputs can use models directly
+> * You can deploy models without indication of a scoring script or an environment
+> * Swagger is automatically enabled in deployed endpoints, and the Azure Machine Learning studio can use the __Test__ feature
+> * You can use the Responsible AI dashboard
-There are different ways to start using the model's concept in Azure Machine Learning with MLflow, as explained in the following sections:
+The following sections describe how to use the concept of models in Azure Machine Learning with MLflow:
## Logging models using autolog
-One of the simplest ways to start using this approach is by using MLflow autolog functionality. Autolog allows MLflow to instruct the framework associated to with the framework you are using to log all the metrics, parameters, artifacts and models that the framework considers relevant. By default, most models will be log if autolog is enabled. Some flavors may decide not to do that in specific situations. For instance, the flavor PySpark won't log models if they exceed a certain size.
+You can use MLflow autolog functionality. Autolog allows MLflow to instruct the framework in use to log all the metrics, parameters, artifacts, and models that the framework considers relevant. By default, if autolog is enabled, most models are logged. In some situations, some flavors might not log a model. For instance, the PySpark flavor doesn't log models that exceed a certain size.
-You can turn on autologging by using either `mlflow.autolog()` or `mlflow.<flavor>.autolog()`. The following example uses `autolog()` for logging a classifier model trained with XGBoost:
+Use either `mlflow.autolog()` or `mlflow.<flavor>.autolog()` to activate autologging. This example uses `autolog()` to log a classifier model trained with XGBoost:
```python import mlflow
accuracy = accuracy_score(y_test, y_pred)
``` > [!TIP]
-> If you are using Machine Learning pipelines, like for instance [Scikit-Learn pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), use the `autolog` functionality of that flavor for logging models. Models are automatically logged when the `fit()` method is called on the pipeline object. The notebook [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb) demonstrates how to log a model with preprocessing using pipelines.
+> If you use Machine Learning pipelines, for example [Scikit-Learn pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), use the `autolog` functionality of that pipeline flavor to log models. Model logging happens automatically when the `fit()` method is called on the pipeline object. The [Training and tracking an XGBoost classifier with MLflow notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb) demonstrates how to log a model with preprocessing, using pipelines.
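As an illustration of that tip, a minimal sketch of autologging a Scikit-Learn pipeline might look like the following. The pipeline, dataset, and hyperparameters here are placeholders, not part of the linked notebook:

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

mlflow.sklearn.autolog()  # flavor-specific autolog for scikit-learn

X, y = load_iris(return_X_y=True)
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("classifier", LogisticRegression(max_iter=200)),
])

with mlflow.start_run():
    # The fitted pipeline (preprocessing + model) is logged automatically
    pipeline.fit(X, y)
```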
## Logging models with a custom signature, environment or samples
-You can log models manually using the method `mlflow.<flavor>.log_model` in MLflow. Such workflow has the advantages of retaining control of different aspects of how the model is logged.
+The MLflow `mlflow.<flavor>.log_model` method can manually log models. This workflow can control different aspects of the model logging.
Use this method when: > [!div class="checklist"]
-> * You want to indicate pip packages or a conda environment different from the ones that are automatically detected.
-> * You want to include input examples.
-> * You want to include specific artifacts into the package that will be needed.
-> * Your signature is not correctly inferred by `autolog`. This is specifically important when you deal with inputs that are tensors where the signature needs specific shapes.
-> * Somehow the default behavior of autolog doesn't fill your purpose.
+> * You want to indicate pip packages or a conda environment that differ from those that are automatically detected
+> * You want to include input examples
+> * You want to include specific artifacts that will be needed in the package
+> * `autolog` does not correctly infer your signature. This matters when you deal with tensor inputs, where the signature needs specific shapes
+> * The autolog behavior does not cover your purpose for some reason
-The following example code logs a model for an XGBoost classifier:
+This code example logs a model for an XGBoost classifier:
```python import mlflow
mlflow.xgboost.log_model(model,
``` > [!NOTE]
-> * `log_models=False` is configured in `autolog`. This prevents MLflow to automatically log the model, as it is done manually later.
-> * `infer_signature` is a convenient method to try to infer the signature directly from inputs and outputs.
-> * `mlflow.utils.environment._mlflow_conda_env` is a private method in MLflow SDK and it may change in the future. This example uses it just for sake of simplicity, but it must be used with caution or generate the YAML definition manually as a Python dictionary.
+> * `autolog` is configured with `log_models=False`. This prevents MLflow from logging the model automatically; the model is logged manually later
+> * Use the `infer_signature` method to try to infer the signature directly from inputs and outputs
+> * The `mlflow.utils.environment._mlflow_conda_env` method is a private method in the MLflow SDK. In this example, it makes the code simpler, but use it with caution. It may change in the future. As an alternative, you can generate the YAML definition manually as a Python dictionary.
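For illustration, a hedged end-to-end sketch of manual logging with a custom signature, an input example, and a hand-written conda environment (avoiding the private helper) might look like this. The dataset and environment contents are placeholder assumptions:

```python
import mlflow
from mlflow.models import infer_signature
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

mlflow.autolog(log_models=False)  # keep autologged metrics/params; log the model manually

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = XGBClassifier().fit(X_train, y_train)
signature = infer_signature(X_test, model.predict(X_test))

# Conda environment written by hand as a plain dictionary,
# instead of the private _mlflow_conda_env helper
custom_env = {
    "name": "xgboost-env",
    "channels": ["conda-forge"],
    "dependencies": ["python=3.10", {"pip": ["mlflow", "xgboost", "scikit-learn"]}],
}

with mlflow.start_run():
    mlflow.xgboost.log_model(
        model,
        "classifier",
        conda_env=custom_env,
        signature=signature,
        input_example=X_test[:5],
    )
```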
## Logging models with a different behavior in the predict method
-When you log a model using either `mlflow.autolog` or using `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what gets returned by the model. MLflow doesn't enforce any specific behavior in how the `predict` generate results. There are scenarios where you probably want to do some pre-processing or post-processing before and after your model is executed.
+When logging a model with either `mlflow.autolog` or `mlflow.<flavor>.log_model`, the model flavor determines how to execute the inference, and what the model returns. MLflow doesn't enforce any specific behavior about the generation of `predict` results. In some scenarios, you might want to do some preprocessing or post-processing before and after your model executes.
-A solution to this scenario is to implement machine learning pipelines that moves from inputs to outputs directly. Although this is possible (and sometimes encourageable for performance considerations), it may be challenging to achieve. For those cases, you probably want to [customize how your model does inference using a custom models](#logging-custom-models) as explained in the following section.
+In this situation, you could implement machine learning pipelines that directly move from inputs to outputs. Although this implementation is possible, and sometimes encouraged to improve performance, it might be challenging to achieve. In those cases, it can help to [customize how your model handles inference](#logging-custom-models), as explained in the next section.
## Logging custom models
-MLflow provides support for a variety of [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors) including FastAI, MXNet Gluon, PyTorch, TensorFlow, XGBoost, CatBoost, h2o, Keras, LightGBM, MLeap, ONNX, Prophet, spaCy, Spark MLLib, Scikit-Learn, and statsmodels. However, there may be times where you need to change how a flavor works, log a model not natively supported by MLflow or even log a model that uses multiple elements from different frameworks. For those cases, you may need to create a custom model flavor.
+MLflow supports many [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors), including
-For this type of models, MLflow introduces a flavor called `pyfunc` (standing from Python function). Basically this flavor allows you to log any object you want as a model, as long as it satisfies two conditions:
+- CatBoost
+- FastAI
+- h2o
+- Keras
+- LightGBM
+- MLeap
+- MXNet Gluon
+- ONNX
+- Prophet
+- PyTorch
+- Scikit-Learn
+- spaCy
+- Spark MLLib
+- statsmodels
+- TensorFlow
+- XGBoost
-* You implement the method `predict` (at least).
-* The Python object inherits from `mlflow.pyfunc.PythonModel`.
+However, you might need to change the way a flavor works, log a model not natively supported by MLflow or even log a model that uses multiple elements from different frameworks. In these cases, you might need to create a custom model flavor.
+
+To solve the problem, MLflow introduces the `pyfunc` flavor (short for *Python function*). This flavor can log any object as a model, as long as that object satisfies two conditions:
+
+* You implement at least the `predict` method
+* The Python object inherits from `mlflow.pyfunc.PythonModel`
> [!TIP]
-> Serializable models that implements the Scikit-learn API can use the Scikit-learn flavor to log the model, regardless of whether the model was built with Scikit-learn. If your model can be persisted in Pickle format and the object has methods `predict()` and `predict_proba()` (at least), then you can use `mlflow.sklearn.log_model()` to log it inside a MLflow run.
+> Serializable models that implement the Scikit-learn API can use the Scikit-learn flavor to log the model, regardless of whether the model was built with Scikit-learn. If you can persist your model in Pickle format, and the object has the `predict()` and `predict_proba()` methods (at least), you can use `mlflow.sklearn.log_model()` to log the model inside a MLflow run.
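As a sketch of that tip, the following hypothetical class isn't built with Scikit-learn, but it's picklable and exposes `predict()` and `predict_proba()`, so per the guidance above it could be logged with the Scikit-learn flavor. The class itself is purely illustrative:

```python
import numpy as np
import mlflow

class ThresholdClassifier:
    """Toy model: predicts class 1 when the row mean exceeds a threshold."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def predict_proba(self, X):
        p = np.clip(np.asarray(X).mean(axis=1), 0.0, 1.0)
        return np.column_stack([1.0 - p, p])

    def predict(self, X):
        return (self.predict_proba(X)[:, 1] >= self.threshold).astype(int)

with mlflow.start_run():
    mlflow.sklearn.log_model(ThresholdClassifier(threshold=0.6), "classifier")
```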
# [Using a model wrapper](#tab/wrapper)
-The simplest way of creating your custom model's flavor is by creating a wrapper around your existing model object. MLflow will serialize it and package it for you. Python objects are serializable when the object can be stored in the file system as a file (generally in Pickle format). During runtime, the object can be materialized from such file and all the values, properties and methods available when it was saved will be restored.
+The simplest way to create a flavor for your custom model is to create a wrapper around your existing model object. MLflow serializes and packages it for you. Python objects are serializable when the object can be stored in the file system as a file, generally in Pickle format. At runtime, the object can materialize from that file, which restores all the values, properties, and methods available when it was saved.
Use this method when: > [!div class="checklist"]
-> * Your model can be serialized in Pickle format.
-> * You want to retain the models state as it was just after training.
-> * You want to customize the way the `predict` function works.
+> * You can serialize your model in Pickle format
+> * You want to retain the state of the model, as it was just after training
+> * You want to customize how the `predict` function works.
-The following sample wraps a model created with XGBoost to make it behaves in a different way to the default implementation of the XGBoost flavor (it returns the probabilities instead of the classes):
+This code sample wraps a model created with XGBoost, to make it behave differently from the XGBoost flavor's default implementation: it returns the probabilities instead of the classes:
```python from mlflow.pyfunc import PythonModel, PythonModelContext
class ModelWrapper(PythonModel):
pass ```
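For reference, a fuller sketch of such a wrapper might look like the following; the constructor argument is an assumption (the fitted XGBoost model is pickled together with the wrapper object):

```python
from mlflow.pyfunc import PythonModel, PythonModelContext

class ModelWrapper(PythonModel):
    """Wraps a fitted classifier and returns class probabilities from predict()."""

    def __init__(self, model):
        # The fitted model is stored on the wrapper and pickled along with it
        self._model = model

    def predict(self, context: PythonModelContext, data):
        # Return probabilities instead of the predicted classes
        return self._model.predict_proba(data)
```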
-Then, a custom model can be logged in the run like this:
+Log a custom model in the run:
```python import mlflow
mlflow.pyfunc.log_model("classifier",
``` > [!TIP]
-> Note how the `infer_signature` method now uses `y_probs` to infer the signature. Our target column has the target class, but our model now returns the two probabilities for each class.
-
+> Here, the `infer_signature` method uses `y_probs` to infer the signature. Our target column has the target class, but our model now returns the two probabilities for each class.
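As a hedged illustration of that tip, reusing the `model` and `X_test` objects assumed from the earlier training code, the signature could be inferred from probabilities like this:

```python
from mlflow.models import infer_signature

# y_probs has shape (n_samples, n_classes), so the inferred output schema
# describes probabilities rather than class labels
y_probs = model.predict_proba(X_test)
signature = infer_signature(X_test, y_probs)
```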
# [Using artifacts](#tab/artifacts)
-Wrapping your model may be simple, but sometimes your model is composed by multiple pieces that need to be loaded or it can't just be serialized as a Pickle file. In those cases, the `PythonModel` supports indicating an arbitrary list of **artifacts**. Each artifact will be packaged along with your model.
+Your model might be composed of multiple pieces that need to be loaded, and you might not have a way to serialize it as a Pickle file. In those cases, `PythonModel` lets you indicate an arbitrary list of **artifacts**. Each artifact is packaged along with your model.
-Use this method when:
+Use this technique when:
> [!div class="checklist"]
-> * Your model can't be serialized in Pickle format or there is a better format available for that.
-> * Your model have one or many artifacts that need to be referenced in order to load the model.
-> * You may want to persist some inference configuration properties (i.e. number of items to recommend).
-> * You want to customize the way the model is loaded and how the `predict` function works.
+> * You can't serialize your model in Pickle format, or you have a better serialization format available
+> * Your model has one or more artifacts that must be referenced to load the model
+> * You might want to persist some inference configuration properties - for example, the number of items to recommend
+> * You want to customize the way the model loads, and how the `predict` function works
-To log a custom model using artifacts, you can do something as follows:
+This code sample shows how to log a custom model, using artifacts:
```python encoder_path = 'encoder.pkl'
mlflow.pyfunc.log_model("classifier",
``` > [!NOTE]
-> * The model was saved using the save method of the framework used (it's not saved as a pickle).
-> * `ModelWrapper()` is the model wrapper, but the model is not passed as a parameter to the constructor.
-> A new parameter is indicated, `artifacts`, that is a dictionary with keys as the name of the artifact and values as the path is the local file system where the artifact is stored.
+> * The model is not saved as a pickle. Instead, the code saves the model with the save method of the framework that you used
+> * The model wrapper is `ModelWrapper()`, but the model is not passed as a parameter to the constructor
+> * A new dictionary parameter, `artifacts`, maps each artifact name (key) to the path in the local file system where that artifact is stored (value)
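A hedged sketch of such a call, assuming a trained XGBoost `model`, a fitted `encoder`, the `X_test`/`y_probs` values from earlier, and the `ModelWrapper` shown next, might be:

```python
import pickle
import mlflow
from mlflow.models import infer_signature

# Persist each piece in the format that suits it best
encoder_path = "encoder.pkl"
with open(encoder_path, "wb") as f:
    pickle.dump(encoder, f)

model_path = "xgb.model"
model.save_model(model_path)  # native XGBoost format, not pickle

mlflow.pyfunc.log_model(
    "classifier",
    python_model=ModelWrapper(),
    artifacts={"model": model_path, "encoder": encoder_path},
    signature=infer_signature(X_test, y_probs),
)
```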
-The corresponding model wrapper then would look as follows:
+The corresponding model wrapper then would look like this:
```python from mlflow.pyfunc import PythonModel, PythonModelContext
class ModelWrapper(PythonModel):
return self._model.predict_proba(data) ```
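A fuller sketch of such a wrapper, assuming the artifact keys `model` and `encoder` used above, might look like this:

```python
import pickle
from mlflow.pyfunc import PythonModel, PythonModelContext
from xgboost import XGBClassifier

class ModelWrapper(PythonModel):
    def load_context(self, context: PythonModelContext):
        # Paths come from the `artifacts` dictionary passed to log_model
        self._model = XGBClassifier()
        self._model.load_model(context.artifacts["model"])
        with open(context.artifacts["encoder"], "rb") as f:
            self._encoder = pickle.load(f)

    def predict(self, context: PythonModelContext, data):
        # Apply preprocessing, then return probabilities
        return self._model.predict_proba(self._encoder.transform(data))
```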
-The complete training routine would look as follows:
+The complete training routine would look like this:
```python import mlflow
mlflow.pyfunc.log_model("classifier",
# [Using a model loader](#tab/loader)
-Sometimes your model logic is complex and there are several source files that your model loads on inference time. This would be the case when you have a Python library for your model for instance. In this scenario, you want to package the library all along with your model so it can move as a single piece.
+A model might have complex logic, or it might load several source files at inference time. This happens if you have a Python library for your model, for example. In this scenario, you should package the library along with your model, so it can move as a single piece.
-Use this method when:
+Use this technique when:
> [!div class="checklist"]
-> * Your model can't be serialized in Pickle format or there is a better format available for that.
-> * Your model artifacts can be stored in a folder where all the requiered artifacts are placed.
-> * Your model source code is complex and it requires multiple Python files. Potentially, there is a library that supports your model.
-> * You want to customize the way the model is loaded and how the `predict` function works.
+> * You can't serialize your model in Pickle format, or you have a better serialization format available
+> * You can place all the artifacts that your model requires in a single folder
+> * Your model source code is complex, and it requires multiple Python files. Potentially, a library supports your model
+> * You want to customize the way the model loads, and how the `predict` function operates
-MLflow supports this kind of models too by allowing you to specify any arbitrary source code to package along with the model as long as it has a *loader module*. Loader modules can be specified in the `log_model()` instruction using the argument `loader_module` which indicates the Python namespace where the loader is implemented. The argument `code_path` is also required, where you indicate the source files where the `loader_module` is defined. You are required to implement in this namespace a function called `_load_pyfunc(data_path: str)` that received the path of the artifacts and returns an object with a method predict (at least).
+MLflow supports these models. With MLflow, you can specify any arbitrary source code to package along with the model, as long as it has a *loader module*. You can specify loader modules in the `log_model()` instruction with the `loader_module` argument, which indicates the Python namespace that implements the loader. The `code_path` argument is also required, to indicate the source files that define the `loader_module`. In this namespace, you must implement a `_load_pyfunc(data_path: str)` function that receives the path of the artifacts and returns an object with at least a `predict` method.
```python model_path = 'xgb.model'
mlflow.pyfunc.log_model("classifier",
``` > [!NOTE]
-> * The model was saved using the save method of the framework used (it's not saved as a pickle).
-> * A new parameter, `data_path`, was added pointing to the folder where the model's artifacts are located. This can be a folder or a file. Whatever is on that folder or file, it will be packaged with the model.
-> * A new parameter, `code_path`, was added pointing to the location where the source code is placed. This can be a path or a single file. Whatever is on that folder or file, it will be packaged with the model.
-> * `loader_module` is the Python module where the function `_load_pyfunc` is defined.
+> * The model is not saved as a pickle. Instead, the code saves the model with the save method of the framework that you used
+> * A new parameter - `data_path` - points to the folder or file that holds the model artifacts. Whatever is at that location is packaged with the model
+> * A new parameter - `code_path` - points to the source code location. This can be a folder or a single file, and it's packaged with the model
+> * The `_load_pyfunc` function is defined in the `loader_module` Python module
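A hedged sketch of that call, assuming a trained XGBoost `model` and the `src/loader_module.py` file described next:

```python
import mlflow

model_path = "xgb.model"
model.save_model(model_path)  # native XGBoost format

mlflow.pyfunc.log_model(
    "classifier",
    data_path=model_path,                # file (or folder) packaged with the model
    code_path=["src/loader_module.py"],  # source files that define the loader module
    loader_module="loader_module",       # namespace implementing _load_pyfunc
)
```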
-The folder `src` contains a file called `loader_module.py` (which is the loader module):
+The `src` folder contains the `loader_module.py` file. That file is the loader module:
__src/loader_module.py__
def _load_pyfunc(data_path: str):
``` > [!NOTE]
-> * The class `MyModel` doesn't inherits from `PythonModel` as we did before, but it has a `predict` function.
-> * The model's source code is on a file. This can be any source code you want. If your project has a folder src, it is a great candidate.
-> * We added a function `_load_pyfunc` which returns an instance of the model's class.
+> * The `MyModel` class doesn't inherit from `PythonModel` as shown earlier. However, it has a `predict` function
+> * The model source code is in a file. Any source code will work. A **src** folder is ideal for this
+> * A `_load_pyfunc` function returns an instance of the class of the model
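A hedged sketch of such a loader module, assuming the model was saved in the native XGBoost format as shown above, might be:

```python
# src/loader_module.py
from xgboost import XGBClassifier

class MyModel:
    """Minimal model class: not a PythonModel, but it exposes predict()."""

    def __init__(self, model: XGBClassifier):
        self._model = model

    def predict(self, data):
        return self._model.predict_proba(data)

def _load_pyfunc(data_path: str):
    # MLflow calls this with the path of the packaged artifacts
    model = XGBClassifier()
    model.load_model(data_path)
    return MyModel(model)
```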
-The complete training code would look as follows:
+The complete training code looks like this:
```python import mlflow
mlflow.pyfunc.log_model("classifier",
## Next steps
-* [Deploy MLflow models](how-to-deploy-mlflow-models.md)
+* [Deploy MLflow models](how-to-deploy-mlflow-models.md)
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
If you plan to use __HuggingFace models__ with Azure Machine Learning, add outbo
* `cdn.auth0.com` * `cdn-lfs.huggingface.co`
+### Scenario: Enable access from selected IP addresses
+
+If you want to enable access from specific IP addresses, use the following actions:
+
+1. Add an outbound _private endpoint_ rule to allow traffic to the Azure Machine Learning workspace. This allows compute instances created in the managed virtual network to access the workspace.
+
+ > [!TIP]
+ > You can't add this rule during workspace creation, as the workspace doesn't exist yet.
+
+1. Enable public network access to the workspace. For more information, see [public network access enabled](how-to-configure-private-link.md#enable-public-access).
+1. Add your IP addresses to the firewall for Azure Machine Learning. For more information, see [enable access only from IP ranges](how-to-configure-private-link.md#enable-public-access-only-from-internet-ip-ranges-preview).
+ ## Private endpoints Private endpoints are currently supported for the following Azure
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md
For more information, see [Create Azure Monitor alert rules](../azure-monitor/al
There are three logs that can be enabled for online endpoints:
-* **AMLOnlineEndpointTrafficLog**: You could choose to enable traffic logs if you want to check the information of your request. Below are some cases:
+* **AmlOnlineEndpointTrafficLog**: You can choose to enable traffic logs if you want to check information about your requests. Here are some cases:
* If the response isn't 200, check the value of the column "ResponseCodeReason" to see what happened. Also check the reason in the "HTTPS status codes" section of the [Troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md#http-status-codes) article.
There are three logs that can be enabled for online endpoints:
* If you want to check how many requests or failed requests occurred recently, you can also enable the logs.
-* **AMLOnlineEndpointConsoleLog**: Contains logs that the containers output to the console. Below are some cases:
+* **AmlOnlineEndpointConsoleLog**: Contains logs that the containers output to the console. Below are some cases:
* If the container fails to start, the console log can be useful for debugging. * Monitor container behavior and make sure that all requests are correctly handled.
- * Write request IDs in the console log. Joining the request ID, the AMLOnlineEndpointConsoleLog, and AMLOnlineEndpointTrafficLog in the Log Analytics workspace, you can trace a request from the network entry point of an online endpoint to the container.
+ * Write request IDs in the console log. Joining the request ID, the AmlOnlineEndpointConsoleLog, and AmlOnlineEndpointTrafficLog in the Log Analytics workspace, you can trace a request from the network entry point of an online endpoint to the container.
* You can also use this log for performance analysis in determining the time required by the model to process each request.
-* **AMLOnlineEndpointEventLog**: Contains event information regarding the container's life cycle. Currently, we provide information on the following types of events:
+* **AmlOnlineEndpointEventLog**: Contains event information regarding the container's life cycle. Currently, we provide information on the following types of events:
| Name | Message | | -- | -- |
You can find example queries on the __Queries__ tab while viewing logs. Search f
The following tables provide details on the data stored in each log:
-**AMLOnlineEndpointTrafficLog**
+**AmlOnlineEndpointTrafficLog**
[!INCLUDE [endpoint-monitor-traffic-reference](includes/endpoint-monitor-traffic-reference.md)]
-**AMLOnlineEndpointConsoleLog**
+**AmlOnlineEndpointConsoleLog**
[!INCLUDE [endpoint-monitor-console-reference](includes/endpoint-monitor-console-reference.md)]
-**AMLOnlineEndpointEventLog**
+**AmlOnlineEndpointEventLog**
[!INCLUDE [endpoint-monitor-event-reference](includes/endpoint-monitor-event-reference.md)]
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
workspace = Workspace(
) )
-workspace = ml_client.workspaces.begin_create_or_update(workspace)
+workspace = ml_client.workspaces.begin_update(workspace)
``` # [Studio](#tab/azure-studio)
workspace = Workspace(
) )
-workspace = ml_client.workspaces.begin_create_or_update(workspace)
+workspace = ml_client.workspaces.begin_update(workspace)
``` # [Studio](#tab/azure-studio)
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
In the [customer-managed keys concepts article](concept-customer-managed-keys.md
## Limitations
-* The customer-managed key for resources the workspace depends on can't be updated after workspace creation.
+* After workspace creation, the customer-managed encryption key for resources that the workspace depends on can only be updated to another key in the original Azure Key Vault resource.
* Resources managed by Microsoft in your subscription can't transfer ownership to you. * You can't delete Microsoft-managed resources used for customer-managed keys without also deleting your workspace. * The key vault that contains your customer-managed key must be in the same Azure subscription as the Azure Machine Learning workspace.
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
Last updated 01/29/2024-+ # Distributed GPU training guide (SDK v2)
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
Title: Track ML experiments and models with MLflow
-description: Use MLflow to log metrics and artifacts from machine learning runs
+description: Use MLflow to log metrics and artifacts from machine learning runs.
Previously updated : 11/04/2022 Last updated : 02/15/2024 ms.devlang: azurecli
ms.devlang: azurecli
# Track ML experiments and models with MLflow
-__Tracking__ refers to process of saving all experiment's related information that you may find relevant for every experiment you run. Such metadata varies based on your project, but it may include:
+In this article, you learn how to use MLflow for tracking your experiments and runs in Azure Machine Learning workspaces.
-> [!div class="checklist"]
-> - Code
-> - Environment details (OS version, Python packages)
-> - Input data
-> - Parameter configurations
-> - Models
-> - Evaluation metrics
-> - Evaluation visualizations (confusion matrix, importance plots)
-> - Evaluation results (including some evaluation predictions)
+_Tracking_ is the process of saving relevant information about experiments that you run. The saved information (metadata) varies based on your project, and it can include:
-Some of these elements are automatically tracked by Azure Machine Learning when working with jobs (including code, environment, and input and output data). However, others like models, parameters, and metrics, need to be instrumented by the model builder as it's specific to the particular scenario.
+- Code
+- Environment details (such as OS version, Python packages)
+- Input data
+- Parameter configurations
+- Models
+- Evaluation metrics
+- Evaluation visualizations (such as confusion matrices, importance plots)
+- Evaluation results (including some evaluation predictions)
-In this article, you'll learn how to use MLflow for tracking your experiments and runs in Azure Machine Learning workspaces.
+When you're working with jobs in Azure Machine Learning, Azure Machine Learning automatically tracks some information about your experiments, such as code, environment, and input and output data. However, for others like models, parameters, and metrics, the model builder needs to configure their tracking, as they're specific to the particular scenario.
> [!NOTE]
-> If you want to track experiments running on Azure Databricks or Azure Synapse Analytics, see the dedicated articles [Track Azure Databricks ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-databricks.md) or [Track Azure Synapse Analytics ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-synapse.md).
+> If you want to track experiments that are running on Azure Databricks, see [Track Azure Databricks ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-databricks.md). To learn about tracking experiments that are running on Azure Synapse Analytics, see [Track Azure Synapse Analytics ML experiments with MLflow and Azure Machine Learning](how-to-use-mlflow-azure-synapse.md).
## Benefits of tracking experiments
-We highly encourage machine learning practitioners to instrument their experimentation by tracking them, regardless if they're training with jobs in Azure Machine Learning or interactively in notebooks. Benefits include:
+We strongly recommend that machine learning practitioners track experiments, whether you're training with jobs in Azure Machine Learning or training interactively in notebooks. Experiment tracking allows you to:
-- All of your ML experiments are organized in a single place, allowing you to search and filter experiments to find the information and drill down to see what exactly it was that you tried before.
+- Organize all of your machine learning experiments in a single place. You can then search and filter experiments and drill down to see details about the experiments you ran before.
- Compare experiments, analyze results, and debug model training with little extra work.-- Reproduce or re-run experiments to validate results.-- Improve collaboration by seeing what everyone is doing, sharing experiment results, and access experiment data programmatically.
+- Reproduce or rerun experiments to validate results.
+- Improve collaboration, since you can see what other teammates are doing, share experiment results, and access experiment data programmatically.
-### Why MLflow
+## Why use MLflow for tracking experiments?
-Azure Machine Learning workspaces are MLflow-compatible, which means you can use MLflow to track runs, metrics, parameters, and artifacts with your Azure Machine Learning workspaces. By using MLflow for tracking, you don't need to change your training routines to work with Azure Machine Learning or inject any cloud-specific syntax, which is one of the main advantages of the approach.
+Azure Machine Learning workspaces are MLflow-compatible, which means you can use MLflow to track runs, metrics, parameters, and artifacts within your Azure Machine Learning workspaces. A major advantage of using MLflow for tracking is that you don't need to change your training routines to work with Azure Machine Learning or inject any cloud-specific syntax.
-See [MLflow and Azure Machine Learning](concept-mlflow.md) for all supported MLflow and Azure Machine Learning functionality including MLflow Project support (preview) and model deployment.
+For more information about all supported MLflow and Azure Machine Learning functionalities, see [MLflow and Azure Machine Learning](concept-mlflow.md).
+
+## Limitations
+
+Some methods available in the MLflow API might not be available when connected to Azure Machine Learning. For details about supported and unsupported operations, see [Support matrix for querying runs and experiments](how-to-track-experiments-mlflow.md#support-matrix-for-querying-runs-and-experiments).
## Prerequisites
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+ [!INCLUDE [mlflow-prereqs](includes/machine-learning-mlflow-prereqs.md)]
-## Configuring the experiment
+## Configure the experiment
-MLflow organizes the information in experiments and runs (in Azure Machine Learning, runs are called __Jobs__). By default, runs are logged to an experiment named __Default__ that is automatically created for you. You can configure the experiment where tracking is happening.
+MLflow organizes information in experiments and runs (_runs_ are called _jobs_ in Azure Machine Learning). By default, runs are logged to an experiment named __Default__ that is automatically created for you. You can configure the experiment where tracking is happening.
# [Working interactively](#tab/interactive)
-When training interactively, such as in a Jupyter Notebook, use MLflow command `mlflow.set_experiment()`. For example, the following code snippet demonstrates configuring the experiment, and then logging during a job:
+For interactive training, such as in a Jupyter notebook, use the MLflow command `mlflow.set_experiment()`. For example, the following code snippet configures an experiment:
```python experiment_name = 'hello-world-example'
mlflow.set_experiment(experiment_name)
# [Working with jobs](#tab/jobs)
-When submitting jobs using Azure Machine Learning CLI or SDK, you can set the experiment name using the property `experiment_name` of the job. You don't have to configure it on your training script.
+To submit jobs, when using Azure Machine Learning CLI or SDK, set the experiment name by using the `experiment_name` property of the job. You don't have to configure it in your training script.
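For example, a hedged Python SDK (v2) sketch, with placeholder environment and compute names, might set the experiment name like this:

```python
from azure.ai.ml import command

# Placeholder values; replace with your own script, environment, and compute
command_job = command(
    code="./src",
    command="python hello_world.py",
    environment="<environment-name>@latest",
    compute="<compute-name>",
    experiment_name="hello-world-example",  # experiment used for MLflow tracking
)
```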
Azure Machine Learning tracks any training job in what MLflow calls a run. Use r
# [Working interactively](#tab/interactive)
-When working interactively, MLflow starts tracking your training routine as soon as you try to log information that requires an active run. For instance, when you log a metric, log a parameter, or when you start a training cycle when Mlflow's autologging functionality is enabled. However, it's usually helpful to start the run explicitly, specially if you want to capture the total time of your experiment in the field __Duration__. To start the run explicitly, use `mlflow.start_run()`.
+When you're working interactively, MLflow starts tracking your training routine as soon as you try to log information that requires an active run. For instance, MLflow tracking starts when you log a metric or a parameter, or when you start a training cycle while MLflow's autologging functionality is enabled. However, it's usually helpful to start the run explicitly, especially if you want to capture the total time for your experiment in the __Duration__ field. To start the run explicitly, use `mlflow.start_run()`.
-Regardless if you started the run manually or not, you'll eventually need to stop the run to inform MLflow that your experiment run has finished and marks its status as __Completed__. To do that, all `mlflow.end_run()`. We strongly recommend starting runs manually so you don't forget to end them when working on notebooks.
+Whether you start the run manually or not, you eventually need to stop the run, so that MLflow knows that your experiment run is done and can mark the run's status as __Completed__. To stop a run, use `mlflow.end_run()`.
-```python
-mlflow.start_run()
+We strongly recommend starting runs manually, so that you don't forget to end them when you're working in notebooks.
-# Your code
+- To start a run manually and end it when you're done working in the notebook:
-mlflow.end_run()
-```
+ ```python
+ mlflow.start_run()
+
+ # Your code
+
+ mlflow.end_run()
+ ```
-To help you avoid forgetting to end the run, it's usually helpful to use the context manager paradigm:
+- It's usually helpful to use the context manager paradigm to help you remember to end the run:
-```python
-with mlflow.start_run() as run:
- # Your code
-```
+ ```python
+ with mlflow.start_run() as run:
+ # Your code
+ ```
-When you start a new run with `mlflow.start_run()`, it may be interesting to indicate the parameter `run_name` which will then translate to the name of the run in Azure Machine Learning user interface and help you identify the run quicker:
+- When you start a new run with `mlflow.start_run()`, it can be useful to specify the `run_name` parameter, which later translates to the name of the run in the Azure Machine Learning user interface and helps you identify the run more quickly:
-```python
-with mlflow.start_run(run_name="hello-world-example") as run:
- # Your code
-```
+ ```python
+ with mlflow.start_run(run_name="hello-world-example") as run:
+ # Your code
+ ```
# [Working with jobs](#tab/jobs)
-Azure Machine Learning jobs allow you to submit long running training or inference routines as isolated and reproducible executions.
+Azure Machine Learning jobs allow you to submit long-running training or inference routines as isolated and reproducible executions.
-### Creating a training routine
+### Create a training routine
-When working with jobs, you typically place all your training logic inside of a folder, for instance `src`. Place all the files you need in that folder. Particularly, one of them will be a Python file with your training code entry point. The following example shows a `hello_world.py` example:
+When working with jobs, you typically place all your training logic as files inside a folder, for instance `src`. One of these files is a Python file with your training code entry point. The following `hello_world.py` file shows an example:
:::code language="python" source="~/azureml-examples-main/cli/jobs/basics/src/hello-mlflow.py" highlight="9-10,12":::
-The previous code example doesn't uses `mlflow.start_run()` but if used you can expect MLflow to reuse the current active run so there's no need to remove those lines if migrating to Azure Machine Learning.
+The previous code example doesn't use `mlflow.start_run()`, but if it's used, MLflow reuses the current active run. Therefore, you don't need to remove the line that uses `mlflow.start_run()` if you're migrating code to Azure Machine Learning.
-### Adding tracking to your routine
+### Add tracking to your routine
-Use MLflow SDK to track any metric, parameter, artifacts, or models. For detailed examples about how to log each, see [Log metrics, parameters and files with MLflow](how-to-log-view-metrics.md).
+Use the MLflow SDK to track any metric, parameter, artifacts, or models. For examples about how to log these, see [Log metrics, parameters, and files with MLflow](how-to-log-view-metrics.md).
### Ensure your job's environment has MLflow installed
-All Azure Machine Learning environments already have MLflow installed for you, so no action is required if you're using a curated environment. If you want to use a custom environment:
+All Azure Machine Learning environments already have MLflow installed for you, so no action is required if you're using a curated environment. However, if you want to use a custom environment:
1. Create a `conda.yaml` file with the dependencies you need:
All Azure Machine Learning environments already have MLflow installed for you, s
1. Reference the environment in the job you're using.
-### Configuring job's name
+### Configure your job's name
-Use the parameter `display_name` of Azure Machine Learning jobs to configure the name of the run. The following example shows how:
+Use the Azure Machine Learning jobs parameter `display_name` to configure the name of the run.
1. Use the `display_name` property to configure the job.
Use the parameter `display_name` of Azure Machine Learning jobs to configure the
To submit the job, create a YAML file with your job definition in a `job.yml` file. This file should be created outside the `src` directory.
- :::code language="yaml" source="~/azureml-examples-main/cli/jobs/basics/hello-world-org.yml" highlight="8" range="1-9":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/jobs/basics/hello-world-org.yml" highlight="7" range="1-9":::
# [Python SDK](#tab/python)
Use the parameter `display_name` of Azure Machine Learning jobs to configure the
) ```
-2. Ensure you're not using `mlflow.start_run(run_name="")` inside of your training routine.
+2. Ensure you're not using `mlflow.start_run(run_name="")` inside your training routine.
-### Submitting the job
+### Submit the job
-1. First, let's connect to Azure Machine Learning workspace where we're going to work on.
+1. First, connect to the Azure Machine Learning workspace where you'll work.
# [Azure CLI](#tab/cli)
Use the parameter `display_name` of Azure Machine Learning jobs to configure the
# [Python SDK](#tab/python)
- The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
+ The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, you connect to the workspace where you'll perform deployment tasks.
1. Import the required libraries:
- ```python
- from azure.ai.ml import MLClient
- from azure.identity import DefaultAzureCredential
- ```
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+ ```
2. Configure workspace details and get a handle to the workspace:
- ```python
- subscription_id = "<subscription>"
- resource_group = "<resource-group>"
- workspace = "<workspace>"
-
- ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
- ```
+ ```python
+ subscription_id = "<subscription>"
+ resource_group = "<resource-group>"
+ workspace = "<workspace>"
+
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace)
+ ```
1. Submit the job # [Azure CLI](#tab/cli)
- Use the Azure Machine Learning CLI [to submit your job](how-to-train-model.md). Jobs using MLflow and running on Azure Machine Learning will automatically log any tracking information to the workspace. Open your terminal and use the following to submit the job.
+ Use the Azure Machine Learning CLI [to submit your job](how-to-train-model.md). Jobs that use MLflow and run on Azure Machine Learning automatically log any tracking information to the workspace. Open your terminal and use the following code to submit the job.
```azurecli az ml job create -f job.yml --web
Use the parameter `display_name` of Azure Machine Learning jobs to configure the
# [Python SDK](#tab/python)
- Use the Python SDK [to submit your job](how-to-train-model.md). Jobs using MLflow and running on Azure Machine Learning will automatically log any tracking information to the workspace.
+ Use the Python SDK [to submit your job](how-to-train-model.md). Jobs that use MLflow and run on Azure Machine Learning automatically log any tracking information to the workspace.
```python returned_job = ml_client.jobs.create_or_update(command_job) returned_job.studio_url ```
-1. You can monitor the job process in Azure Machine Learning studio.
+1. Monitor the job progress in Azure Machine Learning studio.
-## Autologging
+## Enable MLflow autologging
-You can [log metrics, parameters and files with MLflow](how-to-log-view-metrics.md) manually. However, you can also rely on MLflow automatic logging capability. Each machine learning framework supported by MLflow decides what to track automatically for you.
+You can [log metrics, parameters, and files with MLflow](how-to-log-view-metrics.md) manually. However, you can also rely on MLflow's automatic logging capability. Each machine learning framework supported by MLflow decides what to track automatically for you.
-To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging) insert the following code before your training code:
+To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging), insert the following code before your training code:
```python mlflow.autolog()
mlflow.autolog()
## View metrics and artifacts in your workspace
-The metrics and artifacts from MLflow logging are tracked in your workspace. To view them anytime, navigate to your workspace and find the experiment by name in your workspace in [Azure Machine Learning studio](https://ml.azure.com).
+The metrics and artifacts from MLflow logging are tracked in your workspace. You can view them in the studio anytime, or access them programmatically via the MLflow SDK.
+To view metrics and artifacts in the studio:
-Select the logged metrics to render charts on the right side. You can customize the charts by applying smoothing, changing the color, or plotting multiple metrics on a single graph. You can also resize and rearrange the layout as you wish. Once you've created your desired view, you can save it for future use and share it with your teammates using a direct link.
+1. Go to [Azure Machine Learning studio](https://ml.azure.com).
+1. Navigate to your workspace.
+1. Find the experiment by name in your workspace.
+1. Select the logged metrics to render charts on the right side. You can customize the charts by applying smoothing, changing the color, or plotting multiple metrics on a single graph. You can also resize and rearrange the layout as you wish.
+1. Once you've created your desired view, save it for future use and share it with your teammates, using a direct link.
-You can also access or __query metrics, parameters and artifacts programatically__ using the MLflow SDK. Use [mlflow.get_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.get_run) as explained bellow:
+ :::image type="content" source="media/how-to-log-view-metrics/metrics.png" alt-text="Screenshot of the metrics view." lightbox="media/how-to-log-view-metrics/metrics.png":::
+
+To __access or query__ metrics, parameters, and artifacts programmatically via the MLflow SDK, use [mlflow.get_run()](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.get_run).
```python import mlflow
print(metrics, params, tags)
``` > [!TIP]
-> For metrics, the previous example will only return the last value of a given metric. If you want to retrieve all the values of a given metric, use `mlflow.get_metric_history` method as explained at [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run).
+> For metrics, the previous example code will only return the last value of a given metric. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. For more information on retrieving values of a metric, see [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run).
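For illustration, a hedged sketch that retrieves a metric's full history with the MLflow client, assuming a metric named `accuracy` was logged several times during the run:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
# Returns every logged value of the metric, not just the last one
for measurement in client.get_metric_history(run_id="<RUN_ID>", key="accuracy"):
    print(measurement.step, measurement.value)
```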
-To download artifacts you've logged, like files and models, you can use [mlflow.artifacts.download_artifacts()](https://www.mlflow.org/docs/latest/python_api/mlflow.artifacts.html#mlflow.artifacts.download_artifacts)
+To __download__ artifacts you've logged, such as files and models, use [mlflow.artifacts.download_artifacts()](https://www.mlflow.org/docs/latest/python_api/mlflow.artifacts.html#mlflow.artifacts.download_artifacts).
```python mlflow.artifacts.download_artifacts(run_id="<RUN_ID>", artifact_path="helloworld.txt") ```
-For more details about how to __retrieve or compare__ information from experiments and runs in Azure Machine Learning using MLflow view [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md)
-
-## Example notebooks
-
-If you're looking for examples about how to use MLflow in Jupyter notebooks, please see our example's repository [Using MLflow (Jupyter Notebooks)](https://github.com/Azure/azureml-examples/tree/main/sdk/python/using-mlflow).
-
-## Limitations
+For more information about how to __retrieve or compare__ information from experiments and runs in Azure Machine Learning, using MLflow, see [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
-Some methods available in the MLflow API may not be available when connected to Azure Machine Learning. For details about supported and unsupported operations please read [Support matrix for querying runs and experiments](how-to-track-experiments-mlflow.md#support-matrix-for-querying-runs-and-experiments).
+## Related content
-## Next steps
+* [Deploy MLflow models](how-to-deploy-mlflow-models.md)
+* [Manage models with MLflow](how-to-manage-models-mlflow.md)
+* [Using MLflow (Jupyter Notebooks)](https://github.com/Azure/azureml-examples/tree/main/sdk/python/using-mlflow)
-* [Deploy MLflow models](how-to-deploy-mlflow-models.md).
-* [Manage models with MLflow](how-to-manage-models-mlflow.md).
mariadb Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-connectivity-architecture.md
The following table lists the gateway IP addresses of the Azure Database for Mar
* **Region Name:** This column lists the name of the Azure region where Azure Database for MariaDB is offered. * **Gateway IP address subnets:** This column lists the IP address subnets of the gateway rings located in the particular region. As we retire older gateway hardware, we recommend that you open the client-side firewall to allow outbound traffic for the IP address subnets in the region you're operating in.
+* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is being decommissioned right now. If you're provisioning a new server, you can ignore these IP addresses. If you have an existing server, continue to retain the outbound firewall rule for these IP addresses, as we haven't decommissioned them yet. If you drop the firewall rules for these IP addresses, you might get connectivity errors. Instead, proactively add the new IP addresses listed in the Gateway IP address subnets column to the outbound firewall rule as soon as you receive the notification for decommissioning. This ensures that when your server is migrated to the latest gateway hardware, there are no interruptions in connectivity to your server.
+* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings that are decommissioned and no longer in operation. You can safely remove these IP addresses from your outbound firewall rule.
++
+| **Region name** | **Gateway IP address subnets** | **Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** |
+|:--|:--|:|:|
+| Australia Central | 20.36.105.32/29 | | |
+| Australia Central 2 | 20.36.113.32/29 | | |
+| Australia East | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 | 13.75.149.87 | |
+| Australia South East | 13.77.49.32/29 | 13.73.109.251 | |
+| Brazil South | 191.233.200.32/29, 191.234.144.32/29 | | 104.41.11.5 |
+| Canada Central | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29 | | |
+| Canada East | 40.69.105.32/29 | 40.86.226.166 | |
+| Central US | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29 | 13.67.215.62 | |
+| China East | 52.130.112.136/29 | | |
+| China East 2 | 52.130.120.88/29 | | |
+| China East 3 | 52.130.128.88/29 | | |
+| China North | 52.130.128.88/29 | | |
+| China North 2 | 52.130.40.64/29 | | |
+| China North 3 | 13.75.32.192/29, 13.75.33.192/29 | | |
+| East Asia | 13.75.32.192/29, 13.75.33.192/29 | | |
+| East US | 20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29 | 40.121.158.30 | 191.238.6.43 |
+| East US 2 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29 | 52.177.185.181 | |
+| France Central | 40.79.136.32/29, 40.79.144.32/29 | | |
+| France South | 40.79.176.40/29, 40.79.177.32/29 | | |
+| Germany West Central | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29 | | |
+| India Central | 104.211.86.32/29, 20.192.96.32/29 | | |
+| India South | 40.78.192.32/29, 40.78.193.32/29 | | |
+| India West | 104.211.144.32/29, 104.211.145.32/29 | 104.211.160.80 | |
+| Japan East | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 | 13.78.61.196 | |
+| Japan West | 40.74.96.32/29 | 104.214.148.156 | |
+| Korea Central | 20.194.64.32/29, 20.44.24.32/29, 52.231.16.32/29 | 52.231.32.42 | |
+| Korea South | 52.231.145.0/29 | 52.231.200.86 | |
+| North Central US | 52.162.105.192/29 | 23.96.178.199 | |
+| North Europe | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 | 40.113.93.91 | 191.235.193.75 |
+| South Africa North | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29 | | |
+| South Africa West | 102.133.25.32/29 | | |
+| South Central US | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29 | 13.66.62.124 | 23.98.162.75 |
+| South East Asia | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29 | 104.43.15.0 | |
+| Switzerland North | 51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27 | | |
+| Switzerland West | 51.107.153.32/29 | | |
+| UAE Central | 20.37.72.96/29, 20.37.73.96/29 | | |
+| UAE North | 40.120.72.32/29, 65.52.248.32/29 | | |
+| UK South | 51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29 | | |
+| UK West | 51.140.208.96/29, 51.140.209.32/29 | | |
+| West Central US | 13.71.193.32/29 | 13.78.145.25 | |
+| West Europe | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29 | 40.68.37.158 | 191.237.232.75 |
+| West US | 13.86.217.224/29 | 104.42.238.205 | 23.99.34.75 |
+| West US 2 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29 | | |
+| West US 3 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 | | |
+
-| **Region name** | **Gateway IP address subnets** |
-|:-|:|
-| Australia Central | 20.36.105.32/29 |
-| Australia Central 2 | 20.36.113.32/29 |
-| Australia East | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 |
-| Australia South East |13.77.49.32/29 |
-| Brazil South | 191.233.200.32/29, 191.234.144.32/29|
-| Canada Central | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29|
-| Canada East | 40.69.105.32/29 |
-| Central US | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29
-| China East | 52.130.112.136/29|
-| China East 2 | 52.130.120.88/29|
-| China East 3 | 52.130.128.88/29|
-| China North | 52.130.128.88/29 |
-| China North 2 | 52.130.40.64/29|
-| China North 3 | 13.75.32.192/29, 13.75.33.192/29 |
-| East Asia | 13.75.32.192/29, 13.75.33.192/29|
-| East US |20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29|
-| East US 2 |104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29|
-| France Central | 40.79.136.32/29, 40.79.144.32/29 |
-| France South | 40.79.176.40/29, 40.79.177.32/29|
-| Germany West Central | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29|
-| India Central | 104.211.86.32/29, 20.192.96.32/29|
-| India South | 40.78.192.32/29, 40.78.193.32/29|
-| India West | 104.211.144.32/29, 104.211.145.32/29 |
-| Japan East | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 |
-| Japan West | 40.74.96.32/29 |
-| Korea Central | 20.194.64.32/29,20.44.24.32/29, 52.231.16.32/29 |
-| Korea South | 52.231.145.0/29 |
-| North Central US | 52.162.105.192/29|
-| North Europe |13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 |
-| South Africa North | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29 |
-| South Africa West | 102.133.25.32/29|
-| South Central US |20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29|
-| South East Asia | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29 |
-| Switzerland North |51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27|
-| Switzerland West | 51.107.153.32/29|
-| UAE Central | 20.37.72.96/29, 20.37.73.96/29 |
-| UAE North | 40.120.72.32/29, 65.52.248.32/29 |
-| UK South |51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29|
-| UK West | 51.140.208.96/29, 51.140.209.32/29 |
-| West Central US | 13.71.193.32/29 |
-| West Europe | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29|
-| West US |13.86.217.224/29|
-| West US 2 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29|
-| West US 3 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 |
## Connection redirection
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
Title: Support for Hyper-V assessment in Azure Migrate
-description: Learn about support for Hyper-V assessment with Azure Migrate Discovery and assessment
+ Title: Support for Hyper-V assessment in Azure Migrate and Modernize
+description: 'Learn about support for Hyper-V assessment with Azure Migrate: Discovery and assessment.'
ms.
ms.custom: engagement-fy24
# Support matrix for Hyper-V assessment
> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+> This article references CentOS, a Linux distribution that's nearing end-of-life status. Please consider your use and plan accordingly.
-This article summarizes prerequisites and support requirements when you discover and assess on-premises servers running in a Hyper-V environment for migration to Azure, using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool. If you want to migrate servers running on Hyper-V to Azure, review the [migration support matrix](migrate-support-matrix-hyper-v-migration.md).
+This article summarizes prerequisites and support requirements when you discover and assess on-premises servers running in a Hyper-V environment for migration to Azure by using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool. If you want to migrate servers running on Hyper-V to Azure, see the [migration support matrix](migrate-support-matrix-hyper-v-migration.md).
-To set up discovery and assessment of servers running on Hyper-V, you create a project, and add the Azure Migrate: Discovery and assessment tool to the project. After the tool is added, you deploy the [Azure Migrate appliance](migrate-appliance.md). The appliance continuously discovers on-premises servers and sends server metadata and performance data to Azure. After discovery is complete, you gather discovered servers into groups, and run an assessment for a group.
+To set up discovery and assessment of servers running on Hyper-V, you create a project and add the Azure Migrate: Discovery and assessment tool to the project. After the tool is added, you deploy the [Azure Migrate appliance](migrate-appliance.md). The appliance continuously discovers on-premises servers and sends server metadata and performance data to Azure. After discovery is complete, you gather discovered servers into groups and run an assessment for a group.
## Limitations
-**Support** | **Details**
+Support | Details
|
-**Assessment limits** | You can discover and assess up to 35,000 servers in a single [project](migrate-support-matrix.md#project).
-**Project limits** | You can create multiple projects in an Azure subscription. In addition to servers on Hyper-V, a project can include servers on VMware and physical servers, up to the assessment limits for each.
-**Discovery** | The Azure Migrate appliance can discover up to 5000 servers running on Hyper-V.<br/><br/> The appliance can connect to up to 300 Hyper-V hosts.
-**Assessment** | You can add up to 35,000 servers in a single group.<br/><br/> You can assess up to 35,000 servers in a single assessment for a group.
+Assessment limits | You can discover and assess up to 35,000 servers in a single [project](migrate-support-matrix.md#project).
+Project limits | You can create multiple projects in an Azure subscription. In addition to servers on Hyper-V, a project can include servers on VMware and physical servers, up to the assessment limits for each.
+Discovery | The Azure Migrate appliance can discover up to 5,000 servers running on Hyper-V.<br/><br/> The appliance can connect to up to 300 Hyper-V hosts.
+Assessment | You can add up to 35,000 servers in a single group.<br/><br/> You can assess up to 35,000 servers in a single assessment for a group.
[Learn more](concepts-assessment-calculation.md) about assessments.
## Hyper-V host requirements
-| **Support** | **Details**
+| Support | Details
| :- | :- |
-| **Hyper-V host** | The Hyper-V host can be standalone or deployed in a cluster.<br/><br/> The Hyper-V host can run Windows Server 2022, Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2. Server core installations of these operating systems are also supported. <br/>You can't assess servers located on Hyper-V hosts running Windows Server 2012.
-| **Permissions** | You need Administrator permissions on the Hyper-V host. <br/> If you don't want to assign Administrator permissions, create a local or domain user account, and add the user account to these groups- Remote Management Users, Hyper-V Administrators, and Performance Monitor Users. |
-| **PowerShell remoting** | [PowerShell remoting](/powershell/module/microsoft.powershell.core/enable-psremoting) must be enabled on each Hyper-V host. |
-| **Hyper-V Replica** | If you use Hyper-V Replica (or you have multiple servers with the same server identifiers), and you discover both the original and replicated servers using Azure Migrate, the assessment generated by Azure Migrate might not be accurate. |
+| Hyper-V host | The Hyper-V host can be standalone or deployed in a cluster.<br/><br/> The Hyper-V host can run Windows Server 2022, Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2. Server core installations of these operating systems are also supported. <br/>You can't assess servers located on Hyper-V hosts running Windows Server 2012.
+| Permissions | You need Administrator permissions on the Hyper-V host. <br/> If you don't want to assign Administrator permissions, create a local or domain user account and add the user account to these groups: Remote Management Users, Hyper-V Administrators, and Performance Monitor Users. A sketch of this setup follows the table. |
+| PowerShell remoting | [PowerShell remoting](/powershell/module/microsoft.powershell.core/enable-psremoting) must be enabled on each Hyper-V host. |
+| Hyper-V Replica | If you use Hyper-V Replica (or you have multiple servers with the same server identifiers), and you discover both the original and replicated servers by using Azure Migrate and Modernize, the assessment generated by Azure Migrate and Modernize might not be accurate. |
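As a minimal sketch of the Permissions and PowerShell remoting rows above (the account name is hypothetical; run in an elevated session on each Hyper-V host), the setup might look like this:

```powershell
# Enable PowerShell remoting so the appliance can open CIM sessions over WinRM (port 5985)
Enable-PSRemoting -Force

# Add a hypothetical non-administrator discovery account to the groups the appliance requires
$account = 'CONTOSO\migrate-discovery'   # replace with your local or domain account
foreach ($group in 'Remote Management Users', 'Hyper-V Administrators', 'Performance Monitor Users') {
    # -ErrorAction SilentlyContinue skips the error if the account is already a member
    Add-LocalGroupMember -Group $group -Member $account -ErrorAction SilentlyContinue
}
```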
## Server requirements
-| **Support** | **Details**
+| Support | Details
| :-- | :- |
-| **Operating system** | All operating systems can be assessed for migration. |
-| **Integration Services** | [Hyper-V Integration Services](/virtualization/hyper-v-on-windows/reference/integration-services) must be running on servers that you assess, in order to capture operating system information. |
-| **Storage** | Local Disk, DAS, JBOD, Storage Spaces, CSV, SMB. These Hyper-V Host storages on which VHD/VHDX are stored are supported. <br/> IDE and SCSI virtual controllers are supported|
+| Operating system | All operating systems can be assessed for migration. |
+| Integration Services | [Hyper-V Integration Services](/virtualization/hyper-v-on-windows/reference/integration-services) must be running on servers that you assess to capture operating system information. A quick check follows this table. |
+| Storage | Hyper-V host storage on which the VHD/VHDX files are stored can be a local disk, DAS, JBOD, Storage Spaces, CSV, or SMB. <br/> IDE and SCSI virtual controllers are supported.|
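As a quick, illustrative check for the Integration Services row above (run on the Hyper-V host; the VM name is hypothetical):

```powershell
# List the integration services for a VM and confirm they're enabled and healthy
Get-VMIntegrationService -VMName 'hypothetical-vm01' |
    Format-Table Name, Enabled, PrimaryStatusDescription
```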
## Azure Migrate appliance requirements
-Azure Migrate uses the [Azure Migrate appliance](migrate-appliance.md) for discovery and assessment. You can deploy the appliance using a compressed Hyper-V VHD that you download from the portal or using a [PowerShell script](deploy-appliance-script.md).
+Azure Migrate and Modernize uses the [Azure Migrate appliance](migrate-appliance.md) for discovery and assessment. You can deploy the appliance by using a compressed Hyper-V VHD that you download from the portal or by using a [PowerShell script](deploy-appliance-script.md). For more information:
- Learn about [appliance requirements](migrate-appliance.md#appliancehyper-v) for Hyper-V.
- Learn about URLs that the appliance needs to access in [public](migrate-appliance.md#public-cloud-urls) and [government](migrate-appliance.md#government-cloud-urls) clouds.
-- In Azure Government, you must deploy the appliance [using the script](deploy-appliance-script-government.md).
+- [Use the script](deploy-appliance-script-government.md) to deploy the appliance in Azure Government.
## Port access
The following table summarizes port requirements for assessment.
-**Device** | **Connection**
+Device | Connection
|
-**Appliance** | Inbound connections on TCP port 3389 to allow remote desktop connections to the appliance.<br/><br/> Inbound connections on port 44368 to remotely access the appliance management app using the URL: ``` https://<appliance-ip-or-name>:44368 ```<br/><br/> Outbound connections on ports 443 (HTTPS), to send discovery and performance metadata to Azure Migrate.
-**Hyper-V host/cluster** | Inbound connection on WinRM port 5985 (HTTP) to pull metadata and performance data for servers on Hyper-V, using a Common Information Model (CIM) session.
-**Servers** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22 (TCP) to perform software inventory and agentless dependency analysis
+Appliance | Inbound connections on TCP port 3389 to allow remote desktop connections to the appliance.<br/><br/> Inbound connections on port 44368 to remotely access the appliance management app by using the URL: ``` https://<appliance-ip-or-name>:44368 ```<br/><br/> Outbound connections on ports 443 (HTTPS) to send discovery and performance metadata to Azure Migrate and Modernize.
+Hyper-V host/cluster | Inbound connection on WinRM port 5985 (HTTP) to pull metadata and performance data for servers on Hyper-V by using a Common Information Model (CIM) session.
+Servers | Windows servers need access on port 5985 (HTTP). Linux servers need access on port 22 (TCP) to perform software inventory and agentless dependency analysis. A connectivity check sketch follows this table.
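As a minimal sketch of a pre-discovery connectivity check (host and server names are hypothetical; run from the Azure Migrate appliance):

```powershell
# Check WinRM to a Hyper-V host and the ports used for software inventory and dependency analysis
Test-NetConnection -ComputerName 'hyperv-host01.contoso.local' -Port 5985   # Hyper-V host (WinRM over HTTP)
Test-NetConnection -ComputerName 'winserver01.contoso.local' -Port 5985     # Windows server
Test-NetConnection -ComputerName 'linuxserver01.contoso.local' -Port 22     # Linux server (SSH)
```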
## Software inventory requirements
-In addition to discovering servers, Azure Migrate: Discovery and assessment can perform software inventory on servers. Software inventory provides the list of applications, roles and features running on Windows and Linux servers, discovered using Azure Migrate. It helps you to identify and plan a migration path tailored for your on-premises workloads.
+In addition to discovering servers, Azure Migrate: Discovery and assessment can perform software inventory on servers. Software inventory provides the list of applications, roles, and features running on Windows and Linux servers that are discovered by using Azure Migrate and Modernize. It helps you to identify and plan a migration path tailored for your on-premises workloads.
Support | Details |
-**Supported servers** | You can perform software inventory on up to 5,000 servers running across Hyper-V host(s)/cluster(s) added to each Azure Migrate appliance.
-**Operating systems** | All Windows and Linux versions with [Hyper-V integration services](/virtualization/hyper-v-on-windows/reference/integration-services) enabled.
-**Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.<br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers to pull the application data: list, tail, awk, grep, locate, head, sed, ps, print, sort, uniq. Based on OS type and the type of package manager being used, here are some additional commands: rpm/snap/dpkg, yum/apt-cache, mssql-server.
-**Server access** | You can add multiple domain and non-domain (Windows/Linux) credentials in the appliance configuration manager for software inventory.<br /><br /> You must have a guest user account for Windows servers and a standard user account (non-`sudo` access) for all Linux servers.
-**Port access** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22(TCP).<br /> <br />If using domain credentials, the Azure Migrate appliance must be able to connect to the following TCP and UDP ports: <br /><br />TCP 135 – RPC Endpoint<br />TCP 389 – LDAP<br />TCP 636 – LDAP SSL<br />TCP 445 – SMB<br />TCP/UDP 88 – Kerberos authentication<br />TCP/UDP 464 – Kerberos change operations
-**Discovery** | Software inventory is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the information about the software inventory from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> Software inventory is agentless. No agent is installed on the servers.
+Supported servers | You can perform software inventory on up to 5,000 servers running across Hyper-V hosts/clusters added to each Azure Migrate appliance.
+Operating systems | All Windows and Linux versions with [Hyper-V integration services](/virtualization/hyper-v-on-windows/reference/integration-services) enabled.
+Server requirements | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.<br/><br/> Linux servers must have Secure Shell (SSH) connectivity enabled, and the following commands must be able to run on the Linux servers to pull the application data: list, tail, awk, grep, locate, head, sed, ps, print, sort, uniq. Based on the OS type and the type of package manager being used, here are some more commands: rpm/snap/dpkg, yum/apt-cache, mssql-server.
+Server access | You can add multiple domain and nondomain (Windows/Linux) credentials in the appliance configuration manager for software inventory.<br /><br /> You must have a guest user account for Windows servers and a standard user account (non-sudo access) for all Linux servers.
+Port access | Windows servers need access on port 5985 (HTTP). Linux servers need access on port 22 (TCP).<br /> <br />If you use domain credentials, the Azure Migrate appliance must be able to connect to the following TCP and UDP ports: <br /><br />TCP 135 – RPC Endpoint<br />TCP 389 – LDAP<br />TCP 636 – LDAP SSL<br />TCP 445 – SMB<br />TCP/UDP 88 – Kerberos authentication<br />TCP/UDP 464 – Kerberos change operations
+Discovery | Software inventory is performed by directly connecting to the servers by using the server credentials added on the appliance. <br/><br/> The appliance gathers the information about the software inventory from Windows servers by using PowerShell remoting and from Linux servers by using the SSH connection. <br/><br/> Software inventory is agentless. No agent is installed on the servers.
## SQL Server instance and database discovery requirements
-[Software inventory](how-to-discover-applications.md) identifies SQL Server instances. Using this information, the appliance attempts to connect to respective SQL Server instances through the Windows authentication or SQL Server authentication credentials that are provided in the appliance configuration manager. Appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+[Software inventory](how-to-discover-applications.md) identifies SQL Server instances. The appliance uses this information and attempts to connect to respective SQL Server instances through the Windows authentication or SQL Server authentication credentials that are provided in the appliance configuration manager. The appliance can connect to only those SQL Server instances to which it has network line of sight. Software inventory by itself might not need network line of sight.
After the appliance is connected, it gathers configuration and performance data for SQL Server instances and databases. SQL Server configuration data is updated once every 24 hours. Performance data is captured every 30 seconds.
Support | Details |
-**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments and IaaS Servers of other public clouds such as AWS, GCP, etc. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. It's recommended that you ensure that an appliance is scoped to discover less than 600 servers running SQL to avoid scaling issues.
-**Windows servers** | Windows Server 2008 and later are supported.
-**Linux servers** | Currently not supported.
-**Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
-**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
-**SQL Server versions** | SQL Server 2008 and later are supported.
-**SQL Server editions** | Enterprise, Standard, Developer, and Express editions are supported.
-**Supported SQL configuration** | Discovery of standalone, highly available, and disaster protected SQL deployments is supported. Discovery of HADR SQL deployments powered by Always On Failover Cluster Instances and Always On Availability Groups is also supported.
-**Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) isn't supported.
+Supported servers | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and physical/bare-metal environments and infrastructure as a service (IaaS) servers of other public clouds, such as Amazon Web Services and Google Cloud Platform. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. We recommend that you ensure that an appliance is scoped to discover fewer than 600 servers running SQL to avoid scaling issues.
+Windows servers | Windows Server 2008 and later are supported.
+Linux servers | Currently not supported.
+Authentication mechanism | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
+SQL Server access | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance.
+SQL Server versions | SQL Server 2008 and later are supported.
+SQL Server editions | Enterprise, Standard, Developer, and Express editions are supported.
+Supported SQL configuration | Discovery of standalone, highly available, and disaster-protected SQL deployments is supported. Discovery of high-availability and disaster recovery SQL deployments powered by Always On failover cluster instances and Always On availability groups is also supported.
+Supported SQL services | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services, SQL Server Integration Services, and SQL Server Analysis Services isn't supported.
> [!NOTE]
-> By default, Azure Migrate uses the most secure way of connecting to SQL instances i.e. Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority.
+> By default, Azure Migrate and Modernize uses the most secure way of connecting to SQL instances. That is, Azure Migrate and Modernize encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the `TrustServerCertificate` property to `true`. Also, the transport layer uses Secure Sockets Layer (SSL) to encrypt the channel and bypass the certificate chain to validate trust. For this reason, the appliance server must be set up to trust the certificate's root authority.
>
-> However, you can modify the connection settings, by selecting **Edit SQL Server connection properties** on the appliance.[Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
+> However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose. An illustrative connection test follows this note.
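As a minimal sketch of the equivalent client-side settings (the server name is hypothetical, and this only illustrates the `Encrypt`/`TrustServerCertificate` behavior described above; it isn't the appliance's own code path):

```powershell
# Open an encrypted test connection that trusts the server certificate, mirroring the default described above
$connectionString = 'Server=sqlhost01.contoso.local;Integrated Security=True;Encrypt=True;TrustServerCertificate=True'
$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
$connection.Open()
$connection.State    # 'Open' indicates the instance is reachable with the given settings
$connection.Close()
```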
### Configure the custom login for SQL Server discovery
-The following are sample scripts for creating a login and provisioning it with the necessary permissions.
+Use the following sample scripts to create a login and provision it with the necessary permissions.
-#### Windows Authentication
+#### Windows authentication
```sql
-- Create a login to run the assessment
The following are sample scripts for creating a login and provisioning it with t
PRINT N'Login creation failed'
GO
- -- Create user in every database other than tempdb, model and secondary AG databases(with connection_type = ALL) and provide minimal read-only permissions.
+ -- Create a user in every database other than tempdb, model, and secondary AG databases (with connection_type = ALL) and provide minimal read-only permissions.
USE master;
EXECUTE sp_MSforeachdb '
USE [?];
The following are sample scripts for creating a login and provisioning it with t
--GO
```
-#### SQL Server Authentication
+#### SQL Server authentication
```sql
-- Create a login to run the assessment
use master;
- -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID for the SQL login.
+ -- NOTE: SQL instances that host replicas of Always On availability groups must use the same SID for the SQL login.
-- After the account is created in one of the members, copy the SID output from the script and include this value
-- when executing against the remaining replicas.
-- When the SID needs to be specified, add the value to the @SID variable definition below.
The following are sample scripts for creating a login and provisioning it with t
PRINT N'Login creation failed' GO
- -- Create user in every database other than tempdb, model and secondary AG databases(with connection_type = ALL) and provide minimal read-only permissions.
+ -- Create a user in every database other than tempdb, model, and secondary AG databases (with connection_type = ALL) and provide minimal read-only permissions.
USE master;
EXECUTE sp_MSforeachdb '
USE [?];
The following are sample scripts for creating a login and provisioning it with t
## Web apps discovery requirements
-[Software inventory](how-to-discover-applications.md) identifies web server role existing on discovered servers. If a server is found to have a web server installed, Azure Migrate discovers web apps on the server.
-The user can add both domain and non-domain credentials on the appliance. Ensure that the account used has local admin privileges on source servers. Azure Migrate automatically maps credentials to the respective servers, so one doesnΓÇÖt have to map them manually. Most importantly, these credentials are never sent to Microsoft and remain on the appliance running in the source environment.
-After the appliance is connected, it gathers configuration data for ASP.NET web apps(IIS web server) and Java web apps(Tomcat servers). Web apps configuration data is updated once every 24 hours.
+[Software inventory](how-to-discover-applications.md) identifies the web server role that exists on discovered servers. If a server is found to have a web server installed, Azure Migrate and Modernize discovers web apps on the server.
+
+You can add both domain and nondomain credentials on the appliance. Ensure that the account used has local admin privileges on source servers. Azure Migrate and Modernize automatically maps credentials to the respective servers, so you don't have to map them manually. These credentials are never sent to Microsoft and remain on the appliance running in the source environment.
+
+After the appliance is connected, it gathers configuration data for ASP.NET web apps (IIS web server) and Java web apps (Tomcat servers). Web apps configuration data is updated once every 24 hours.
Support | ASP.NET web apps | Java web apps | |
-**Stack** | VMware, Hyper-V, and Physical servers | VMware, Hyper-V, and Physical servers
-**Windows servers** | Windows Server 2008 R2 and later are supported. | Not supported.
-**Linux servers** | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, Red Hat Enterprise Linux 5/6/7.
-**Web server versions** | IIS 7.5 and later. | Tomcat 8 or later.
-**Required privileges** | local admin | root or sudo user
+Stack | VMware, Hyper-V, and physical servers. | VMware, Hyper-V, and physical servers.
+Windows servers | Windows Server 2008 R2 and later are supported. | Not supported.
+Linux servers | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, and Red Hat Enterprise Linux 5/6/7.
+Web server versions | IIS 7.5 and later. | Tomcat 8 or later.
+Required privileges | Local admin. | Root or sudo user.
> [!NOTE]
> Data is always encrypted at rest and during transit.
## Dependency analysis requirements (agentless)
-[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers, which can be easily visualized with a map view in Azure Migrate project and can be used to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis:
+[Dependency analysis](concepts-dependency-visualization.md) helps you analyze the dependencies between the discovered servers. You can easily visualize dependencies with a map view in an Azure Migrate project. You can use dependencies to group related servers for migration to Azure. The following table summarizes the requirements for setting up agentless dependency analysis.
Support | Details |
-**Supported servers** | You can enable agentless dependency analysis on up to 1000 servers (across multiple Hyper-V hosts/clusters), discovered per appliance.
-**Operating systems** | All Windows and Linux versions with [Hyper-V integration services](/virtualization/hyper-v-on-windows/about/supported-guest-os) enabled.
-**Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers: touch, chmod, cat, ps, grep, echo, sha256sum, awk, netstat, ls, sudo, dpkg, rpm, sed, getcap, which, date.
-**Windows server access** | A user account (local or domain) with administrator permissions on servers.
-**Linux server access** | Sudo user account with permissions to execute ls and netstat commands. If you're providing a sudo user account, ensure that you enable **NOPASSWD** for the account to run the required commands without prompting for a password every time sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
-**Port access** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22(TCP).
-**Discovery method** | Agentless dependency analysis is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the dependency information from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> No agent is installed on the servers to pull dependency data.
+Supported servers | You can enable agentless dependency analysis on up to 1,000 servers (across multiple Hyper-V hosts/clusters) discovered per appliance.
+Operating systems | All Windows and Linux versions with [Hyper-V integration services](/virtualization/hyper-v-on-windows/about/supported-guest-os) enabled.
+Server requirements | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers: touch, chmod, cat, ps, grep, echo, sha256sum, awk, netstat, ls, sudo, dpkg, rpm, sed, getcap, which, date.
+Windows server access | A user account (local or domain) with administrator permissions on servers.
+Linux server access | A sudo user account with permissions to execute ls and netstat commands. If you're providing a sudo user account, ensure that you enable **NOPASSWD** for the account to run the required commands without prompting for a password every time a sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set by using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+Port access | Windows servers need access on port 5985 (HTTP). Linux servers need access on port 22 (TCP).
+Discovery method | Agentless dependency analysis is performed by directly connecting to the servers by using the server credentials added on the appliance. <br/><br/> The appliance gathers the dependency information from Windows servers by using PowerShell remoting and from Linux servers by using the SSH connection. <br/><br/> No agent is installed on the servers to pull dependency data.
## Agent-based dependency analysis requirements
[Dependency analysis](concepts-dependency-visualization.md) helps you to identify dependencies between on-premises servers that you want to assess and migrate to Azure. The table summarizes the requirements for setting up agent-based dependency analysis. Hyper-V currently only supports agent-based dependency visualization.
-**Requirement** | **Details**
+Requirement | Details
|
-**Before deployment** | You should have a project in place, with the Azure Migrate: Discovery and assessment tool added to the project.<br/><br/> You deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers.<br/><br/> [Learn how](create-manage-projects.md) to create a project for the first time.<br/> [Learn how](how-to-assess.md) to add Azure Migrate: Discovery and assessment tool to an existing project.<br/> Learn how to set up the appliance for discovery and assessment of [servers on Hyper-V](how-to-set-up-appliance-hyper-v.md).
-**Azure Government** | Dependency visualization isn't available in Azure Government.
-**Log Analytics** | Azure Migrate uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br/><br/> You associate a new or existing Log Analytics workspace with a project. The workspace for a project can't be modified after it's added. <br/><br/> The workspace must be in the same subscription as the project.<br/><br/> The workspace must reside in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br/><br/> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br/><br/> In Log Analytics, the workspace associated with Azure Migrate is tagged with the Migration Project key, and the project name.
-**Required agents** | On each server you want to analyze, install the following agents:<br/><br/> The [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md).<br/> The [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md).<br/><br/> If on-premises servers aren't connected to the internet, you need to download and install Log Analytics gateway on them.<br/><br/> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
-**Log Analytics workspace** | The workspace must be in the same subscription as the project.<br/><br/> Azure Migrate supports workspaces residing in the East US, Southeast Asia, and West Europe regions.<br/><br/> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br/><br/> The workspace for a project can't be modified after it's added.
-**Costs** | The Service Map solution doesn't incur any charges for the first 180 days (from the day that you associate the Log Analytics workspace with the project)/<br/><br/> After 180 days, standard Log Analytics charges will apply.<br/><br/> Using any solution other than Service Map in the associated Log Analytics workspace will incur [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br/><br/> When the project is deleted, the workspace isn't deleted along with it. After deleting the project, Service Map usage isn't free, and each node will be charged as per the paid tier of Log Analytics workspace/<br/><br/>If you have projects that you created before Azure Migrate general availability (GA- 28 February 2018), you might have incurred additional Service Map charges. To ensure payment after 180 days only, we recommend that you create a new project, since existing workspaces before GA are still chargeable.
-**Management** | When you register agents to the workspace, you use the ID and key provided by the project.<br/><br/> You can use the Log Analytics workspace outside Azure Migrate.<br/><br/> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br/><br/> Don't delete the workspace created by Azure Migrate, unless you delete the project. If you do, the dependency visualization functionality won't work as expected.
-**Internet connectivity** | If servers aren't connected to the internet, you need to install the Log Analytics gateway on them.
-**Azure Government** | Agent-based dependency analysis isn't supported.
+Before deployment | You should have a project in place with the Azure Migrate: Discovery and assessment tool added to the project.<br/><br/> You deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers.<br/><br/> [Learn how](create-manage-projects.md) to create a project for the first time.<br/> [Learn how](how-to-assess.md) to add the Azure Migrate: Discovery and assessment tool to an existing project.<br/> Learn how to set up the appliance for discovery and assessment of [servers on Hyper-V](how-to-set-up-appliance-hyper-v.md).
+Azure Government | Dependency visualization isn't available in Azure Government.
+Log Analytics | Azure Migrate and Modernize uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br/><br/> You associate a new or existing Log Analytics workspace with a project. You can't modify the workspace for a project after you add the workspace. <br/><br/> The workspace must be in the same subscription as the project.<br/><br/> The workspace must reside in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br/><br/> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br/><br/> In Log Analytics, the workspace associated with Azure Migrate and Modernize is tagged with the Migration Project key and the project name.
+Required agents | On each server you want to analyze, install the following agents:<br/><br/> [Microsoft Monitoring agent (MMA)](../azure-monitor/agents/agent-windows.md)<br/> [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)<br/><br/> If on-premises servers aren't connected to the internet, you need to download and install the Log Analytics gateway on them.<br/><br/> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and [MMA](how-to-create-group-machine-dependencies.md#install-the-mma).
+Log Analytics workspace | The workspace must be in the same subscription as the project.<br/><br/> Azure Migrate and Modernize supports workspaces residing in the East US, Southeast Asia, and West Europe regions.<br/><br/> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br/><br/> You can't modify the workspace for a project after you add the workspace.
+Costs | The Service Map solution doesn't incur any charges for the first 180 days. The count starts from the day that you associate the Log Analytics workspace with the project.<br/><br/> After 180 days, standard Log Analytics charges apply.<br/><br/> Using any solution other than Service Map in the associated Log Analytics workspace incurs [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br/><br/> When the project is deleted, the workspace isn't deleted along with it. After you delete the project, Service Map usage isn't free. Each node is charged according to the paid tier of the Log Analytics workspace.<br/><br/>If you have projects that you created before Azure Migrate general availability (GA on February 28, 2018), you might incur other Service Map charges. To ensure payment after 180 days only, we recommend that you create a new project. Workspaces that were created before GA are still chargeable.
+Management | When you register agents to the workspace, you use the ID and key provided by the project.<br/><br/> You can use the Log Analytics workspace outside Azure Migrate and Modernize.<br/><br/> If you delete the associated project, the workspace isn't deleted automatically. [Delete it manually](../azure-monitor/logs/manage-access.md).<br/><br/> Don't delete the workspace created by Azure Migrate and Modernize unless you delete the project. If you do, the dependency visualization functionality doesn't work as expected.
+Internet connectivity | If servers aren't connected to the internet, you need to install the Log Analytics gateway on them.
+Azure Government | Agent-based dependency analysis isn't supported.
## Next steps
-[Prepare for assessment of servers running on Hyper-V](./tutorial-discover-hyper-v.md).
+Prepare for [assessment of servers running on Hyper-V](./tutorial-discover-hyper-v.md).
migrate Migrate Support Matrix Physical Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical-migration.md
Title: Support for physical server migration in Azure Migrate
-description: Learn about support for physical server migration in Azure Migrate.
+ Title: Support for physical server migration in Azure Migrate and Modernize
+description: Learn about support for physical server migration in Azure Migrate and Modernize.
ms.
# Support matrix for migration of physical servers, AWS VMs, and GCP VMs
-This article summarizes support settings and limitations for migrating physical servers, AWS VMs, and GCP VMs to Azure with [Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool) . If you're looking for information about assessing physical servers for migration to Azure, review the [assessment support matrix](migrate-support-matrix-physical.md).
+This article summarizes support settings and limitations for migrating physical servers, Amazon Web Services (AWS) virtual machines (VMs), and Google Cloud Platform (GCP) VMs to Azure with the [Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool) tool. If you're looking for information about assessing physical servers for migration to Azure, see the [assessment support matrix](migrate-support-matrix-physical.md).
-## Migrating machines as physical
+## Migrate machines as physical
-You can migrate on-premises machines as physical servers, using agent-based replication. Using this tool, you can migrate a wide range of machines to Azure:
+You can migrate on-premises machines as physical servers by using agent-based replication. By using this tool, you can migrate a wide range of machines to Azure, such as:
- On-premises physical servers.
-- VMs virtualized by platforms such as Xen, KVM.
-- Hyper-V VMs or VMware VMs if for some reason you don't want to use the standard [Hyper-V](tutorial-migrate-hyper-v.md) or [VMware](server-migrate-overview.md) flows.
+- VMs virtualized by platforms, such as Xen and KVM.
+- Hyper-V VMs or VMware VMs, if for some reason you don't want to use the standard [Hyper-V](tutorial-migrate-hyper-v.md) or [VMware](server-migrate-overview.md) flows.
- VMs running in private clouds.
-- VMs running in public clouds, including Amazon Web Services (AWS) or Google Cloud Platform (GCP).
-
+- VMs running in public clouds, including AWS or GCP.
## Migration limitations
-You can select up to 10 machines at once for replication. If you want to migrate more machines, then replicate in groups of 10.
-
+You can select up to 10 machines at once for replication. If you want to migrate more machines, replicate them in groups of 10.
## Physical server requirements
-The table summarizes support for physical servers, AWS VMs, and GCP VMs that you want to migrate using agent-based migration.
+The following table summarizes support for physical servers, AWS VMs, and GCP VMs that you want to migrate by using agent-based migration.
-**Support** | **Details**
+Support | Details
|
-**Machine workload** | Azure Migrate supports migration of any workload (say Active Directory, SQL server, etc.) running on a supported machine.
-**Operating systems** | For the latest information, review the [operating system support](../site-recovery/vmware-physical-azure-support-matrix.md#replicated-machines) for Site Recovery. Azure Migrate provides identical operating system support.
-**Linux file system/guest storage** | For the latest information, review the [Linux file system support](../site-recovery/vmware-physical-azure-support-matrix.md#linux-file-systemsguest-storage) for Site Recovery. Azure Migrate provides identical Linux file system support.
-**Network/Storage** | For the latest information, review the [network](../site-recovery/vmware-physical-azure-support-matrix.md#network) and [storage](../site-recovery/vmware-physical-azure-support-matrix.md#storage) prerequisites for Site Recovery. Azure Migrate provides identical network/storage requirements.
-**Azure requirements** | For the latest information, review the [Azure network](../site-recovery/vmware-physical-azure-support-matrix.md#azure-vm-network-after-failover), [storage](../site-recovery/vmware-physical-azure-support-matrix.md#azure-storage), and [compute](../site-recovery/vmware-physical-azure-support-matrix.md#azure-compute) requirements for Site Recovery. Azure Migrate has identical requirements for physical server migration.
-**Mobility service** | Install the Mobility service agent on each machine you want to migrate.
-**UEFI boot** | Supported. UEFI-based machines will be migrated to Azure generation 2 VMs. <br/><br/> The OS disk should have up to four partitions, and volumes should be formatted with NTFS.
-**UEFI - Secure boot** | Not supported for migration.
-**Target disk** | Machines can be migrated only to managed disks (standard HDD, standard SSD, premium SSD) in Azure.
-**Ultra disk** | Ultra disk migration isn't supported from the Azure Migrate portal. You have to do an out-of-band migration for the disks that are recommended as Ultra disks. That is, you can migrate selecting it as premium disk type and change it to Ultra disk after migration.
-**Disk size** | up to 2 TB OS disk for gen 1 VM; up to 4 TB OS disk for gen 2 VM; 32 TB for data disks.
-**Disk limits** | Up to 63 disks per machine.
-**Encrypted disks/volumes** | Machines with encrypted disks/volumes aren't supported for migration.
-**Shared disk cluster** | Not supported.
-**Independent disks** | Supported.
-**Passthrough disks** | Supported.
-**NFS** | NFS volumes mounted as volumes on the machines won't be replicated.
-**ReiserFS** | Not supported.
-**iSCSI targets** | Machines with iSCSI targets aren't supported for agentless migration.
-**Multipath IO** | Supported for Windows servers with Microsoft or vendor-specific Device Specific Module (DSM) installed.
-**Teamed NICs** | Not supported.
-**IPv6** | Not supported.
-**PV drivers / XenServer tools** | Not supported.
--
+Machine workload | Azure Migrate and Modernize supports migration of any workload (such as Active Directory or SQL Server) running on a supported machine.
+Operating systems | For the latest information, see the [operating system (OS) support](../site-recovery/vmware-physical-azure-support-matrix.md#replicated-machines) for Azure Site Recovery. Azure Migrate and Modernize provides identical OS support.
+Linux file system/guest storage | For the latest information, see the [Linux file system support](../site-recovery/vmware-physical-azure-support-matrix.md#linux-file-systemsguest-storage) for Site Recovery. Azure Migrate and Modernize provides identical Linux file system support.
+Network/Storage | For the latest information, see the [network](../site-recovery/vmware-physical-azure-support-matrix.md#network) and [storage](../site-recovery/vmware-physical-azure-support-matrix.md#storage) prerequisites for Site Recovery. Azure Migrate and Modernize provides identical network/storage requirements.
+Azure requirements | For the latest information, see the [Azure network](../site-recovery/vmware-physical-azure-support-matrix.md#azure-vm-network-after-failover), [storage](../site-recovery/vmware-physical-azure-support-matrix.md#azure-storage), and [compute](../site-recovery/vmware-physical-azure-support-matrix.md#azure-compute) requirements for Site Recovery. Azure Migrate and Modernize has identical requirements for physical server migration.
+Mobility service | Install the Mobility service agent on each machine you want to migrate.
+UEFI boot | Supported. UEFI-based machines are migrated to Azure generation 2 VMs. <br/><br/> The OS disk should have up to four partitions, and volumes should be formatted with NTFS.
+UEFI - Secure boot | Not supported for migration.
+Target disk | Machines can be migrated only to managed disks (standard HDD, standard SSD, premium SSD) in Azure.
+Ultra disk | Ultra disk migration isn't supported from the Azure Migrate and Modernize portal. You have to do an out-of-band migration for the disks that are recommended as Ultra disks. That is, you can migrate the disk by selecting it as a premium disk type and change it to an Ultra disk after migration.
+Disk size | Up to 2-TB OS disk for gen 1 VM. Up to 4-TB OS disk for gen 2 VM and 32 TB for data disks.
+Disk limits | Up to 63 disks per machine.
+Encrypted disks/volumes | Machines with encrypted disks/volumes aren't supported for migration.
+Shared disk cluster | Not supported.
+Independent disks | Supported.
+Passthrough disks | Supported.
+NFS | NFS volumes mounted as volumes on the machines aren't replicated.
+ReiserFS | Not supported.
+iSCSI targets | Machines with iSCSI targets aren't supported for agentless migration.
+Multipath IO | Supported for Windows servers with Microsoft or vendor-specific Device-Specific Module installed.
+Teamed NICs | Not supported.
+IPv6 | Not supported.
+PV drivers / XenServer tools | Not supported.
## Replication appliance requirements
-If you set up the replication appliance manually, then make sure that it complies with the requirements summarized in the table. When you set up the Azure Migrate replication appliance as an VMware VM using the OVA template provided in the Azure Migrate hub, the appliance is set up with Windows Server 2016, and complies with the support requirements.
+If you set up the replication appliance manually, make sure that it complies with the requirements summarized in the table. When you set up the Azure Migrate replication appliance as a VMware VM by using the Open Virtual Appliance template provided in the Azure Migrate and Modernize hub, the appliance is set up with Windows Server 2016 and complies with the support requirements.
- Learn about [replication appliance requirements](migrate-replication-appliance.md#appliance-requirements).
- Install MySQL on the appliance. Learn about [installation options](migrate-replication-appliance.md#mysql-installation).
If you set up the replication appliance manually, then make sure that it complie
All on-premises VMs replicated to Azure must meet the Azure VM requirements summarized in this table. When Site Recovery runs a prerequisites check for replication, the check fails if some of the requirements aren't met.
-**Component** | **Requirements** | **Details**
+Component | Requirements | Details
| |
-Guest operating system | Verifies supported operating systems.<br/> You can migrate any workload running on a supported operating system. | Check fails if unsupported.
-Guest operating system architecture | 64-bit. | Check fails if unsupported.
+Guest operating system | Verifies supported operating systems.<br/> You can migrate any workload running on a supported OS. | Check fails if unsupported.
+Guest operating system architecture | 64-bit. | Check fails if unsupported.
Operating system disk size | Up to 2,048 GB. | Check fails if unsupported.
-Operating system disk count | 1 | Check fails if unsupported.
+Operating system disk count | 1. | Check fails if unsupported.
Data disk count | 64 or less. | Check fails if unsupported.
-Data disk size | Up to 32 TB | Check fails if unsupported.
-Network adapters | Multiple adapters are supported.
+Data disk size | Up to 32 TB. | Check fails if unsupported.
+Network adapters | Multiple adapters are supported.
Shared VHD | Not supported. | Check fails if unsupported.
FC disk | Not supported. | Check fails if unsupported.
BitLocker | Not supported. | Disable BitLocker before you enable replication for a machine.
VM name | From 1 to 63 characters.<br/> Restricted to letters, numbers, and hyphens.<br/><br/> The machine name must start and end with a letter or number. | Update the value in the machine properties in Site Recovery.
-Connect after migration-Windows | To connect to Azure VMs running Windows after migration:<br/> - Before migration enables RDP on the on-premises VM. Make sure that TCP, and UDP rules are added for the **Public** profile, and that RDP is allowed in **Windows Firewall** > **Allowed Apps**, for all profiles.<br/> For site-to-site VPN access, enable RDP and allow RDP in **Windows Firewall** -> **Allowed apps and features** for **Domain and Private** networks. In addition, check that the operating system's SAN policy is set to **OnlineAll**. [Learn more](prepare-for-migration.md).
-Connect after migration-Linux | To connect to Azure VMs after migration using SSH:<br/> Before migration, on the on-premises machine, check that the Secure Shell service is set to Start, and that firewall rules allow an SSH connection.<br/> After failover, on the Azure VM, allow incoming connections to the SSH port for the network security group rules on the failed over VM, and for the Azure subnet to which it's connected. In addition, add a public IP address for the VM.
-
+Connect after migration-Windows | To connect to Azure VMs running Windows after migration:<br/> - Before migration, enable Remote Desktop Protocol (RDP) on the on-premises VM. Make sure that TCP and UDP rules are added for the **Public** profile, and that RDP is allowed in **Windows Firewall** > **Allowed Apps** for all profiles.<br/> - For site-to-site virtual private network access, enable RDP and allow RDP in **Windows Firewall** > **Allowed apps and features** for **Domain and Private** networks. Also check that the OS storage area network policy is set to **OnlineAll**. [Learn more](prepare-for-migration.md).
+Connect after migration-Linux | To connect to Azure VMs after migration by using Secure Shell (SSH):<br/> - Before migration, on the on-premises machine, check that the SSH service is set to **Start** and that firewall rules allow an SSH connection.<br/> - After failover, on the Azure VM, allow incoming connections to the SSH port for the network security group rules on the failed over VM, and for the Azure subnet to which it's connected. Also add a public IP address for the VM.
## Next steps
migrate Tutorial Migrate Gcp Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-gcp-virtual-machines.md
Title: Discover, assess, and migrate Google Cloud Platform (GCP) VM instances to Azure
-description: This article describes how to migrate GCP VMs to Azure with Azure Migrate.
+ Title: Discover, assess, and migrate Google Cloud Platform (GCP) VMs to Azure
+description: This article describes how to migrate GCP VMs to Azure with Azure Migrate and Modernize.
# Discover, assess, and migrate Google Cloud Platform (GCP) VMs to Azure
-This tutorial shows you how to discover, assess, and migrate Google Cloud Platform (GCP) virtual machines (VMs) to Azure VMs, using Azure Migrate: Server Assessment and Migration and modernization tools.
+This tutorial shows you how to discover, assess, and migrate Google Cloud Platform (GCP) virtual machines (VMs) to Azure VMs by using the Azure Migrate: Server Assessment and Migration and modernization tools.
-
-In this tutorial, you will learn how to:
+In this tutorial, you learn how to:
> [!div class="checklist"]
>
> * Verify prerequisites for migration.
-> * Prepare Azure resources with the Migration and modernization tool. Set up permissions for your Azure account and resources to work with Azure Migrate.
+> * Prepare Azure resources with the Migration and modernization tool. Set up permissions for your Azure account and resources to work with Azure Migrate and Modernize.
> * Prepare GCP VM instances for migration.
-> * Add the Migration and modernization tool in the Azure Migrate hub.
+> * Add the Migration and modernization tool in the Azure Migrate and Modernize hub.
> * Set up the replication appliance and deploy the configuration server.
> * Install the Mobility service on GCP VMs you want to migrate.
> * Enable replication for VMs.
If you don't have an Azure subscription, create a [free account](https://azure.m
## Discover and assess
-Before you migrate to Azure, we recommend that you perform a VM discovery and migration assessment. This assessment helps right-size your GCP VMs for migration to Azure, and estimate potential Azure run costs.
+Before you migrate to Azure, we recommend that you perform a VM discovery and migration assessment. This assessment helps right-size your GCP VMs for migration to Azure and estimate potential Azure run costs.
-Set up an assessment as follows:
+To set up an assessment:
1. Follow the [tutorial](./tutorial-discover-gcp.md) to set up Azure and prepare your GCP VMs for an assessment. Note that:
- - Azure Migrate uses password authentication when discovering GCP VM instances. GCP instances don't support password authentication by default. Before you can discover, you need to enable password authentication.
- - For Windows machines, allow WinRM port 5985 (HTTP). This allows remote WMI calls.
+ - Azure Migrate and Modernize uses password authentication to discover GCP VM instances. GCP instances don't support password authentication by default. Before you can discover an instance, you need to enable password authentication.
+ - For Windows machines, allow WinRM port 5985 (HTTP). This port allows remote WMI calls.
- For Linux machines:
- 1. Sign in to each Linux machine.
- 2. Open the sshd_config file: vi /etc/ssh/sshd_config
- 3. In the file, locate the **PasswordAuthentication** line, and change the value to **yes**.
- 4. Save the file and close it. Restart the ssh service.
- - If you are using a root user to discover your Linux VMs, ensure root login is allowed on the VMs.
- 1. Sign into each Linux machine
- 2. Open the sshd_config file: vi /etc/ssh/sshd_config
- 3. In the file, locate the **PermitRootLogin** line, and change the value to **yes**.
- 4. Save the file and close it. Restart the ssh service.
-
-2. Then, follow this [tutorial](./tutorial-assess-gcp.md) to set up an Azure Migrate project and appliance to discover and assess your GCP VMs.
-
-Although we recommend that you try out an assessment, performing an assessment isnΓÇÖt a mandatory step to be able to migrate VMs.
+ 1. Sign in to each Linux machine.
+ 1. Open the *sshd_config* file: `vi /etc/ssh/sshd_config`.
+ 1. In the file, locate the `PasswordAuthentication` line and change the value to `yes`.
+ 1. Save the file and close it. Restart the ssh service.
+ - If you're using a root user to discover your Linux VMs, ensure that root login is allowed on the VMs.
+ 1. Sign in to each Linux machine.
+ 1. Open the *sshd_config* file: `vi /etc/ssh/sshd_config`.
+ 1. In the file, locate the `PermitRootLogin` line and change the value to `yes`.
+ 1. Save the file and close it. Restart the SSH service. (For a scripted alternative to these file edits, see the sketch after this procedure.)
+1. Then, follow this [tutorial](./tutorial-assess-gcp.md) to set up an Azure Migrate project and appliance to discover and assess your GCP VMs.
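If you prefer to script the *sshd_config* edits from the preceding steps rather than editing the file in `vi`, the following is a minimal sketch. It assumes a systemd-managed SSH daemon; back up the file first, and note that the service name is `ssh` rather than `sshd` on some distributions, such as Debian and Ubuntu.

```
# Enable password authentication and (if needed) root login, then restart SSH.
# A sketch only: back up sshd_config first; values and service name vary by distribution.
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd   # use "ssh" instead of "sshd" on Debian/Ubuntu
```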
+Although we recommend that you try out an assessment, performing an assessment isn't a mandatory step to be able to migrate VMs.
## Prerequisites

-- Ensure that the GCP VMs you want to migrate are running a supported OS version. GCP VMs are treated like physical machines for the purpose of the migration. Review the [supported operating systems and kernel versions](../site-recovery/vmware-physical-azure-support-matrix.md#replicated-machines) for the physical server migration workflow. You can use standard commands like *hostnamectl* or *uname -a* to check the OS and kernel versions for your Linux VMs. We recommend you perform a test migration to validate if the VM works as expected before proceeding with the actual migration.
+- Ensure that the GCP VMs you want to migrate are running a supported operating system (OS) version. GCP VMs are treated like physical machines for the migration. Review the [supported operating systems and kernel versions](../site-recovery/vmware-physical-azure-support-matrix.md#replicated-machines) for the physical server migration workflow. You can use standard commands like `hostnamectl` or `uname -a` to check the OS and kernel versions for your Linux VMs (see the sketch after this list). We recommend that you perform a test migration to validate if the VM works as expected before you proceed with the actual migration.
- Make sure your GCP VMs comply with the [supported configurations](./migrate-support-matrix-physical-migration.md#physical-server-requirements) for migration to Azure.
-- Verify that the GCP VMs that you replicate to Azure comply with [Azure VM requirements.](./migrate-support-matrix-physical-migration.md#azure-vm-requirements)
-- There are some changes needed on the VMs before you migrate them to Azure.
- - For some operating systems, Azure Migrate makes these changes automatically.
- - It's important to make these changes before you begin migration. If you migrate the VM before you make the change, the VM might not boot up in Azure.
+- Verify that the GCP VMs that you replicate to Azure comply with [Azure VM requirements](./migrate-support-matrix-physical-migration.md#azure-vm-requirements).
+- Some changes are needed on the VMs before you migrate them to Azure:
+ - For some operating systems, Azure Migrate and Modernize makes these changes automatically.
+ - Make these changes before you begin migration. If you migrate the VM before you make the change, the VM might not boot up in Azure.
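For example, to confirm the OS and kernel versions on a source Linux VM before migration, run the commands mentioned in the first prerequisite:

```
hostnamectl   # shows the operating system, kernel version, and architecture
uname -a      # shows the kernel release and machine hardware name
```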
Review [Windows](prepare-for-migration.md#windows-machines) and [Linux](prepare-for-migration.md#linux-machines) changes you need to make.

### Prepare Azure resources for migration

Prepare Azure for migration with the Migration and modernization tool.
-**Task** | **Details**
+Task | Details
--- | ---
-**Create an Azure Migrate project** | Your Azure account needs Contributor or Owner permissions to [create a new project](./create-manage-projects.md).
-**Verify permissions for your Azure account** | Your Azure account needs permissions to create a VM, and write to an Azure managed disk.
+Create an Azure Migrate project | Your Azure account needs Contributor or Owner permissions to [create a new project](./create-manage-projects.md).
+Verify permissions for your Azure account | Your Azure account needs permissions to create a VM and write to an Azure managed disk.
-### Assign permissions to create project
+### Assign permissions to create a project
-1. In the Azure portal, open the subscription, and select **Access control (IAM)**.
-2. In **Check access**, find the relevant account, and click it to view permissions.
-3. You should have **Contributor** or **Owner** permissions.
+1. In the Azure portal, open the subscription and select **Access control (IAM)**.
+1. In **Check access**, find the relevant account and select it to view permissions.
+1. You should have **Contributor** or **Owner** permissions.
    - If you just created a free Azure account, you're the owner of your subscription.
    - If you're not the subscription owner, work with the owner to assign the role.

### Assign Azure account permissions
-Assign the Virtual Machine Contributor role to the Azure account. This provides permissions to:
+Assign the VM Contributor role to the Azure account. This role provides permissions to:
- Create a VM in the selected resource group.
- Create a VM in the selected virtual network.
Assign the Virtual Machine Contributor role to the Azure account. This provides
### Create an Azure network
-[Set up](../virtual-network/manage-virtual-network.md#create-a-virtual-network) an Azure virtual network (VNet). When you replicate to Azure, the Azure VMs that are created are joined to the Azure VNet that you specify when you set up migration.
+[Set up](../virtual-network/manage-virtual-network.md#create-a-virtual-network) an Azure virtual network. When you replicate to Azure, the Azure VMs that are created are joined to the Azure virtual network that you specified when you set up migration.
## Prepare GCP instances for migration
To prepare for GCP to Azure migration, you need to prepare and deploy a replicat
### Prepare a machine for the replication appliance
-The Migration and modernization tool uses a replication appliance to replicate machines to Azure. The replication appliance runs the following components.
+The Migration and modernization tool uses a replication appliance to replicate machines to Azure. The replication appliance runs the following components:
-- **Configuration server**: The configuration server coordinates communications between the GCP VMs and Azure, and manages data replication.-- **Process server**: The process server acts as a replication gateway. It receives replication data, optimizes it with caching, compression, and encryption, and sends it to a cache storage account in Azure.
+- **Configuration server**: The configuration server coordinates communications between the GCP VMs and Azure and manages data replication.
+- **Process server**: The process server acts as a replication gateway. It receives replication data and optimizes that data with caching, compression, and encryption. Then it sends the data to a cache storage account in Azure.
-Prepare for appliance deployment as follows:
+To prepare for appliance deployment:
- Set up a separate GCP VM to host the replication appliance. This instance must be running Windows Server 2012 R2 or Windows Server 2016. [Review](./migrate-replication-appliance.md#appliance-requirements) the hardware, software, and networking requirements for the appliance.
-- The appliance shouldn't be installed on a source VM that you want to replicate or on the Azure Migrate discovery and assessment appliance you may have installed before. It should be deployed on a different VM.
-- The source GCP VMs to be migrated should have a network line of sight to the replication appliance. Configure necessary Firewall rules to enable this. It is recommended that the replication appliance is deployed in the same VPC network as the source VMs to be migrated. If the replication appliance needs to be in a different VPC, the VPCs need to be connected through VPC peering.
+- The appliance shouldn't be installed on a source VM that you want to replicate or on the Azure Migrate: Discovery and assessment appliance you might have installed before. It should be deployed on a different VM.
+- The source GCP VMs to be migrated should have a network line of sight to the replication appliance. Configure necessary firewall rules to enable this capability. We recommend that you deploy the replication appliance in the same virtual private cloud (VPC) network as the source VMs to be migrated. If the replication appliance needs to be in a different VPC, the VPCs must be connected through VPC peering.
- The source GCP VMs communicate with the replication appliance on ports HTTPS 443 (control channel orchestration) and TCP 9443 (data transport) inbound for replication management and replication data transfer. The replication appliance in turn orchestrates and sends replication data to Azure over port HTTPS 443 outbound. To configure these rules, edit the security group inbound/outbound rules with the appropriate ports and source IP information.
- ![GCP firewall rules](./media/tutorial-migrate-gcp-virtual-machines/gcp-firewall-rules.png)
+ ![Screenshot that shows GCP firewall rules.](./media/tutorial-migrate-gcp-virtual-machines/gcp-firewall-rules.png)
-
- ![Edit firewall rules](./media/tutorial-migrate-gcp-virtual-machines/gcp-edit-firewall-rule.png)
+ ![Screenshot that shows editing firewall rules.](./media/tutorial-migrate-gcp-virtual-machines/gcp-edit-firewall-rule.png)
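  If you manage GCP firewall rules from the command line instead of the console, the following is a minimal sketch of the inbound rule. The network name `migration-vpc`, target tag `asr-appliance`, and source range `10.128.0.0/20` are hypothetical; substitute your own values. GCP's implied allow-egress rule typically covers the appliance's outbound HTTPS 443 traffic to Azure unless you've added restrictive egress rules.

  ```
  # Allow source VMs (hypothetical range 10.128.0.0/20) to reach the replication
  # appliance (hypothetical tag asr-appliance) on HTTPS 443 and TCP 9443.
  gcloud compute firewall-rules create allow-asr-replication \
      --network=migration-vpc \
      --direction=INGRESS \
      --action=ALLOW \
      --rules=tcp:443,tcp:9443 \
      --source-ranges=10.128.0.0/20 \
      --target-tags=asr-appliance
  ```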
- The replication appliance uses MySQL. Review the [options](migrate-replication-appliance.md#mysql-installation) for installing MySQL on the appliance.
- Review the Azure URLs required for the replication appliance to access [public](migrate-replication-appliance.md#url-access) and [government](migrate-replication-appliance.md#azure-government-url-access) clouds.

## Set up the replication appliance
-The first step of migration is to set up the replication appliance. To set up the appliance for GCP VMs migration, you must download the installer file for the appliance, and then run it on the [VM you prepared](#prepare-a-machine-for-the-replication-appliance).
+The first step of migration is to set up the replication appliance. To set up the appliance for GCP VMs migration, you must download the installer file for the appliance and then run it on the [VM you prepared](#prepare-a-machine-for-the-replication-appliance).
### Download the replication appliance installer
-1. In the Azure Migrate project > **Servers, databases and web apps**, in **Migration and modernization**, select **Discover**.
+1. In the Azure Migrate project, select **Servers, databases, and web apps** > **Migration and modernization** > **Discover**.
- ![Discover VMs](./media/tutorial-migrate-physical-virtual-machines/migrate-discover.png)
+ ![Screenshot that shows the Discover button.](./media/tutorial-migrate-physical-virtual-machines/migrate-discover.png)
-2. In **Discover machines** > **Are your machines virtualized?**, click **Not virtualized/Other**.
-3. In **Target region**, select the Azure region to which you want to migrate the machines.
-4. Select **Confirm that the target region for migration is \<region-name\>**.
-5. Click **Create resources**. This creates an Azure Site Recovery vault in the background.
- - If you've already set up migration with the Migration and modernization tool, the target option can't be configured, since resources were set up previously.
- - You can't change the target region for this project after clicking this button.
- - To migrate your VMs to a different region, you'll need to create a new/different Azure Migrate project.
+1. In **Discover machines** > **Are your machines virtualized?**, select **Not virtualized/Other**.
+1. In **Target region**, select the Azure region to which you want to migrate the machines.
+1. Select **Confirm that the target region for migration is \<region-name\>**.
+1. Select **Create resources**. This step creates an Azure Site Recovery vault in the background.
+ - If you already set up migration with the Migration and modernization tool, the target option can't be configured because the resources were set up previously.
+ - You can't change the target region for this project after you select this button.
+ - To migrate your VMs to a different region, you need to create a new or different Azure Migrate project.
> [!NOTE]
- > If you selected private endpoint as the connectivity method for the Azure Migrate project when it was created, the Recovery Services vault will also be configured for private endpoint connectivity. Ensure that the private endpoints are reachable from the replication appliance: [**Learn more**](troubleshoot-network-connectivity.md)
-
-6. In **Do you want to install a new replication appliance?**, select **Install a replication appliance**.
-7. In **Download and install the replication appliance software**, download the appliance installer, and the registration key. You need to the key in order to register the appliance. The key is valid for five days after it's downloaded.
-
- ![Download provider](media/tutorial-migrate-physical-virtual-machines/download-provider.png)
-
-8. Copy the appliance setup file and key file to the Windows Server 2016 or Windows Server 2012 GCP VM you created for the replication appliance.
-9. Run the replication appliance setup file, as described in the next procedure.
- 9.1. Under **Before You Begin**, select **Install the configuration server and process server**, and then select **Next**.
- 9.2 In **Third-Party Software License**, select **I accept the third-party license agreement**, and then select **Next**.
- 9.3 In **Registration**, select **Browse**, and then go to where you put the vault registration key file. Select **Next**.
- 9.4 In **Internet Settings**, select **Connect to Azure Site Recovery without a proxy server**, and then select **Next**.
- 9.5 The **Prerequisites Check** page runs checks for several items. When it's finished, select **Next**.
- 9.6 In **MySQL Configuration**, provide a password for the MySQL DB, and then select **Next**.
- 9.7 In **Environment Details**, select **No**. You don't need to protect your VMs. Then, select **Next**.
- 9.8 In **Install Location**, select **Next** to accept the default.
- 9.9 In **Network Selection**, select **Next** to accept the default.
- 9.10 In **Summary**, select **Install**.
- 9.11 **Installation Progress** shows you information about the installation process. When it's finished, select **Finish**. A window displays a message about a reboot. Select **OK**.
- 9.12 Next, a window displays a message about the configuration server connection passphrase. Copy the passphrase to your clipboard and save the passphrase in a temporary text file on the source VMs. YouΓÇÖll need this passphrase later, during the mobility service installation process.
-10. After the installation completes, the Appliance configuration wizard will be launched automatically (You can also launch the wizard manually by using the cspsconfigtool shortcut that is created on the desktop of the appliance). In this tutorial, we'll be manually installing the Mobility Service on source VMs to be replicated, so create a dummy account in this step and proceed. You can provide the following details for creating the dummy account - "guest" as the friendly name, "username" as the username, and "password" as the password for the account. You will be using this dummy account in the Enable Replication stage.
-
- ![Finalize registration](./media/tutorial-migrate-physical-virtual-machines/finalize-registration.png)
+ > If you selected private endpoint as the connectivity method for the Azure Migrate project when it was created, the Recovery Services vault is also configured for private endpoint connectivity. Ensure that the private endpoints are reachable from the replication appliance. [Learn more](troubleshoot-network-connectivity.md).
+
+1. In **Do you want to install a new replication appliance?**, select **Install a replication appliance**.
+1. In **Download and install the replication appliance software**, download the appliance installer and the registration key. You need the key to register the appliance. The key is valid for five days after download.
+
+ ![Screenshot that shows the Download button.](media/tutorial-migrate-physical-virtual-machines/download-provider.png)
+
+1. Copy the appliance setup file and key file to the Windows Server 2016 or Windows Server 2012 GCP VM you created for the replication appliance.
+1. Run the replication appliance setup file, as described in the next procedure.
+ 1. Under **Before You Begin**, select **Install the configuration server and process server** and then select **Next**.
+ 1. In **Third-Party Software License**, select **I accept the third-party license agreement** and then select **Next**.
+ 1. In **Registration**, select **Browse**, go to where you put the vault registration key file, and then select **Next**.
+ 1. In **Internet Settings**, select **Connect to Azure Site Recovery without a proxy server** and then select **Next**.
+ 1. The **Prerequisites Check** page runs checks for several items. After it's finished, select **Next**.
+ 1. In **MySQL Configuration**, enter a password for the MySQL database and then select **Next**.
+ 1. In **Environment Details**, select **No**. You don't need to protect your VMs. Then select **Next**.
+ 1. In **Install Location**, select **Next** to accept the default.
+ 1. In **Network Selection**, select **Next** to accept the default.
+ 1. In **Summary**, select **Install**.
+ 1. **Installation Progress** shows you information about the installation process. After it's finished, select **Finish**. A window displays a message about a reboot. Select **OK**.
+ 1. Next, a window displays a message about the configuration server connection passphrase. Copy the passphrase to your clipboard and save the passphrase in a temporary text file on the source VMs. You need this passphrase later during the Mobility service installation process.
+1. After the installation completes, the Appliance configuration wizard launches automatically. (You can also launch the wizard manually by using the `cspsconfigtool` shortcut that was created on the appliance desktop.) In this tutorial, we manually install the Mobility service on source VMs to be replicated. You need to create a dummy account in this step to proceed. For the dummy account, use "guest" as the friendly name, "username" as the username, and "password" as the password for the account. You use this dummy account in the Enable Replication stage.
+
+ ![Screenshot that shows Finalize registration.](./media/tutorial-migrate-physical-virtual-machines/finalize-registration.png)
## Install the Mobility service agent
-A Mobility service agent must be pre-installed on the source GCP VMs to be migrated before you can initiate replication. The approach you choose to install the Mobility service agent may depend on your organization's preferences and existing tools, but be aware that the "push" installation method built into Azure Site Recovery is not currently supported. Approaches you may want to consider:
+A Mobility service agent must be preinstalled on the source GCP VMs to be migrated before you can initiate replication. The approach you choose to install the Mobility service agent might depend on your organization's preferences and existing tools. The "push" installation method built into Azure Site Recovery isn't currently supported. Approaches you might want to consider:
- [System Center Configuration Manager](../site-recovery/vmware-azure-mobility-install-configuration-mgr.md)-- [Arc for Servers and Custom Script Extensions](../azure-arc/servers/overview.md)
+- [Azure Arc for servers and custom script extensions](../azure-arc/servers/overview.md)
- [Manual installation](../site-recovery/vmware-physical-mobility-service-overview.md)
-1. Extract the contents of the installer tarball to a local folder (for example /tmp/MobSvcInstaller) on the GCP VM, as follows:
+1. Extract the contents of the installer tarball to a local folder (for example, /tmp/MobSvcInstaller) on the GCP VM, as follows:
+   ```
+   mkdir /tmp/MobSvcInstaller
+   tar -C /tmp/MobSvcInstaller -xvf <Installer tarball>
+   cd /tmp/MobSvcInstaller
+   ```
-2. Run the installer script:
+1. Run the installer script:
+   ```
+   sudo ./install -r MS -v VmWare -q -c CSLegacy
+   ```
-3. Register the agent with the replication appliance:
+1. Register the agent with the replication appliance:
+   ```
+   /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i <replication appliance IP address> -P <Passphrase File Path>
+   ```
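   For example, with hypothetical values (an appliance at `10.128.0.10` and the passphrase you saved during appliance setup written to a temporary file), the registration step might look like the following sketch; replace both values with your own:

   ```
   # Hypothetical example: save the configuration server passphrase to a file,
   # then register the agent with the appliance at 10.128.0.10 (placeholder IP).
   echo '<passphrase-from-appliance-setup>' > /tmp/asr-passphrase.txt
   sudo /usr/local/ASR/Vx/bin/UnifiedAgentConfigurator.sh -i 10.128.0.10 -P /tmp/asr-passphrase.txt
   ```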
A Mobility service agent must be pre-installed on the source GCP VMs to be migra
> [!NOTE]
> Through the portal, you can add up to 10 VMs for replication at once. To replicate more VMs simultaneously, you can add them in batches of 10.
-1. In the Azure Migrate project > **Servers, databases and web apps** > **Migration and modernization**, select **Replicate**.
+1. In the Azure Migrate project, select **Servers, databases, and web apps** > **Migration and modernization** > **Replicate**.
- ![Replicate VMs](./media/tutorial-migrate-physical-virtual-machines/select-replicate.png)
+ ![Screenshot that shows selecting Replicate.](./media/tutorial-migrate-physical-virtual-machines/select-replicate.png)
-2. In **Replicate**, > **Source settings** > **Are your machines virtualized?**, select **Not virtualized/Other**.
-3. In **On-premises appliance**, select the name of the Azure Migrate appliance that you set up.
-4. In **Process Server**, select the name of the replication appliance.
-5. In **Guest credentials**, please select the dummy account created previously during the [replication installer setup](#download-the-replication-appliance-installer) to install the Mobility service manually (push install is not supported). Then click **Next: Virtual machines**.
+1. In **Replicate** > **Source settings** > **Are your machines virtualized?**, select **Not virtualized/Other**.
+1. In **On-premises appliance**, select the name of the Azure Migrate appliance that you set up.
+1. In **Process Server**, select the name of the replication appliance.
+1. In **Guest credentials**, select the dummy account created previously during the [replication installer setup](#download-the-replication-appliance-installer) to install the Mobility service manually. (Push installation isn't supported.) Then select **Next: Virtual machines**.
- ![Replicate Settings](./media/tutorial-migrate-physical-virtual-machines/source-settings.png)
-6. In **Virtual Machines**, in **Import migration settings from an assessment?**, leave the default setting **No, I'll specify the migration settings manually**.
-7. Check each VM you want to migrate. Then click **Next: Target settings**.
+ ![Screenshot that shows Replicate settings.](./media/tutorial-migrate-physical-virtual-machines/source-settings.png)
+1. In **Virtual machines**, in **Import migration settings from an assessment?**, leave the default setting **No, I'll specify the migration settings manually**.
+1. Check each VM you want to migrate. Then select **Next: Target settings**.
- :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/select-vms-inline.png" alt-text="Screenshot on selecting VMs." lightbox="./media/tutorial-migrate-physical-virtual-machines/select-vms-expanded.png":::
+ :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/select-vms-inline.png" alt-text="Screenshot that shows selecting VMs." lightbox="./media/tutorial-migrate-physical-virtual-machines/select-vms-expanded.png":::
-8. In **Target settings**, select the subscription, and target region to which you'll migrate, and specify the resource group in which the Azure VMs will reside after migration.
-9. In **Virtual Network**, select the Azure VNet/subnet to which the Azure VMs will be joined after migration.
-10. In **Cache storage account**, keep the default option to use the cache storage account that is automatically created for the project. Use the dropdown if you'd like to specify a different storage account to use as the cache storage account for replication. <br/>
+1. In **Target settings**, select the subscription and target region to which you'll migrate, and specify the resource group in which the Azure VMs will reside after migration.
+1. In **Virtual Network**, select the Azure virtual network/subnet to which the Azure VMs will be joined after migration.
+1. In **Cache storage account**, keep the default option to use the cache storage account that was automatically created for the project. Use the dropdown list if you want to specify a different storage account to use as the cache storage account for replication. <br/>
> [!NOTE]
>
- > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault)
- > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [**Learn more**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1)
-11. In **Availability options**, select:
- - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machine in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones
- - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
- - No infrastructure redundancy required option if you don't need either of these availability configurations for the migrated machines.
-12. In **Disk encryption type**, select:
- - Encryption-at-rest with platform-managed key
- - Encryption-at-rest with customer-managed key
- - Double encryption with platform-managed and customer-managed keys
+ > - If you selected private endpoint as the connectivity method for the Azure Migrate project, grant the Recovery Services vault access to the cache storage account. [Learn more](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault).
+ > - To replicate using ExpressRoute with private peering, create a private endpoint for the cache storage account. [Learn more](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1).
+1. In **Availability options**, select:
+ - **Availability Zone**: Pins the migrated machine to a specific availability zone in the region. Use this option to distribute servers that form a multinode application tier across availability zones. If you select this option, you need to specify the availability zone to use for each of the selected machines on the **Compute** tab. This option is only available if the target region selected for the migration supports availability zones.
+ - **Availability Set**: Place the migrated machine in an availability set. The target resource group that was selected must have one or more availability sets in order to use this option.
+ - **No infrastructure redundancy required**: Use this option if you don't need either of these availability configurations for the migrated machines.
+1. In **Disk encryption type**, select:
+
+ - Encryption-at-rest with platform-managed key.
+ - Encryption-at-rest with customer-managed key.
+ - Double encryption with platform-managed and customer-managed keys.
> [!NOTE]
- > To replicate VMs with CMK, you'll need to [create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.md#set-up-your-disk-encryption-set) under the target Resource Group. A disk encryption set object maps Managed Disks to a Key Vault that contains the CMK to use for SSE.
+ > To replicate VMs with customer-managed keys, you need to [create a disk encryption set](../virtual-machines/disks-enable-customer-managed-keys-portal.md#set-up-your-disk-encryption-set) under the target resource group. A disk encryption set object maps managed disks to an Azure Key Vault instance that contains the customer-managed key to use for server-side encryption.
-13. In **Azure Hybrid Benefit**:
+1. In **Azure Hybrid Benefit**:
- - Select **No** if you don't want to apply Azure Hybrid Benefit. Then click **Next**.
- - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then click **Next**.
+ - Select **No** if you don't want to apply Azure Hybrid Benefit. Then select **Next**.
+ - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions and you want to apply the benefit to the machines you're migrating. Then select **Next**.
- ![Target settings](./media/tutorial-migrate-vmware/target-settings.png)
+ ![Screenshot that shows Target settings.](./media/tutorial-migrate-vmware/target-settings.png)
-14. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-physical-migration.md#azure-vm-requirements).
+1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-physical-migration.md#azure-vm-requirements).
- - **VM size**: If you're using assessment recommendations, the VM size dropdown shows the recommended size. Otherwise Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
+ - **VM size**: If you're using assessment recommendations, the VM size dropdown list shows the recommended size. Otherwise, Azure Migrate and Modernize picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
- **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
- - **Availability Zone**: Specify the Availability Zone to use.
- - **Availability Set**: Specify the Availability Set to use.
+ - **Availability Zone**: Specify the availability zone to use.
+ - **Availability Set**: Specify the availability set to use.
-15. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then click **Next**.
+1. In **Disks**, specify whether the VM disks should be replicated to Azure, and select the disk type (standard SSD/HDD or premium managed disks) in Azure. Then select **Next**.
    - You can exclude disks from replication.
    - If you exclude disks, they won't be present on the Azure VM after migration.
- :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/disks-inline.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box." lightbox="./media/tutorial-migrate-physical-virtual-machines/disks-expanded.png":::
+ :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/disks-inline.png" alt-text="Screenshot that shows the Disks tab in the Replicate dialog." lightbox="./media/tutorial-migrate-physical-virtual-machines/disks-expanded.png":::
-1. In **Tags**, choose to add tags to your Virtual machines, Disks, and NICs.
+1. In **Tags**, choose to add tags to your VMs, disks, and NICs.
- :::image type="content" source="./media/tutorial-migrate-vmware/tags-inline.png" alt-text="Screenshot shows the tags tab of the Replicate dialog box." lightbox="./media/tutorial-migrate-vmware/tags-expanded.png":::
+ :::image type="content" source="./media/tutorial-migrate-vmware/tags-inline.png" alt-text="Screenshot that shows the Tags tab in the Replicate dialog." lightbox="./media/tutorial-migrate-vmware/tags-expanded.png":::
-16. In **Review and start replication**, review the settings, and click **Replicate** to start the initial replication for the servers.
+1. In **Review and start replication**, review the settings and select **Replicate** to start the initial replication for the servers.
> [!NOTE]
-> You can update replication settings any time before replication starts, **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
+> You can update replication settings any time before replication starts by selecting **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
## Track and monitor replication status

-- When you click **Replicate** a Start Replication job begins.
+- When you select **Replicate**, a Start Replication job begins.
- When the Start Replication job finishes successfully, the VMs begin their initial replication to Azure.
- After initial replication finishes, delta replication begins. Incremental changes to GCP VM disks are periodically replicated to the replica disks in Azure. You can track job status in the portal notifications.
-You can monitor replication status by clicking on **Replicating servers** in **Migration and modernization**.
+You can monitor replication status by selecting **Replicating servers** in **Migration and modernization**.
-![Monitor replication](./media/tutorial-migrate-physical-virtual-machines/replicating-servers.png)
+![Screenshot that shows the Replicating servers option.](./media/tutorial-migrate-physical-virtual-machines/replicating-servers.png)
## Run a test migration
-When delta replication begins, you can run a test migration for the VMs, before running a full migration to Azure. The test migration is highly recommended and provides an opportunity to discover any potential issues and fix them before you proceed with the actual migration. It is advised that you do this at least once for each VM before you migrate it.
+When delta replication begins, you can run a test migration for the VMs before you run a full migration to Azure. We highly recommend the test migration. It provides an opportunity to discover any potential issues and fix them before you proceed with the actual migration. We recommend that you do this step at least once for each VM before you migrate it.
-- Running a test migration checks that migration will work as expected, without impacting the GCP VMs, which remain operational, and continue replicating.-- Test migration simulates the migration by creating an Azure VM using replicated data (usually migrating to a non-production VNet in your Azure subscription).
+- Running a test migration checks that migration works as expected, without affecting the GCP VMs, which remain operational and continue replicating.
+- Test migration simulates the migration by creating an Azure VM using replicated data. (The test usually migrates to a nonproduction virtual network in your Azure subscription.)
- You can use the replicated test Azure VM to validate the migration, perform app testing, and address any issues before full migration.
-Do a test migration as follows:
+To do a test migration:
-1. In **Migration goals** > **Servers, databases and web apps** > **Migration and modernization**, select **Test migrated servers**.
+1. In **Migration goals**, select **Servers, databases, and web apps** > **Migration and modernization** > **Test migrated servers**.
- ![Test migrated servers](./media/tutorial-migrate-physical-virtual-machines/test-migrated-servers.png)
+ ![Screenshot that shows Test migrated servers.](./media/tutorial-migrate-physical-virtual-machines/test-migrated-servers.png)
-2. Right-click the VM to test, and click **Test migrate**.
+1. Right-click the VM to test and select **Test migrate**.
- :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/test-migrate-inline.png" alt-text="Screenshot showing the result after clicking test migration." lightbox="./media/tutorial-migrate-physical-virtual-machines/test-migrate-expanded.png":::
+ :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/test-migrate-inline.png" alt-text="Screenshot that shows the result after selecting test migration." lightbox="./media/tutorial-migrate-physical-virtual-machines/test-migrate-expanded.png":::
-3. In **Test Migration**, select the Azure VNet in which the Azure VM will be located after the migration. We recommend you use a non-production VNet.
-4. The **Test migration** job starts. Monitor the job in the portal notifications.
-5. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has a suffix **-Test**.
-6. After the test is done, right-click the Azure VM in **Replicating machines**, and click **Clean up test migration**.
+1. In **Test Migration**, select the Azure virtual network in which the Azure VM will be located after the migration. We recommend that you use a nonproduction virtual network.
+1. The Test Migration job starts. Monitor the job in the portal notifications.
+1. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has the suffix **-Test**.
+1. After the test is finished, right-click the Azure VM in **Replicating machines** and select **Clean up test migration**.
- :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/clean-up-inline.png" alt-text="Screenshot showing the result after the clean up of test migration." lightbox="./media/tutorial-migrate-physical-virtual-machines/clean-up-expanded.png":::
+ :::image type="content" source="./media/tutorial-migrate-physical-virtual-machines/clean-up-inline.png" alt-text="Screenshot that shows the result after the cleanup of test migration." lightbox="./media/tutorial-migrate-physical-virtual-machines/clean-up-expanded.png":::
> [!NOTE]
- > You can now register your servers running SQL server with SQL VM RP to take advantage of automated patching, automated backup and simplified license management using SQL IaaS Agent Extension.
+ > You can now register your servers running SQL Server with SQL VM RP to take advantage of automated patching, automated backup, and simplified license management by using the SQL IaaS Agent Extension.
>- Select **Manage** > **Replicating servers** > **Machine containing SQL server** > **Compute and Network** and select **yes** to register with SQL VM RP.
- >- Select Azure Hybrid benefit for SQL Server if you have SQL Server instances that are covered with active Software Assurance or SQL Server subscriptions and you want to apply the benefit to the machines you're migrating.hs.
+ >- Select **Azure Hybrid Benefit for SQL Server** if you have SQL Server instances that are covered with active Software Assurance or SQL Server subscriptions and you want to apply the benefit to the machines you're migrating.
## Migrate GCP VMs
-After you've verified that the test migration works as expected, you can migrate the GCP VMs.
+After you verify that the test migration works as expected, you can migrate the GCP VMs.
-1. In the Azure Migrate project > **Servers, databases and web apps** > **Migration and modernization**, click **Replicating servers**.
+1. In the Azure Migrate project, select **Servers, databases, and web apps** > **Migration and modernization** > **Replicating servers**.
- ![Replicating servers](./media/tutorial-migrate-physical-virtual-machines/replicate-servers.png)
+ ![Screenshot that shows Replicating servers.](./media/tutorial-migrate-physical-virtual-machines/replicate-servers.png)
-2. In **Replicating machines**, right-click the VM > **Migrate**.
-3. In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **Yes** > **OK**.
+1. In **Replicating machines**, right-click the VM and select **Migrate**.
+1. In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **Yes** > **OK**.
> [!NOTE]
- > Automatic shutdown is not supported while migrating GCP virtual machines.
+ > Automatic shutdown isn't supported while you migrate GCP VMs.
-4. A migration job starts for the VM. You can view the job status by clicking the notification bell icon on the top right of the portal page or by going to the jobs page of the Migration and modernization tool (Click Overview on the tool tile > Select Jobs from the left menu).
-5. After the job finishes, you can view and manage the VM from the Virtual Machines page.
+1. A migration job starts for the VM. You can view the job status by selecting the notification bell icon on the top right of the portal page or by going to the **Jobs** page of the Migration and modernization tool. (Select **Overview** on the tool tile and select **Jobs** from the left menu.)
+1. After the job finishes, you can view and manage the VM from the **Virtual Machines** page.
### Complete the migration
-1. After the migration is done, right-click the VM > **Stop migration**. This does the following:
+1. After the migration is finished, right-click the VM and select **Stop migration**. This action:
    - Stops replication for the GCP VM.
    - Removes the GCP VM from the **Replicating servers** count in the Migration and modernization tool.
    - Cleans up replication state information for the VM.
-1. Verify and [troubleshoot any Windows activation issues on the Azure VM.](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems)
+1. Verify and [troubleshoot any Windows activation issues on the Azure VM](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems).
1. Perform any post-migration app tweaks, such as updating host names, database connection strings, and web server configurations.
1. Perform final application and migration acceptance testing on the migrated application now running in Azure.
1. Cut over traffic to the migrated Azure VM instance.
1. Update any internal documentation to show the new location and IP address of the Azure VMs.

## Post-migration best practices

- For increased resilience:
- - Keep data secure by backing up Azure VMs using the Azure Backup service. [Learn more](../backup/quick-backup-vm-portal.md).
+ - Keep data secure by backing up Azure VMs by using Azure Backup. [Learn more](../backup/quick-backup-vm-portal.md).
  - Keep workloads running and continuously available by replicating Azure VMs to a secondary region with Site Recovery. [Learn more](../site-recovery/azure-to-azure-tutorial-enable-replication.md).
- For increased security:
- - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just in time administration](../security-center/security-center-just-in-time.md).
+ - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just-in-time administration](../security-center/security-center-just-in-time.md).
- Manage and govern updates on Windows and Linux machines with [Azure Update Manager](../update-manager/overview.md).
- - Restrict network traffic to management endpoints with [Network Security Groups](../virtual-network/network-security-groups-overview.md).
- - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks, and keep data safe from theft and unauthorized access.
- - Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/).
+ - Restrict network traffic to management endpoints with [network security groups](../virtual-network/network-security-groups-overview.md).
+ - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks and keep data safe from theft and unauthorized access.
+ - Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/) and [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/).
- For monitoring and management:
- - Consider deploying [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
--
+ - Consider deploying [Microsoft Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
-## Troubleshooting / Tips
+## Troubleshooting and tips
-**Question:** I cannot see my GCP VM in the discovered list of servers for migration.
-**Answer:** Check if your replication appliance meets the requirements. Make sure Mobility Agent is installed on the source VM to be migrated and is registered the Configuration Server. Check the firewall rules to enable a network path between the replication appliance and source GCP VMs.
+**Question:** I can't see my GCP VM in the discovered list of servers for migration.<br>
+**Answer:** Check if your replication appliance meets the requirements. Make sure Mobility Agent is installed on the source VM to be migrated and is registered to the Configuration Server. Check the firewall rules to enable a network path between the replication appliance and source GCP VMs.
-**Question:** How do I know if my VM was successfully migrated?
-**Answer:** Post-migration, you can view and manage the VM from the Virtual Machines page. Connect to the migrated VM to validate.
+**Question:** How do I know if my VM was successfully migrated?<br>
+**Answer:** Post migration, you can view and manage the VM from the **Virtual Machines** page. Connect to the migrated VM to validate.
-**Question:** I am unable to import VMs for migration from my previously created Server Assessment results.
-**Answer:** Currently, we do not support the import of assessment for this workflow. As a workaround, you can export the assessment and then manually select the VM recommendation during the Enable Replication step.
+**Question:** I'm unable to import VMs for migration from my previously created Server Assessment results.<br>
+**Answer:** Currently, we don't support the import of assessment for this workflow. As a workaround, you can export the assessment and then manually select the VM recommendation during the Enable Replication step.
-**Question:** I am getting the error ΓÇ£Failed to fetch BIOS GUIDΓÇ¥ while trying to discover my GCP VMs.
-**Answer:** Use root login for authentication and not any pseudo user. If you are not able to use a root user, ensure the required capabilities are set on the user, as per the instructions provided in the [support matrix](migrate-support-matrix-physical.md#physical-server-requirements). Also review supported operating systems for GCP VMs.
+**Question:** I'm getting the error "Failed to fetch BIOS GUID" when I try to discover my GCP VMs.<br>
+**Answer:** Use root login for authentication and not any pseudo user. If you aren't able to use a root user, ensure that the required capabilities are set on the user, according to the instructions provided in the [support matrix](migrate-support-matrix-physical.md#physical-server-requirements). Also review supported operating systems for GCP VMs.
-**Question:** My replication status is not progressing.
-**Answer:** Check if your replication appliance meets the requirements. Make sure youΓÇÖve enabled the required ports on your replication appliance TCP port 9443 and HTTPS 443 for data transport. Ensure that there are no stale duplicate versions of the replication appliance connected to the same project.
+**Question:** My replication status isn't progressing.<br>
+**Answer:** Check if your replication appliance meets the requirements. Make sure that you enabled the required ports on your replication appliance: TCP port 9443 and HTTPS 443 for data transport. Ensure that there are no stale duplicate versions of the replication appliance connected to the same project.
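One quick way to confirm that the ports are reachable from a source Linux VM is a TCP connectivity check against the appliance (a sketch; `<appliance-IP>` is a placeholder, and the `nc` utility might need to be installed first):

```
# Check that the replication appliance is reachable on the control and data transport ports.
nc -zv <appliance-IP> 443
nc -zv <appliance-IP> 9443
```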
-**Question:** I am unable to Discover GCP Instances using Azure Migrate due to HTTP status code of 504 from the remote Windows management service.
-**Answer:** Make sure to review the Azure migrate appliance requirements and URL access needs. Make sure no proxy settings are blocking the appliance registration.
+**Question:** I'm unable to discover GCP instances by using Azure Migrate and Modernize because of the HTTP status code of 504 from the remote Windows management service.<br>
+**Answer:** Make sure to review the Azure Migrate appliance requirements and URL access needs. Make sure no proxy settings are blocking the appliance registration.
-**Question:** Do I have to make any changes before I migrate my GCP VMs to Azure?
-**Answer:** You may have to make these changes before migrating your GCP VMs to Azure:
+**Question:** Do I have to make any changes before I migrate my GCP VMs to Azure?<br>
+**Answer:** You might have to make the following changes before you migrate your GCP VMs to Azure:
-- If you are using cloud-init for your VM provisioning, you may want to disable cloud-init on the VM before replicating it to Azure. The provisioning steps performed by cloud-init on the VM maybe GCP specific and won't be valid after the migration to Azure. ​-- Review the [prerequisites](#prerequisites) section to determine whether there are any changes necessary for the operating system before you migrate them to Azure.-- We always recommend you run a test migration before the final migration.
+- If you're using cloud-init for your VM provisioning, you might want to disable cloud-init on the VM before you replicate it to Azure. The provisioning steps performed by cloud-init on the VM might be specific to GCP and won't be valid after the migration to Azure. (A sketch for disabling cloud-init follows this list.)
+- Review the [Prerequisites](#prerequisites) section to determine whether there are any changes necessary for the operating system before you migrate them to Azure.
+- We always recommend that you run a test migration before the final migration.
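If you do decide to disable cloud-init before replication, one commonly used mechanism is the cloud-init disable marker file. This is a sketch only; confirm the approach against your distribution's cloud-init documentation, and remove the file after migration if you still need cloud-init:

```
# Creating this marker file prevents cloud-init from running on later boots.
sudo touch /etc/cloud/cloud-init.disabled
```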
## Next steps
-Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
+Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Cloud Adoption Framework for Azure.
migrate Tutorial Migrate Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-hyper-v.md
# Migrate Hyper-V VMs to Azure
-This article shows you how to migrate on-premises Hyper-V VMs to Azure with the [Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool) tool.
+This article shows you how to migrate on-premises Hyper-V virtual machines (VMs) to Azure with the [Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool) tool.
This tutorial is the third in a series that demonstrates how to assess and migrate machines to Azure.

> [!NOTE]
-> Tutorials show you the simplest deployment path for a scenario so that you can quickly set up a proof-of-concept. Tutorials use default options where possible, and don't show all possible settings and paths.
+> Tutorials show you the simplest deployment path for a scenario so that you can quickly set up a proof of concept. Tutorials use default options where possible and don't show all possible settings and paths.
In this tutorial, you learn how to:
This tutorial is the third in a series that demonstrates how to assess and migra
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.

## Prerequisites

Before you begin this tutorial, you should:

1. [Review](hyper-v-migration-architecture.md) the Hyper-V migration architecture.
-1. [Review](migrate-support-matrix-hyper-v-migration.md#hyper-v-host-requirements) Hyper-V host requirements for migration, and the Azure URLs to which Hyper-V hosts and clusters need access for VM migration.
+1. [Review](migrate-support-matrix-hyper-v-migration.md#hyper-v-host-requirements) Hyper-V host requirements for migration and the Azure URLs to which Hyper-V hosts and clusters need access for VM migration.
1. [Review](migrate-support-matrix-hyper-v-migration.md#hyper-v-vms) the requirements for Hyper-V VMs that you want to migrate to Azure.
-1. We recommend that you [assess Hyper-V VMs](tutorial-assess-hyper-v.md) before migrating them to Azure, but you don't have to.
-1. Go to the already created project or [create a new project.](./create-manage-projects.md)
-1. Verify permissions for your Azure account - Your Azure account needs permissions to create a VM, write to an Azure managed disk, and manage failover operations for the Recovery Services Vault associated with your Azure Migrate project.
+1. We recommend that you [assess Hyper-V VMs](tutorial-assess-hyper-v.md) before you migrate them to Azure, but you don't have to.
+1. Go to the already created project or [create a new project](./create-manage-projects.md).
+1. Verify permissions for your Azure account. Your Azure account needs permissions to create a VM, write to an Azure managed disk, and manage failover operations for the Recovery Services vault associated with your Azure Migrate project.
> [!NOTE]
-> If you're planning to upgrade your Windows operating system, Azure Migrate may download the Windows SetupDiag for error details in case upgrade fails. Ensure the VM created in Azure post the migration has access to [SetupDiag](https://go.microsoft.com/fwlink/?linkid=870142). In case there is no access to SetupDiag, you may not be able to get detailed OS upgrade failure error codes but the upgrade can still proceed.
-
+> If you're planning to upgrade your Windows operating system (OS), Azure Migrate and Modernize might download the Windows SetupDiag for error details in case upgrade fails. Ensure that the VM created in Azure after the migration has access to [SetupDiag](https://go.microsoft.com/fwlink/?linkid=870142). If there's no access to SetupDiag, you might not be able to get detailed OS upgrade failure error codes, but the upgrade can still proceed.
## Download the provider
-For migrating Hyper-V VMs, the Migration and modernization tool installs software providers (Microsoft Azure Site Recovery provider and Microsoft Azure Recovery Service agent) on Hyper-V Hosts or cluster nodes. Note that the [Azure Migrate appliance](migrate-appliance.md) isn't used for Hyper-V migration.
+For migrating Hyper-V VMs, the Migration and modernization tool installs software providers (Azure Site Recovery provider and Recovery Services agent) on Hyper-V hosts or cluster nodes. The [Azure Migrate appliance](migrate-appliance.md) isn't used for Hyper-V migration.
-1. In the Azure Migrate project > **Servers, databases and web apps** > **Migration and modernization**, select **Discover**.
+1. In the Azure Migrate project, select **Servers, databases, and web apps** > **Migration and modernization** > **Discover**.
1. In **Discover machines** > **Are your machines virtualized?**, select **Yes, with Hyper-V**.
1. In **Target region**, select the Azure region to which you want to migrate the machines.
1. Select **Confirm that the target region for migration is region-name**.
-1. Click **Create resources**. This creates a Recovery Services Vault in the background.
- - If you've already set up migration with the Migration and modernization tool, this option won't appear since resources were set up previously.
- - You can't change the target region for this project after clicking this button.
+1. Select **Create resources**. This step creates a Recovery Services vault in the background.
+ - If you already set up migration with the Migration and modernization tool, this option won't appear because resources were set up previously.
+ - You can't change the target region for this project after you select this button.
- All subsequent migrations are to this region.
-1. In **Prepare Hyper-V host servers**, download the Hyper-V Replication provider, and the registration key file.
+1. In **Prepare Hyper-V host servers**, download the Hyper-V Replication provider and the registration key file.
    - The registration key is needed to register the Hyper-V host with the Migration and modernization tool.
    - The key is valid for five days after you generate it.
- ![Screenshot of Download provider and key.](./media/tutorial-migrate-hyper-v/download-provider-hyper-v.png)
+ ![Screenshot that shows the Download provider and key.](./media/tutorial-migrate-hyper-v/download-provider-hyper-v.png)
-1. Copy the provider setup file and registration key file to each Hyper-V host (or cluster node) running VMs you want to replicate.
+1. Copy the provider setup file and registration key file to each Hyper-V host (or cluster node) running the VMs you want to replicate.
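For example, a minimal PowerShell sketch of this copy step; the host name, destination share, and registration key file name below are placeholders, not values from this article:

```powershell
# Copy the provider installer and the downloaded registration key to a Hyper-V host's admin share
$files = @(".\AzureSiteRecoveryProvider.exe", ".\MyMigrateProject.VaultCredentials")  # key file name is hypothetical
Copy-Item -Path $files -Destination "\\hyperv-host-01\c$\Temp"
```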
## Install and register the provider
-Copy the provider setup file and registration key file to each Hyper-V host (or cluster node) running VMs you want to replicate. To install and register the provider, follow the steps below using either the UI or commands:
+To install and register the provider, follow these steps by using either the UI or commands.
-# [Using UI](#tab/UI)
+# [Use UI](#tab/UI)
-Run the provider setup file on each host, as described below:
+Run the provider setup file on each host:
1. Select the file icon in the taskbar to open the folder where the installer file and registration key are downloaded.
-1. Select **AzureSiteRecoveryProvider.exe** file.
- - In the provider installation wizard, ensure **On (recommended)** is checked, and then select **Next**.
- - Select **Install** to accept the default installation folder.
- - Select **Register** to register this server in the Recovery Services Vault.
- - Select **Browse**.
- - Locate the registration key and select **Open**.
- - Select **Next**.
- - Ensure **Connect directly to Azure Site Recovery without a proxy server** is selected, and then select **Next**.
- - Select **Finish**.
-
+1. Select the *AzureSiteRecoveryProvider.exe* file.
+ 1. In the provider installation wizard, ensure that **On (recommended)** is selected and then select **Next**.
+ 1. Select **Install** to accept the default installation folder.
+ 1. Select **Register** to register this server in the Recovery Services vault.
+ 1. Select **Browse**.
+ 1. Locate the registration key and select **Open**.
+ 1. Select **Next**.
+ 1. Ensure that **Connect directly to Azure Site Recovery without a proxy server** is selected and then select **Next**.
+ 1. Select **Finish**.
-# [Using commands](#tab/commands)
+# [Use commands](#tab/commands)
-Run the following commands on each host, as described below:
+Run the following commands on each host:
-1. Extract the contents of installer file (AzureSiteRecoveryProvider.exe) to a local folder (for example .\Temp) on the machine, as follows:
+1. Extract the contents of the installer file (*AzureSiteRecoveryProvider.exe*) to a local folder (for example, *.\Temp*) on the machine, as follows:
```
AzureSiteRecoveryProvider.exe /q /x:.\Temp\Extracted
```
Run the following commands on each host, as described below:
```
cd .\Temp\Extracted
```
-1. Install the Hyper-V replication provider. The results are logged to %Programdata%\ASRLogs\DRASetupWizard.log.
+1. Install the Hyper-V replication provider. The results are logged to *%Programdata%\ASRLogs\DRASetupWizard.log*.
```
.\setupdr.exe /i
```
-1. Register the Hyper-V host to Azure Migrate.
+1. Register the Hyper-V host to **Azure Migrate**.
> [!NOTE]
- > If your Hyper-V host was previously registered with another Azure Migrate project that you are no longer using or have deleted, you'll need to de-register it from that project and register it in the new one. Follow the [Remove servers and disable protection](../site-recovery/site-recovery-manage-registration-and-protection.md?WT.mc_id=modinfra-39236-thmaure) guide to do so.
+ > If your Hyper-V host was previously registered with another Azure Migrate project that you're no longer using or have deleted, you need to deregister it from that project and register it in the new one. For more information, see [Remove servers and disable protection](../site-recovery/site-recovery-manage-registration-and-protection.md?WT.mc_id=modinfra-39236-thmaure).
```
"C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /r /Credentials <key file path>
```
- **Configure proxy rules:** If you need to connect to the internet via a proxy, use the optional parameters /proxyaddress and /proxyport parameters to specify the proxy address (in the form http://ProxyIPAddress) and proxy listening port. For authenticated proxy, you can use the optional parameters /proxyusername and /proxypassword.
+ - **Configure proxy rules:** If you need to connect to the internet via a proxy, use the optional parameters `/proxyaddress` and `/proxyport` to specify the proxy address (in the form `http://ProxyIPAddress`) and the proxy listening port. For an authenticated proxy, you can use the optional parameters `/proxyusername` and `/proxypassword`.
- ```
- "C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /r [/proxyaddress http://ProxyIPAddress] [/proxyport portnumber] [/proxyusername username] [/proxypassword password]
- ```
+ ```
+ "C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /r [/proxyaddress http://ProxyIPAddress] [/proxyport portnumber] [/proxyusername username] [/proxypassword password]
+ ```
- **Configure proxy bypass rules:** To configure proxy bypass rules, use the optional parameter /AddBypassUrls and provide bypass URL(s) for proxy separated by ';' and run the following commands:
-
- ```
- "C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /r [/proxyaddress http://ProxyIPAddress]ΓÇ»[/proxyport portnumber] [/proxyusername username] [/proxypassword password] [/AddBypassUrls URLs]
- ```
- and
- ```
- "C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /configure /AddBypassUrls URLs
- ```
+ - **Configure proxy bypass rules:** To configure proxy bypass rules, use the optional parameter `/AddBypassUrls`, provide the proxy bypass URLs separated by semicolons (;), and run the following commands:
+
+ ```
+ "C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /r [/proxyaddress http://ProxyIPAddress] [/proxyport portnumber] [/proxyusername username] [/proxypassword password] [/AddBypassUrls URLs]
+ ```
+ and
+ ```
+ "C:\Program Files\Microsoft Azure Site Recovery Provider\DRConfigurator.exe" /configure /AddBypassUrls URLs
+ ```
-After installing the provider on hosts, go to the Azure portal and in **Discover machines**, select **Finalize registration**.
+After you install the provider on hosts, go to the Azure portal and in **Discover machines**, select **Finalize registration**.
- ![Screenshot of the Finalize registration screen.](./media/tutorial-migrate-hyper-v/finalize-registration.png)
+![Screenshot that shows the Finalize registration screen.](./media/tutorial-migrate-hyper-v/finalize-registration.png)
-It can take up to 15 minutes after finalizing registration until discovered VMs appear in the Migration and modernization tile. As VMs are discovered, the **Discovered servers** count rises.
+It can take up to 15 minutes after finalizing registration until discovered VMs appear in the **Migration and modernization** tile. As VMs are discovered, the **Discovered servers** count rises.
## Replicate Hyper-V VMs
-With discovery completed, you can begin the replication of Hyper-V VMs to Azure.
+After discovery is finished, you can begin the replication of Hyper-V VMs to Azure.
> [!NOTE]
-> You can replicate up to 10 machines together. If you need to replicate more, then replicate them simultaneously in batches of 10.
+> You can replicate up to 10 machines together. If you need to replicate more, replicate them simultaneously in batches of 10.
-1. In the Azure Migrate project > **Servers, databases and web apps** > **Migration and modernization**, select **Replicate**.
+1. In the Azure Migrate project, select **Servers, databases, and web apps** > **Migration and modernization** > **Replicate**.
-1. In **Replicate** > **Source settings** > **Are your machines virtualized?**, select **Yes, with Hyper-V**. Then click **Next: Virtual machines**.
+1. In **Replicate** > **Source settings** > **Are your machines virtualized?**, select **Yes, with Hyper-V**. Then select **Next: Virtual machines**.
1. In **Virtual machines**, select the machines you want to replicate.
- - If you've run an assessment for the VMs, you can apply VM sizing and disk type (premium/standard) recommendations from the assessment results. To do this, in **Import migration settings from an Azure Migrate assessment?**, select the **Yes** option.
- - If you didn't run an assessment, or you don't want to use the assessment settings, select the **No** option.
- - If you selected to use the assessment, select the VM group, and assessment name.
+ - If you ran an assessment for the VMs, you can apply VM sizing and disk type (premium/standard) recommendations from the assessment results. To do this step, in **Import migration settings from an Azure Migrate assessment?**, select **Yes**.
+ - If you didn't run an assessment, or you don't want to use the assessment settings, select **No**.
+ - If you selected to use the assessment, select the VM group and assessment name.
- ![Screenshot of the Select assessment screen.](./media/tutorial-migrate-hyper-v/select-assessment.png)
+ ![Screenshot that shows the Select assessment screen.](./media/tutorial-migrate-hyper-v/select-assessment.png)
-1. In **Virtual machines**, search for VMs as needed, and check each VM you want to migrate. Then, select **Next: Target settings**.
+1. In **Virtual machines**, search for VMs as needed and check each VM you want to migrate. Then, select **Next: Target settings**.
- :::image type="content" source="./media/tutorial-migrate-hyper-v/select-vms-inline.png" alt-text="Screenshot shows the selected VMs in the Replicate dialog box." lightbox="./media/tutorial-migrate-hyper-v/select-vms-expanded.png":::
+ :::image type="content" source="./media/tutorial-migrate-hyper-v/select-vms-inline.png" alt-text="Screenshot that shows the selected VMs in the Replicate dialog." lightbox="./media/tutorial-migrate-hyper-v/select-vms-expanded.png":::
1. In **Target settings**, select the target region to which you'll migrate, the subscription, and the resource group in which the Azure VMs will reside after migration.
1. In **Replication Storage Account**, select the Azure Storage account in which replicated data will be stored in Azure.
-1. **Virtual Network**, select the Azure VNet/subnet to which the Azure VMs will be joined after migration.
+1. In **Virtual Network**, select the Azure virtual network/subnet to which the Azure VMs will be joined after migration.
1. In **Availability options**, select:
- - Availability Zone to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. If you select this option, you'll need to specify the Availability Zone to use for each of the selected machine in the Compute tab. This option is only available if the target region selected for the migration supports Availability Zones
- - Availability Set to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets in order to use this option.
- - No infrastructure redundancy required option if you don't need either of these availability configurations for the migrated machines.
+ - **Availability Zone**: Pins the migrated machine to a specific availability zone in the region. Use this option to distribute servers that form a multinode application tier across availability zones. If you select this option, you need to specify the availability zone to use for each of the selected machines on the **Compute** tab. This option is only available if the target region selected for the migration supports availability zones.
+ - **Availability Set**: Places the migrated machine in an availability set. The target resource group that was selected must have one or more availability sets to use this option.
+ - **No infrastructure redundancy required**: Use this option if you don't need either of these availability configurations for the migrated machines.
1. In **Azure Hybrid Benefit**:
- - Select **No** if you don't want to apply Azure Hybrid Benefit. Then, select **Next**.
- - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, and you want to apply the benefit to the machines you're migrating. Then select **Next**.
+ - Select **No** if you don't want to apply Azure Hybrid Benefit. Then select **Next**.
+ - Select **Yes** if you have Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions and you want to apply the benefit to the machines you're migrating. Then select **Next**.
- :::image type="content" source="./media/tutorial-migrate-hyper-v/target-settings.png" alt-text="Screenshot on target settings.":::
+ :::image type="content" source="./media/tutorial-migrate-hyper-v/target-settings.png" alt-text="Screenshot that shows Target settings.":::
1. In **Compute**, review the VM name, size, OS disk type, and availability configuration (if selected in the previous step). VMs must conform with [Azure requirements](migrate-support-matrix-hyper-v-migration.md#azure-vm-requirements).
- - **VM size**: If you're using assessment recommendations, the VM size dropdown will contain the recommended size. Otherwise Azure Migrate picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
+ - **VM size**: If you're using assessment recommendations, the VM size dropdown list contains the recommended size. Otherwise, Azure Migrate and Modernize picks a size based on the closest match in the Azure subscription. Alternatively, pick a manual size in **Azure VM size**.
    - **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer.
    - **Availability Set**: If the VM should be in an Azure availability set after migration, specify the set. The set must be in the target resource group you specify for the migration.
1. In **Disks**, specify the VM disks that need to be replicated to Azure. Then select **Next**.
    - You can exclude disks from replication.
- - If you exclude disks, won't be present on the Azure VM after migration.
+ - If you exclude disks, they won't be present on the Azure VM after migration.
- :::image type="content" source="./media/tutorial-migrate-hyper-v/disks-inline.png" alt-text="Screenshot shows the Disks tab of the Replicate dialog box." lightbox="./media/tutorial-migrate-hyper-v/disks-expanded.png":::
+ :::image type="content" source="./media/tutorial-migrate-hyper-v/disks-inline.png" alt-text="Screenshot that shows the Disks tab on the Replicate dialog." lightbox="./media/tutorial-migrate-hyper-v/disks-expanded.png":::
-1. In **Tags**, choose to add tags to your Virtual machines, Disks, and NICs.
+1. In **Tags**, choose to add tags to your VMs, disks, and NICs.
- :::image type="content" source="./media/tutorial-migrate-vmware/tags-inline.png" alt-text="Screenshot shows the tags tab of the Replicate dialog box." lightbox="./media/tutorial-migrate-vmware/tags-expanded.png":::
+ :::image type="content" source="./media/tutorial-migrate-vmware/tags-inline.png" alt-text="Screenshot that shows the Tags tab on the Replicate dialog." lightbox="./media/tutorial-migrate-vmware/tags-expanded.png":::
-1. In **Review and start replication**, review the settings, and select **Replicate** to start the initial replication for the servers.
+1. In **Review and start replication**, review the settings and select **Replicate** to start the initial replication for the servers.
> [!NOTE]
-> You can update replication settings any time before replication starts, in **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
+> You can update replication settings any time before replication starts in **Manage** > **Replicating machines**. Settings can't be changed after replication starts.
## Provision for the first time
-If this is the first VM you're replicating in the Azure Migrate project, the Migration and modernization tool automatically provisions these resources in same resource group as the project.
+If this is the first VM you're replicating in the Azure Migrate project, the Migration and modernization tool automatically provisions these resources in the same resource group as the project.
-- **Cache storage account**: The Azure Site Recovery provider software installed on Hyper-V hosts uploads replication data for the VMs configured for replication to a storage account (known as the cache storage account, or log storage account) in your subscription. The Azure Migrate service then copies the uploaded replication data from the storage account to the replica-managed disks corresponding to the VM. The cache storage account needs to be specified while configuring replication for a VM and The Azure Migrate portal automatically creates one for the Azure Migrate project when replication is configured for the first time in the project.
+- **Cache storage account**: The Site Recovery provider software installed on Hyper-V hosts uploads replication data for the VMs configured for replication to a storage account (known as the cache storage account or log storage account) in your subscription. Azure Migrate and Modernize then copies the uploaded replication data from the storage account to the replica-managed disks corresponding to the VM. The cache storage account needs to be specified while configuring replication for a VM. The Azure Migrate portal automatically creates one for the Azure Migrate project when replication is configured for the first time in the project.
## Track and monitor
If this is the first VM you're replicating in the Azure Migrate project, the Mig
You can track job status in the portal notifications.
-You can monitor replication status by clicking on **Replicating servers** in **Migration and modernization**.
-
-![Monitor replication](./media/tutorial-migrate-hyper-v/replicating-servers.png)
+You can monitor replication status by selecting **Replicating servers** in **Migration and modernization**.
+![Screenshot that shows Monitor replication.](./media/tutorial-migrate-hyper-v/replicating-servers.png)
## Run a test migration
+When delta replication begins, you can run a test migration for the VMs before you run a full migration to Azure. We highly recommend that you do this step at least once for each machine before you migrate it.
-When delta replication begins, you can run a test migration for the VMs, before running a full migration to Azure. We highly recommend that you do this at least once for each machine, before you migrate it.
--- Running a test migration checks that migration will work as expected, without impacting the on-premises machines, which remain operational, and continue replicating.-- Test migration simulates the migration by creating an Azure VM using replicated data (usually migrating to a non-production Azure VNet in your Azure subscription).
+- Running a test migration checks that migration works as expected, without affecting the on-premises machines, which remain operational and continue replicating.
+- Test migration simulates the migration by creating an Azure VM by using replicated data. (The test usually migrates to a nonproduction Azure virtual network in your Azure subscription.)
- You can use the replicated test Azure VM to validate the migration, perform app testing, and address any issues before full migration.
-Do a test migration as follows:
-
+To do a test migration:
-1. In **Migration goals** > **Servers, databases and web apps** > **Migration and modernization**, select **Test migrated servers**.
+1. In **Migration goals**, select **Servers, databases, and web apps** > **Migration and modernization** > **Test migrated servers**.
- ![Screenshot of Test migrated servers in Migration and modernization tile.](./media/tutorial-migrate-hyper-v/test-migrated-servers.png)
+ ![Screenshot that shows Test migrated servers in Migration and modernization.](./media/tutorial-migrate-hyper-v/test-migrated-servers.png)
-1. Right-click the VM to test, and select **Test migrate**.
+1. Right-click the VM to test and select **Test migrate**.
- ![Screenshot of Test migration screen.](./media/tutorial-migrate-hyper-v/test-migrate.png)
+ ![Screenshot that shows the Test migration screen.](./media/tutorial-migrate-hyper-v/test-migrate.png)
-1. In **Test Migration**, select the Azure virtual network in which the Azure VM will be located after the migration. We recommend you use a non-production virtual network.
-1. You have an option to upgrade the Windows Server OS during test migration. For Hyper-V VMs, automatic detection of OS is not yet supported. To upgrade, select the **Check for upgrade** option. In the pane that appears, select the current OS version and the target version that you want to upgrade to. If the target version is available, it is processed accordingly. [Learn more](how-to-upgrade-windows.md).
-1. The **Test migration** job starts. Monitor the job in the portal notifications.
-1. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has a suffix **-Test**.
-1. After the test is done, right-click the Azure VM in **Replications**, and select **Clean up test migration**.
+1. In **Test Migration**, select the Azure virtual network in which the Azure VM will be located after the migration. We recommend that you use a nonproduction virtual network.
+1. You can upgrade the Windows Server OS during test migration. For Hyper-V VMs, automatic detection of an OS isn't yet supported. To upgrade, select the **Check for upgrade** option. In the pane that appears, select the current OS version and the target version to which you want to upgrade. If the target version is available, it's processed accordingly. [Learn more](how-to-upgrade-windows.md).
+1. The Test Migration job starts. Monitor the job in the portal notifications.
+1. After the migration finishes, view the migrated Azure VM in **Virtual Machines** in the Azure portal. The machine name has the suffix **-Test**.
+1. After the test is finished, right-click the Azure VM in **Replications** and select **Clean up test migration**.
- ![Screenshot of Clean up migration option.](./media/tutorial-migrate-hyper-v/clean-up.png)
+ ![Screenshot that shows the Clean up migration option.](./media/tutorial-migrate-hyper-v/clean-up.png)
> [!NOTE]
- > You can now register your servers running SQL server with SQL VM RP to take advantage of automated patching, automated backup and simplified license management using SQL IaaS Agent Extension.
+ > You can now register your servers running SQL Server with SQL VM RP to take advantage of automated patching, automated backup, and simplified license management by using the SQL IaaS Agent Extension.
>- Select **Manage** > **Replications** > **Machine containing SQL Server** > **Compute and Network** and select **Yes** to register with SQL VM RP.
- >- Select Azure Hybrid benefit for SQL Server if you have SQL Server instances that are covered with active Software Assurance or SQL Server subscriptions and you want to apply the benefit to the machines you're migrating.hs.
+ >- Select **Azure Hybrid Benefit for SQL Server** if you have SQL Server instances that are covered with active Software Assurance or SQL Server subscriptions and you want to apply the benefit to the machines you're migrating.
## Migrate VMs
-After you've verified that the test migration works as expected, you can migrate the on-premises machines.
+After you verify that the test migration works as expected, you can migrate the on-premises machines.
-1. In the Azure Migrate project > **Servers, databases and web apps** > **Migration and modernization**, select **Replicating servers**.
+1. In the Azure Migrate project, select **Servers, databases, and web apps** > **Migration and modernization** > **Replicating servers**.
- ![Replicating servers](./media/tutorial-migrate-hyper-v/replicate-servers.png)
+ ![Screenshot that shows Replicating servers.](./media/tutorial-migrate-hyper-v/replicate-servers.png)
-1. In **Replicating machines**, right-click the VM > **Migrate**.
+1. In **Replicating machines**, right-click the VM and select **Migrate**.
1. In **Migrate** > **Shut down virtual machines and perform a planned migration with no data loss**, select **Yes** > **OK**.
- - By default Azure Migrate shuts down the on-premises VM, and runs an on-demand replication to synchronize any VM changes that occurred since the last replication occurred. This ensures no data loss.
+ - By default, Azure Migrate and Modernize shuts down the on-premises VM and runs an on-demand replication to synchronize any VM changes that occurred since the last replication occurred. This action ensures no data loss.
- If you don't want to shut down the VM, select **No**.
-1. You have an option to upgrade the Windows Server OS during migration. For Hyper-V VMs, automatic detection of OS is not yet supported. To upgrade, select the **Check for upgrade** option. In the pane that appears, select the current OS version and the target version that you want to upgrade to. If the target version is available, it is processed accordingly. [Learn more](how-to-upgrade-windows.md).
+1. You can upgrade the Windows Server OS during migration. For Hyper-V VMs, automatic detection of OS isn't yet supported. To upgrade, select the **Check for upgrade** option. In the pane that appears, select the current OS version and the target version to which you want to upgrade. If the target version is available, it's processed accordingly. [Learn more](how-to-upgrade-windows.md).
1. A migration job starts for the VM. Track the job in Azure notifications.
1. After the job finishes, you can view and manage the VM from the **Virtual Machines** page.

## Complete the migration
-1. After the migration is done, right-click the VM > **Stop replication**. This does the following:
+1. After the migration is finished, right-click the VM and select **Stop replication**. This action:
    - Stops replication for the on-premises machine.
    - Removes the machine from the **Replicating servers** count in the Migration and modernization tool.
    - Cleans up replication state information for the VM.
-1. Verify and [troubleshoot any Windows activation issues on the Azure VM.](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems)
+1. Verify and [troubleshoot any Windows activation issues on the Azure VM](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems).
1. Perform any post-migration app tweaks, such as updating host names, database connection strings, and web server configurations.
1. Perform final application and migration acceptance testing on the migrated application now running in Azure.
1. Cut over traffic to the migrated Azure VM instance.
After you've verified that the test migration works as expected, you can migrate
## Post-migration best practices

- For increased resilience:
- - Keep data secure by backing up Azure VMs using the Azure Backup service. [Learn more](../backup/quick-backup-vm-portal.md).
+ - Keep data secure by backing up Azure VMs by using Azure Backup. [Learn more](../backup/quick-backup-vm-portal.md).
    - Keep workloads running and continuously available by replicating Azure VMs to a secondary region with Site Recovery. [Learn more](../site-recovery/azure-to-azure-tutorial-enable-replication.md).
- For increased security:
- - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just in time administration](../security-center/security-center-just-in-time.md).
+ - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just-in-time administration](../security-center/security-center-just-in-time.md).
- Manage and govern updates on Windows and Linux machines with [Azure Update Manager](../update-manager/overview.md).
- - Restrict network traffic to management endpoints with [Network Security Groups](../virtual-network/network-security-groups-overview.md).
- - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks, and keep data safe from theft and unauthorized access.
- - Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/).
+ - Restrict network traffic to management endpoints with [network security groups](../virtual-network/network-security-groups-overview.md).
+ - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks and keep data safe from theft and unauthorized access.
+ - Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/) and [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/).
- For monitoring and management:
-- Consider deploying [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
+- Consider deploying [Microsoft Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
## Next steps
-Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
+Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Cloud Adoption Framework for Azure.
migrate Tutorial Migrate Vmware Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md
Title: Migrate VMware VMs to Azure (agentless) - PowerShell
-description: Learn how to run an agentless migration of VMware VMs with Azure Migrate through PowerShell.
+description: Learn how to run an agentless migration of VMware VMs with Azure Migrate and Modernize through PowerShell.
ms.
# Migrate VMware VMs to Azure (agentless) - PowerShell
-In this article, you'll learn how to migrate discovered VMware VMs with the agentless method using Azure PowerShell for [Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool).
+In this article, you learn how to migrate discovered VMware virtual machines (VMs) with the agentless method by using Azure PowerShell for [Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool).
You learn how to:
> * Run a full VM migration.

> [!NOTE]
-> Tutorials show you the simplest deployment path for a scenario so that you can quickly set up a proof-of-concept. Tutorials use default options where possible and don't show all possible settings and paths.
+> Tutorials show you the simplest deployment path for a scenario so that you can quickly set up a proof of concept. Tutorials use default options where possible and don't show all possible settings and paths.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
-## 1. Prerequisites
+## Prerequisites
Before you begin this tutorial, you should:
-1. Complete the [Tutorial: Discover VMware VMs with Server Assessment](tutorial-discover-vmware.md) to prepare Azure and VMware for migration.
-2. Complete the [Tutorial: Assess VMware VMs for migration to Azure VMs](./tutorial-assess-vmware-azure-vm.md) before migrating them to Azure.
-3. [Install the Az PowerShell module](/powershell/azure/install-azure-powershell)
+- Complete the [Tutorial: Discover VMware VMs with Server Assessment](tutorial-discover-vmware.md) to prepare Azure and VMware for migration.
+- Complete the [Tutorial: Assess VMware VMs for migration to Azure VMs](./tutorial-assess-vmware-azure-vm.md) before you migrate them to Azure.
+- [Install the Az PowerShell module](/powershell/azure/install-azure-powershell).
-## 2. Install Azure Migrate PowerShell module
+## Install the Azure Migrate PowerShell module
-Azure Migrate PowerShell module is available as part of Azure PowerShell (`Az`). Run the `Get-InstalledModule -Name Az.Migrate` command to check if the Azure Migrate PowerShell module is installed on your machine.
+The Azure Migrate PowerShell module is available as part of Azure PowerShell (`Az`). Run the `Get-InstalledModule -Name Az.Migrate` command to check if the Azure Migrate PowerShell module is installed on your machine.
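If the module isn't present, you can install it from the PowerShell Gallery. A minimal sketch, assuming PowerShellGet is available and the machine has internet access:

```azurepowershell-interactive
# Check whether the Azure Migrate module is already installed
Get-InstalledModule -Name Az.Migrate -ErrorAction SilentlyContinue

# If it isn't, install it for the current user from the PowerShell Gallery
Install-Module -Name Az.Migrate -Scope CurrentUser -Repository PSGallery -Force
```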
-## 3. Sign in to your Microsoft Azure subscription
+## Sign in to your Azure subscription
Sign in to your Azure subscription with the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
```azurepowershell-interactive
Connect-AzAccount
```
### Select your Azure subscription
-Use the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) cmdlet to get the list of Azure subscriptions you have access to. Select the Azure subscription that has your Azure Migrate project to work with using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+Use the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) cmdlet to get the list of Azure subscriptions you have access to. Select the Azure subscription that has your Azure Migrate project to work with by using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
```azurepowershell-interactive
Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
```
-## 4. Retrieve the Azure Migrate project
+## Retrieve the Azure Migrate project
-An Azure Migrate project is used to store discovery, assessment, and migration metadata collected from the environment you're assessing or migrating.
-In a project you can track discovered assets, orchestrate assessments, and perform migrations.
+An Azure Migrate project is used to store discovery, assessment, and migration metadata collected from the environment you're assessing or migrating. In a project, you can track discovered assets, orchestrate assessments, and perform migrations.
-As part of prerequisites, you would have already created an Azure Migrate project. Use the [Get-AzMigrateProject](/powershell/module/az.migrate/get-azmigrateproject) cmdlet to retrieve details of an Azure Migrate project. You'll need to specify the name of the Azure Migrate project (`Name`) and the name of the resource group of the Azure Migrate project (`ResourceGroupName`).
+As part of the prerequisites, you already created an Azure Migrate project. Use the [Get-AzMigrateProject](/powershell/module/az.migrate/get-azmigrateproject) cmdlet to retrieve details of an Azure Migrate project. You need to specify the name of the Azure Migrate project (`Name`) and the name of the resource group of the Azure Migrate project (`ResourceGroupName`).
```azurepowershell-interactive
# Get resource group of the Azure Migrate project
$MigrateProject = Get-AzMigrateProject -Name MyMigrateProject -ResourceGroupName
Write-Output $MigrateProject
```
-## 5. Retrieve discovered VMs in an Azure Migrate project
+## Retrieve discovered VMs in an Azure Migrate project
-Azure Migrate uses a lightweight [Azure Migrate appliance](migrate-appliance-architecture.md). As part of the prerequisites, you would have deployed the Azure Migrate appliance as a VMware VM.
-
-To retrieve a specific VMware VM in an Azure Migrate project, specify name of the Azure Migrate project (`ProjectName`), resource group of the Azure Migrate project (`ResourceGroupName`), and the VM name (`DisplayName`).
+Azure Migrate and Modernize uses a lightweight [Azure Migrate appliance](migrate-appliance-architecture.md). As part of the prerequisites, you deployed the Azure Migrate appliance as a VMware VM.
+To retrieve a specific VMware VM in an Azure Migrate project, specify the name of the Azure Migrate project (`ProjectName`), the resource group of the Azure Migrate project (`ResourceGroupName`), and the VM name (`DisplayName`).
```azurepowershell-interactive
# Get a specific VMware VM in an Azure Migrate project
$DiscoveredServer = Get-AzMigrateDiscoveredServer -ProjectName $MigrateProject.N
Write-Output $DiscoveredServer
```
-We'll migrate this VM as part of this tutorial.
+We migrate this VM as part of this tutorial.
-You can also retrieve all VMware VMs in an Azure Migrate project by using the (`ProjectName`) and (`ResourceGroupName`) parameters.
+You can also retrieve all VMware VMs in an Azure Migrate project by using the `ProjectName` and `ResourceGroupName` parameters.
```azurepowershell-interactive
# Get all VMware VMs in an Azure Migrate project
$DiscoveredServers = Get-AzMigrateDiscoveredServer -ProjectName $MigrateProject.Name -ResourceGroupName $ResourceGroup.ResourceGroupName
```
-If you have multiple appliances in an Azure Migrate project, you can use (`ProjectName`), (`ResourceGroupName`), and (`ApplianceName`) parameters to retrieve all VMs discovered using a specific Azure Migrate appliance.
+If you have multiple appliances in an Azure Migrate project, you can use the `ProjectName`, `ResourceGroupName`, and `ApplianceName` parameters to retrieve all VMs discovered by using a specific Azure Migrate appliance.
```azurepowershell-interactive
# Get all VMware VMs discovered by an Azure Migrate Appliance in an Azure Migrate project
$DiscoveredServers = Get-AzMigrateDiscoveredServer -ProjectName $MigrateProject.
```
-## 6. Initialize replication infrastructure
+## Initialize replication infrastructure
-[Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool) leverages multiple Azure resources for migrating VMs. Migration and modernization provisions the following resources, in the same resource group as the project.
+[Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool) uses multiple Azure resources for migrating VMs. Migration and modernization provisions the following resources in the same resource group as the project.
- **Service bus**: Migration and modernization uses the service bus to send replication orchestration messages to the appliance.
- **Gateway storage account**: Migration and modernization uses the gateway storage account to store state information about the VMs being replicated.
- **Log storage account**: The Azure Migrate appliance uploads replication logs for VMs to a log storage account. Azure Migrate applies the replication information to the replica-managed disks.
-- **Key vault**: The Azure Migrate appliance uses the key vault to manage connection strings for the service bus, and access keys for the storage accounts used in replication.
+- **Key vault**: The Azure Migrate appliance uses the key vault to manage connection strings for the service bus and access keys for the storage accounts used in replication.
-Before replicating the first VM in the Azure Migrate project, run the following command to provision the replication infrastructure. This command provisions and configures the aforementioned resources so that you can start migrating your VMware VMs.
+Before you replicate the first VM in the Azure Migrate project, run the following command to provision the replication infrastructure. This command provisions and configures the preceding resources so that you can start migrating your VMware VMs.
> [!NOTE]
-> One Azure Migrate project supports migrations to one Azure region only. Once you run this script, you can't change the target region to which you want to migrate your VMware VMs.
-> You'll need to run the `Initialize-AzMigrateReplicationInfrastructure` command if you configure a new appliance in your Azure Migrate project.
+> One Azure Migrate project supports migrations to one Azure region only. After you run this script, you can't change the target region to which you want to migrate your VMware VMs.
+> You need to run the `Initialize-AzMigrateReplicationInfrastructure` command if you configure a new appliance in your Azure Migrate project.
-In the article, we'll initialize the replication infrastructure so that we can migrate our VMs to `Central US` region.
+In this article, we initialize the replication infrastructure so that we can migrate our VMs to the `Central US` region.
```azurepowershell-interactive
# Initialize replication infrastructure for the current Migrate project
Initialize-AzMigrateReplicationInfrastructure -ResourceGroupName $ResourceGroup.
```
-## 7. Replicate VMs
+## Replicate VMs
-After completing discovery and initializing replication infrastructure, you can begin replication of VMware VMs to Azure. You can run up to 500 replications simultaneously.
+After you finish discovery and initialize the replication infrastructure, you can begin replication of VMware VMs to Azure. You can run up to 500 replications simultaneously.
-You can specify the replication properties as follows.
+To specify the replication properties, use the following table.
-**Parameter** | **Type** | **Description**
+Parameter | Type | Description
--- | --- | ---
- Target subscription and resource group | Mandatory | Specify the subscription and resource group that the VM should be migrated to by providing the resource group ID using the (`TargetResourceGroupId`) parameter.
- Target virtual network and subnet | Mandatory | Specify the ID of the Azure Virtual Network and the name of the subnet that the VM should be migrated to by using the (`TargetNetworkId`) and (`TargetSubnetName`) parameters respectively.
- Machine ID | Mandatory | Specify the ID of the discovered machine that needs to be replicated and migrated. Use (`InputObject`) to specify the discovered VM object for replication.
- Target VM name | Mandatory | Specify the name of the Azure VM to be created by using the (`TargetVMName`) parameter.
- Target VM size | Mandatory | Specify the Azure VM size to be used for the replicating VM by using (`TargetVMSize`) parameter. For instance, to migrate a VM to D2_v2 VM in Azure, specify the value for (`TargetVMSize`) as "Standard_D2_v2".
- License | Mandatory | To use Azure Hybrid Benefit for your Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, specify the value for (`LicenseType`) parameter as **WindowsServer**. Otherwise, specify the value as **NoLicenseType**.
- OS Disk | Mandatory | Specify the unique identifier of the disk that has the operating system bootloader and installer. The disk ID to be used is the unique identifier (UUID) property for the disk retrieved using the [Get-AzMigrateDiscoveredServer](/powershell/module/az.migrate/get-azmigratediscoveredserver) cmdlet.
- Disk Type | Mandatory | Specify the type of disk to be used.
- Infrastructure redundancy | Optional | Specify infrastructure redundancy option as follows. <br/><br/> - **Availability Zone** to pin the migrated machine to a specific Availability Zone in the region. Use this option to distribute servers that form a multi-node application tier across Availability Zones. This option is only available if the target region selected for the migration supports Availability Zones. To use availability zones, specify the availability zone value for (`TargetAvailabilityZone`) parameter. <br/> - **Availability Set** to place the migrated machine in an Availability Set. The target Resource Group that was selected must have one or more availability sets to use this option. To use availability set, specify the availability set ID for (`TargetAvailabilitySet`) parameter.
- Boot Diagnostic Storage Account | Optional | To use a boot diagnostic storage account, specify the ID for (`TargetBootDiagnosticStorageAccount`) parameter. <br/> - The storage account used for boot diagnostics should be in the same subscription that you're migrating your VMs to. <br/> - By default, no value is set for this parameter.
- Tags | Optional | Add tags to your migrated virtual machines, disks, and NICs. <br/> Use (`Tag`) to add tags to virtual machines, disks, and NICs. <br/> or <br/> Use (`VMTag`) for adding tags to your migrated virtual machines.<br/> Use (`DiskTag`) for adding tags to disks. <br/> Use (`NicTag`) for adding tags to network interfaces. <br/> For example, add the required tags to a variable $tags and pass the variable in the required parameter. $tags = @{Organization=ΓÇ¥ContosoΓÇ¥}
--
+ Target subscription and resource group | Mandatory | Specify the subscription and resource group that the VM should be migrated to by providing the resource group ID by using the `TargetResourceGroupId` parameter.
+ Target virtual network and subnet | Mandatory | Specify the ID of the Azure Virtual Network instance and the name of the subnet to which the VM should be migrated by using the `TargetNetworkId` and `TargetSubnetName` parameters, respectively.
+ Machine ID | Mandatory | Specify the ID of the discovered machine that needs to be replicated and migrated. Use `InputObject` to specify the discovered VM object for replication.
+ Target VM name | Mandatory | Specify the name of the Azure VM to be created by using the `TargetVMName` parameter.
+ Target VM size | Mandatory | Specify the Azure VM size to be used for the replicating VM by using the `TargetVMSize` parameter. For instance, to migrate a VM to D2_v2 VM in Azure, specify the value for `TargetVMSize` as `Standard_D2_v2`.
+ License | Mandatory | To use Azure Hybrid Benefit for your Windows Server machines that are covered with active Software Assurance or Windows Server subscriptions, specify the value for the `LicenseType` parameter as **WindowsServer**. Otherwise, specify the value as **NoLicenseType**.
+ OS disk | Mandatory | Specify the unique identifier of the disk that has the operating system bootloader and installer. The disk ID to be used is the unique identifier (UUID) property for the disk retrieved by using the [Get-AzMigrateDiscoveredServer](/powershell/module/az.migrate/get-azmigratediscoveredserver) cmdlet.
+ Disk type | Mandatory | Specify the type of disk to be used.
+ Infrastructure redundancy | Optional | Specify the infrastructure redundancy option as follows:<br/><br/> - **Availability zone**: Pins the migrated machine to a specific availability zone in the region. Use this option to distribute servers that form a multinode application tier across availability zones. This option is only available if the target region selected for the migration supports availability zones. To use availability zones, specify the availability zone value for the `TargetAvailabilityZone` parameter. <br/> - **Availability set**: Places the migrated machine in an availability set. The target resource group that was selected must have one or more availability sets to use this option. To use an availability set, specify the availability set ID for the `TargetAvailabilitySet` parameter.
+ Boot Diagnostic Storage Account | Optional | To use a boot diagnostic storage account, specify the ID for the `TargetBootDiagnosticStorageAccount` parameter. <br/> - The storage account used for boot diagnostics should be in the same subscription to which you're migrating your VMs. <br/> - By default, no value is set for this parameter.
 Tags | Optional | Add tags to your migrated VMs, disks, and NICs. <br/> Use `Tag` to add tags to VMs, disks, and NICs or: <br/>- Use `VMTag` to add tags to your migrated VMs.<br/> - Use `DiskTag` to add tags to disks. <br/> - Use `NicTag` to add tags to network interfaces. <br/> For example, add the required tags to the variable `$tags` and pass the variable in the required parameter: `$tags = @{Organization="Contoso"}`.
### Replicate VMs with all disks
-In this tutorial, we'll replicate all the disks of the discovered VM and specify a new name for the VM in Azure. We specify the first disk of the discovered server as OS Disk and migrate all disks as Standard HDD. The OS disk is the disk that has the operating system bootloader and installer. The cmdlet returns a job that can be tracked for monitoring the status of the operation.
+In this tutorial, we replicate all the disks of the discovered VM and specify a new name for the VM in Azure. We specify the first disk of the discovered server as **OS Disk** and migrate all disks as **Standard HDD**. The OS disk is the disk that has the operating system bootloader and installer. The cmdlet returns a job that can be tracked for monitoring the status of the operation.
```azurepowershell-interactive # Retrieve the resource group that you want to migrate to
Write-Output $MigrateJob.State
### Replicate VMs with select disks
-You can also selectively replicate the disks of the discovered VM by using [New-AzMigrateDiskMapping](/powershell/module/az.migrate/new-azmigratediskmapping) cmdlet and providing that as an input to the (`DiskToInclude`) parameter in the [New-AzMigrateServerReplication](/powershell/module/az.migrate/new-azmigrateserverreplication) cmdlet. You can also use [New-AzMigrateDiskMapping](/powershell/module/az.migrate/new-azmigratediskmapping) cmdlet to specify different target disk types for each individual disk to be replicated.
+You can also selectively replicate the disks of the discovered VM by using the [New-AzMigrateDiskMapping](/powershell/module/az.migrate/new-azmigratediskmapping) cmdlet and providing that as an input to the `DiskToInclude` parameter in the [New-AzMigrateServerReplication](/powershell/module/az.migrate/new-azmigrateserverreplication) cmdlet. You can also use the [New-AzMigrateDiskMapping](/powershell/module/az.migrate/new-azmigratediskmapping) cmdlet to specify different target disk types for each individual disk to be replicated.
-Specify values for the following parameters of the [New-AzMigrateDiskMapping](/powershell/module/az.migrate/new-azmigratediskmapping) cmdlet.
+Specify values for the following parameters of the [New-AzMigrateDiskMapping](/powershell/module/az.migrate/new-azmigratediskmapping) cmdlet:
-- **DiskId** - Specify the unique identifier for the disk to be migrated. The disk ID to be used is the unique identifier (UUID) property for the disk retrieved using the [Get-AzMigrateDiscoveredServer](/powershell/module/az.migrate/get-azmigratediscoveredserver) cmdlet.-- **IsOSDisk** - Specify "true" if the disk to be migrated is the OS disk of the VM, else "false".-- **DiskType** - Specify the type of disk to be used in Azure.
+- **DiskId**: Specify the unique identifier for the disk to be migrated. The disk ID to be used is the UUID property for the disk retrieved by using the [Get-AzMigrateDiscoveredServer](/powershell/module/az.migrate/get-azmigratediscoveredserver) cmdlet.
+- **IsOSDisk**: Specify `true` if the disk to be migrated is the OS disk of the VM. Otherwise, specify `false`.
+- **DiskType**: Specify the type of disk to be used in Azure.
-In the following example, we'll replicate only two disks of the discovered VM. We'll specify the OS disk and use different disk types for each disk to be replicated. The cmdlet returns a job which can be tracked for monitoring the status of the operation.
+In the following example, we replicate only two disks of the discovered VM. We specify the OS disk and use different disk types for each disk to be replicated. The cmdlet returns a job that can be tracked for monitoring the status of the operation.
```azurepowershell-interactive
# View disk details of the discovered server
while (($MigrateJob.State -eq 'InProgress') -or ($MigrateJob.State -eq 'NotStart
Write-Output $MigrateJob.State
```
-## 8. Monitor replication
+## Monitor replication
-Replication occurs as follows:
+Replication proceeds as follows:
- When the Start Replication job finishes successfully, the machines begin their initial replication to Azure.
- During initial replication, a VM snapshot is created. Disk data from the snapshot is replicated to replica-managed disks in Azure.
Replication occurs as follows:
Track the status of the replication by using the [Get-AzMigrateServerReplication](/powershell/module/az.migrate/get-azmigrateserverreplication) cmdlet. -- ```azurepowershell-interactive # List replicating VMs and filter the result for selecting a replicating VM. This cmdlet will not return all properties of the replicating VM. $ReplicatingServer = Get-AzMigrateServerReplication -ProjectName $MigrateProject.Name -ResourceGroupName $ResourceGroup.ResourceGroupName -MachineName MyTestVM
$ReplicatingServer = Get-AzMigrateServerReplication -TargetObjectID $Replicating
You can track the **Migration State** and **Migration State Description** properties in the output. -- For initial replication, the values for **Migration State** and **Migration State Description** properties will be `InitialSeedingInProgress` and `Initial replication` respectively.-- During delta replication, the values for **Migrate State** and **Migration State Description** properties will be `Replicating` and `Ready to migrate` respectively.-- After you complete the migration, the values for **Migrate State** and **Migration State Description** properties will be `Migration succeeded` and `Migrated` respectively.
+- For initial replication, the values for the **Migration State** and **Migration State Description** properties are `InitialSeedingInProgress` and `Initial replication`, respectively.
+- During delta replication, the values for the **Migration State** and **Migration State Description** properties are `Replicating` and `Ready to migrate`, respectively.
+- After you finish the migration, the values for the **Migration State** and **Migration State Description** properties are `Migration succeeded` and `Migrated`, respectively.
```Output AllowedOperation : {DisableMigration, TestMigrate, Migrate}
TestMigrateStateDescription : None
Type : Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationMigrationItems ```
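As an illustrative, hedged sketch, you could poll the replication state until the server is ready to migrate. The `MigrationState` and `MigrationStateDescription` property names are assumed from the state values described above and aren't shown verbatim in this article.

```azurepowershell-interactive
# Hypothetical polling loop; adjust the interval and property names to your environment.
do {
    Start-Sleep -Seconds 300
    $ReplicatingServer = Get-AzMigrateServerReplication -TargetObjectID $ReplicatingServer.Id
    Write-Output $ReplicatingServer.MigrationStateDescription
} while ($ReplicatingServer.MigrationStateDescription -ne 'Ready to migrate')
```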
-For details on replication progress, run the following cmdlet.
+For details on replication progress, run the following cmdlet:
```azurepowershell-interactive Write-Output $replicatingserver.ProviderSpecificDetail ```
-You can track the initial replication progress using the **Initial Seeding Progress Percentage** properties in the output.
+You can track the initial replication progress by using the **Initial Seeding Progress Percentage** properties in the output.
```Output "DataMoverRunAsAccountId": "/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.OffAzure/VMwareSites/xxx/runasaccounts/xxx",
You can track the initial replication progress using the **Initial Seeding Progr
"PerformAutoResync": "true", ```
-Replication occurs as follows:
+Replication occurs in the following cases:
- When the Start Replication job finishes successfully, the machines begin their initial replication to Azure. - During initial replication, a VM snapshot is created. Disk data from the snapshot is replicated to replica-managed disks in Azure. - After initial replication finishes, delta replication begins. Incremental changes to on-premises disks are periodically replicated to the replica disks in Azure.
-## 9. Retrieve the status of a job
+## Retrieve the status of a job
You can monitor the status of a job by using the [Get-AzMigrateJob](/powershell/module/az.migrate/get-azmigratejob) cmdlet.
You can monitor the status of a job by using the [Get-AzMigrateJob](/powershell/
$job = Get-AzMigrateJob -InputObject $job ```
-## 10. Update properties of a replicating VM
+## Update properties of a replicating VM
-[Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool) allows you to change target properties, such as name, size, resource group, NIC configuration and so on, for a replicating VM.
+[Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool) allows you to change target properties, such as name, size, resource group, NIC configuration, and so on, for a replicating VM.
The following properties can be updated for a VM. -
-**Parameter** | **Type** | **Description**
+Parameter | Type | Description
| |
-VM Name | Optional | Specify the name of the Azure VM to be created by using the [`TargetVMName`] parameter.
-VM size | Optional | Specify the Azure VM size to be used for the replicating VM by using [`TargetVMSize`] parameter. For instance, to migrate a VM to D2_v2 VM in Azure, specify the value for [`TargetVMSize`] as `Standard_D2_v2`.
-Virtual Network | Optional | Specify the ID of the Azure Virtual Network that the VM should be migrated to by using the [`TargetNetworkId`] parameter.
-Resource Group | Optional | IC configuration can be specified using the [New-AzMigrateNicMapping](/powershell/module/az.migrate/new-azmigratenicmapping) cmdlet. The object is then passed an input to the [`NicToUpdate`] parameter in the [Set-AzMigrateServerReplication](/powershell/module/az.migrate/set-azmigrateserverreplication) cmdlet. <br/><br/> - **Change IP allocation** - To specify a static IP for a NIC, provide the IPv4 address to be used as static IP for the VM using the [`TargetNicIP`] parameter. To dynamically assign an IP for a NIC, provide `auto` as the value for the **TargetNicIP** parameter. <br/> - Use values `Primary`, `Secondary` or `DoNotCreate` for [`TargetNicSelectionType`] parameter to specify whether the NIC should be primary, secondary, or is not to be created on the migrated VM. Only one NIC can be specified as the primary NIC for the VM. <br/> - To make a NIC primary, you'll also need to specify the other NICs that should be made secondary or are not to be created on the migrated VM. <br/> - To change the subnet for the NIC, specify the name of the subnet by using the [`TargetNicSubnet`] parameter.
-Network Interface | Optional | Specify the name of the Azure VM to be created by using the [`TargetVMName`] parameter.
-Availability Zone | Optional | To use availability zones, specify the availability zone value for [`TargetAvailabilityZone`] parameter.
-Availability Set | Optional | To use availability set, specify the availability set ID for [`TargetAvailabilitySet`] parameter.
-Tags | Optional | For updating tags, use the following parameters [`UpdateTag`] or [`UpdateVMTag`], [`UpdateDiskTag`], [`UpdateNicTag`], and type of update tag operation [`UpdateTagOperation`] or [`UpdateVMTagOperation`], [`UpdateDiskTagOperation`], [`UpdateNicTagOperation`]. The update tag operation takes the following values ΓÇô Merge, Delete, and Replace. <br/> Use [`UpdateTag`] to update all tags across virtual machines, disks, and NICs. <br/> Use [`UpdateVMTag`] for updating virtual machine tags. <br/> Use [`UpdateDiskTag`] for updating disk tags. <br/> Use [`UpdateNicTag`] for updating NIC tags. <br/> Use [`UpdateTagOperation`] to update the operation for all tags across virtual machines, disks, and NICs. <br/> Use [`UpdateVMTagOperation`] for updating virtual machine tags. <br/> Use [`UpdateDiskTagOperation`] for updating disk tags. <br/> Use [`UpdateNicTagOperation`] for updating NIC tags. <br/> <br/> The *replace* option replaces the entire set of existing tags with a new set. <br/> The *merge* option allows adding tags with new names and updating the values of tags with existing names. <br/> The *delete* option allows selectively deleting tags based on given names or name/value pairs.
-Disk(s) | Optional | For the OS disk: <br/> Update the name of the OS disk by using the [`TargetDiskName`] parameter. <br/><br/> For updating multiple disks: <br/> Use [Set-AzMigrateDiskMapping](/powershell/module/az.migrate/set-azmigratediskmapping) to set the disk names to a variable *$DiskMapping* and then use the [`DiskToUpdate`] parameter and pass along the variable. <br/> <br/> **Note:** The disk ID to be used in [Set-AzMigrateDiskMapping](/powershell/module/az.migrate/set-azmigratediskmapping) is the unique identifier (UUID) property for the disk retrieved using theΓÇ»[Get-AzMigrateDiscoveredServer](/powershell/module/az.migrate/get-azmigratediscoveredserver) cmdlet.
-NIC(s) name | Optional | Use [New-AzMigrateNicMapping](/powershell/module/az.migrate/new-azmigratenicmapping) to set the NIC names to a variable *$NICMapping* and then use the [`NICToUpdate`] parameter and pass the variable.
+VM name | Optional | Specify the name of the Azure VM to be created by using the `TargetVMName` parameter.
+VM size | Optional | Specify the Azure VM size to be used for the replicating VM by using the `TargetVMSize` parameter. For instance, to migrate a VM to D2_v2 VM in Azure, specify the value for `TargetVMSize` as `Standard_D2_v2`.
+Virtual network | Optional | Specify the ID of the Azure virtual network that the VM should be migrated to by using the `TargetNetworkId` parameter.
+Resource group | Optional | Specify the ID of the resource group that the VM should be migrated to by using the `TargetResourceGroupID` parameter.
+Network interface | Optional | NIC configuration can be specified by using the [New-AzMigrateNicMapping](/powershell/module/az.migrate/new-azmigratenicmapping) cmdlet. The object is then passed as an input to the `NicToUpdate` parameter in the [Set-AzMigrateServerReplication](/powershell/module/az.migrate/set-azmigrateserverreplication) cmdlet. <br/><br/> - **Change IP allocation**: To specify a static IP for a NIC, provide the IPv4 address to be used as the static IP for the VM by using the `TargetNicIP` parameter. To dynamically assign an IP for a NIC, provide `auto` as the value for the `TargetNicIP` parameter. <br/> - Use the values `Primary`, `Secondary`, or `DoNotCreate` for the `TargetNicSelectionType` parameter to specify whether the NIC should be primary, secondary, or shouldn't be created on the migrated VM. Only one NIC can be specified as the primary NIC for the VM. <br/> - To make a NIC primary, you also need to specify the other NICs that should be made secondary or aren't to be created on the migrated VM. <br/> - To change the subnet for the NIC, specify the name of the subnet by using the `TargetNicSubnet` parameter.
+Availability zone | Optional | To use availability zones, specify the availability zone value for the `TargetAvailabilityZone` parameter.
+Availability set | Optional | To use availability sets, specify the availability set ID for the `TargetAvailabilitySet` parameter.
+Tags | Optional | For updating tags, use the following parameters: `UpdateTag` or `UpdateVMTag`, `UpdateDiskTag`, `UpdateNicTag`, and type of update tag operation `UpdateTagOperation` or `UpdateVMTagOperation`, `UpdateDiskTagOperation`, `UpdateNicTagOperation`. The update tag operation takes the following values: Merge, Delete, and Replace. <br/> Use `UpdateTag` to update all tags across VMs, disks, and NICs. <br/> Use `UpdateVMTag` to update VM tags. <br/> Use `UpdateDiskTag` to update disk tags. <br/> Use `UpdateNicTag` to update NIC tags. <br/> Use `UpdateTagOperation` to update the operation for all tags across VMs, disks, and NICs. <br/> Use `UpdateVMTagOperation` to update VM tags. <br/> Use `UpdateDiskTagOperation` to update disk tags. <br/> Use `UpdateNicTagOperation` to update NIC tags. <br/> <br/> The *replace* option replaces the entire set of existing tags with a new set. <br/> The *merge* option allows adding tags with new names and updating the values of tags with existing names. <br/> The *delete* option allows selectively deleting tags based on specific names or name/value pairs.
+Disks | Optional | For the OS disk: <br/> - Update the name of the OS disk by using the `TargetDiskName` parameter. <br/><br/> To update multiple disks: <br/> - Use [Set-AzMigrateDiskMapping](/powershell/module/az.migrate/set-azmigratediskmapping) to set the disk names to a variable `$DiskMapping`. Then use the `DiskToUpdate` parameter and pass along the variable. <br/> <br/> The disk ID to be used in [Set-AzMigrateDiskMapping](/powershell/module/az.migrate/set-azmigratediskmapping) is the UUID property for the disk retrieved by using the [Get-AzMigrateDiscoveredServer](/powershell/module/az.migrate/get-azmigratediscoveredserver) cmdlet.
+NIC's name | Optional | Use [New-AzMigrateNicMapping](/powershell/module/az.migrate/new-azmigratenicmapping) to set the NIC names to a variable `$NICMapping`. Then use the `NICToUpdate` parameter and pass the variable.
-The [Get-AzMigrateServerReplication](/powershell/module/az.migrate/get-azmigrateserverreplication) cmdlet returns a job which can be tracked for monitoring the status of the operation.
+The [Set-AzMigrateServerReplication](/powershell/module/az.migrate/set-azmigrateserverreplication) cmdlet returns a job that can be tracked for monitoring the status of the operation.
```azurepowershell-interactive # List replicating VMs and filter the result for selecting a replicating VM. This cmdlet will not return all properties of the replicating VM.
$ReplicatingServer = Get-AzMigrateServerReplication -TargetObjectID $Replicating
Write-Output $ReplicatingServer.ProviderSpecificDetail.VMNic ```
-In the following example, we'll update the NIC configuration by making the first NIC as primary and assigning a static IP to it. we'll discard the second NIC for migration, update the target VM name & size, and customizing NIC names.
+In the following example, we update the NIC configuration by making the first NIC primary and assigning a static IP to it. We discard the second NIC for migration, update the target VM name and size, and customize NIC names.
```azurepowershell-interactive # Specify the NIC properties to be updated for a replicating VM.
$NicMapping += $NicMapping2
$UpdateJob = Set-AzMigrateServerReplication -InputObject $ReplicatingServer -TargetVMSize Standard_DS13_v2 -TargetVMName MyMigratedVM -NicToUpdate $NicMapping ```
-In the following example, we'll customize the disk name.
+In the following example, we customize the disk name.
```azurepowershell-interactive # Customize the Disk names for a replicating VM
$DiskMapping = $OSDisk, $DataDisk1
$UpdateJob = Set-AzMigrateServerReplication -InputObject $ReplicatingServer -DiskToUpdate $DiskMapping ```
-In the following example, we'll add tags to the replicating VMs.
+In the following example, we add tags to the replicating VMs.
```azurepowershell-interactive # Update all tags across virtual machines, disks, and NICs.
Set-azmigrateserverreplication UpdateTag $UpdateTag UpdateTagOperation Merge/Rep
# Update virtual machines tags Set-azmigrateserverreplication UpdateVMTag $UpdateVMTag UpdateVMTagOperation Merge/Replace/Delete ```
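A fuller, hedged sketch of a tag update might look like the following. The tag names and values are hypothetical, and `$ReplicatingServer` is the object retrieved earlier with `Get-AzMigrateServerReplication`.

```azurepowershell-interactive
# Hypothetical tag set to apply across the VM, its disks, and its NICs.
$UpdateTag = @{ CostCenter = 'CC1234'; Environment = 'Production' }

# Merge the tags into the replicating server's target resources.
$UpdateJob = Set-AzMigrateServerReplication -InputObject $ReplicatingServer -UpdateTag $UpdateTag -UpdateTagOperation Merge
```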
-Use the following example to track the job status
+
+Use the following example to track the job status.
```azurepowershell-interactive # Track job status to check for completion
while (($UpdateJob.State -eq 'InProgress') -or ($UpdateJob.State -eq 'NotStarted
Write-Output $UpdateJob.State ```
-## 11. Run a test migration
+## Run a test migration
-When delta replication begins, you can run a test migration for the VMs before running a full migration to Azure. We highly recommend that you do test migration at least once for each machine before you migrate it. The cmdlet returns a job which can be tracked for monitoring the status of the operation.
+When delta replication begins, you can run a test migration for the VMs before you run a full migration to Azure. We highly recommend that you do test migration at least once for each machine before you migrate it. The cmdlet returns a job that can be tracked for monitoring the status of the operation.
-- Running a test migration checks that migration will work as expected. Test migration doesn't impact the on-premises machine, which remains operational, and continues replicating.-- Test migration simulates the migration by creating an Azure VM using replicated data (usually migrating to a non-production VNet in your Azure subscription).
+- Running a test migration checks that migration works as expected. Test migration doesn't affect the on-premises machine, which remains operational and continues replicating.
+- Test migration simulates the migration by creating an Azure VM by using replicated data. The test usually migrates to a nonproduction virtual network in your Azure subscription.
- You can use the replicated test Azure VM to validate the migration, perform app testing, and address any issues before full migration.
-Select the Azure Virtual Network to be used for testing by specifying the ID of the virtual network using the [`TestNetworkID`] parameter.
+Select the Azure virtual network to be used for testing by specifying the ID of the virtual network by using the `TestNetworkID` parameter.
```azurepowershell-interactive # Retrieve the Azure virtual network created for testing
while (($TestMigrationJob.State -eq 'InProgress') -or ($TestMigrationJob.State -
Write-Output $TestMigrationJob.State ```
-After testing is complete, clean-up the test migration using the [Start-AzMigrateTestMigrationCleanup](/powershell/module/az.migrate/start-azmigratetestmigrationcleanup) cmdlet. The cmdlet returns a job that can be tracked for monitoring the status of the operation.
+After testing is complete, clean up the test migration by using the [Start-AzMigrateTestMigrationCleanup](/powershell/module/az.migrate/start-azmigratetestmigrationcleanup) cmdlet. The cmdlet returns a job that can be tracked for monitoring the status of the operation.
```azurepowershell-interactive # Clean-up test migration for a replicating server
while (($CleanupTestMigrationJob.State -eq "InProgress") -or ($CleanupTestMigrat
Write-Output $CleanupTestMigrationJob.State ```
-## 12. Migrate VMs
+## Migrate VMs
-After you've verified that the test migration works as expected, you can migrate the replicating server using the following cmdlet. The cmdlet returns a job that can be tracked for monitoring the status of the operation.
+After you verify that the test migration works as expected, you can migrate the replicating server by using the following cmdlet. The cmdlet returns a job that can be tracked for monitoring the status of the operation.
-If you don't want to turn-off the source server, then don't use [`TurnOffSourceServer`] parameter.
+If you don't want to turn off the source server, don't use the `TurnOffSourceServer` parameter.
```azurepowershell-interactive # Start migration for a replicating server and turn off source server as part of migration
while (($MigrateJob.State -eq 'InProgress') -or ($MigrateJob.State -eq 'NotStart
Write-Output $MigrateJob.State ```
-## 13. Complete the migration
+## Complete the migration
-1. After the migration is done, stop replication for the on-premises machine and clean-up replication state information for the VM using the following cmdlet. The cmdlet returns a job that can be tracked for monitoring the status of the operation.
+1. After the migration is finished, stop replication for the on-premises machine and clean up replication state information for the VM by using the following cmdlet. The cmdlet returns a job that can be tracked for monitoring the status of the operation.
```azurepowershell-interactive # Stop replication for a migrated server
Write-Output $MigrateJob.State
1. Remove the on-premises VMs from local backups. 1. Update any internal documentation to show the new location and IP address of the Azure VMs.
-## 14. Post-migration best practices
+## Post-migration best practices
- For increased resilience:
- - Keep data secure by backing up Azure VMs using the Azure Backup service. [Learn more](../backup/quick-backup-vm-portal.md).
- - Keep workloads running and continuously available by replicating Azure VMs to a secondary region with Site Recovery. [Learn more](../site-recovery/azure-to-azure-tutorial-enable-replication.md).
+ - Keep data secure by backing up Azure VMs by using Azure Backup. [Learn more](../backup/quick-backup-vm-portal.md).
+ - Keep workloads running and continuously available by replicating Azure VMs to a secondary region with Azure Site Recovery. [Learn more](../site-recovery/azure-to-azure-tutorial-enable-replication.md).
- For increased security:
- - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just in time administration](../security-center/security-center-just-in-time.md).
+ - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just-in-time administration](../security-center/security-center-just-in-time.md).
- Manage and govern updates on Windows and Linux machines with [Azure Update Manager](../update-manager/overview.md).
- - Restrict network traffic to management endpoints with [Network Security Groups](../virtual-network/network-security-groups-overview.md).
- - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks, and keep data safe from theft and unauthorized access.
- - Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/).
+ - Restrict network traffic to management endpoints with [network security groups](../virtual-network/network-security-groups-overview.md).
+ - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks and keep data safe from theft and unauthorized access.
+ - Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/) and [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/).
- For monitoring and management:-- Consider deploying [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
+ - Consider deploying [Microsoft Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md
The following table lists the gateway IP addresses of the Azure Database for MyS
* **Region Name:** This column lists the name of the Azure region where Azure Database for MySQL - Single Server is offered. * **Gateway IP address subnets:** This column lists the IP address subnets of the gateway rings located in the particular region. As we retire older gateway hardware, we recommend that you open the client-side firewall to allow outbound traffic for the IP address subnets in the region you're operating.
+* **Gateway IP addresses (decommissioning):** This column lists the IP addresses of the gateways hosted on an older generation of hardware that is currently being decommissioned. If you're provisioning a new server, you can ignore these IP addresses. If you have an existing server, keep the outbound firewall rule for these IP addresses because they aren't decommissioned yet. If you drop the firewall rules for these IP addresses, you might get connectivity errors. Instead, proactively add the IP address subnets listed in the **Gateway IP address subnets** column to the outbound firewall rule as soon as you receive the decommissioning notification. This approach ensures that there are no interruptions in connectivity to your server when it's migrated to the latest gateway hardware.
+* **Gateway IP addresses (decommissioned):** This column lists the IP addresses of the gateway rings that are decommissioned and no longer in operation. You can safely remove these IP addresses from your outbound firewall rule.
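For example, if your client runs in Azure behind a network security group, an outbound rule for one gateway subnet might look like the following sketch. The resource names are hypothetical and the destination subnet is only illustrative; use the subnets listed for your region in the table that follows.

```azurepowershell-interactive
# Hypothetical example: allow outbound MySQL traffic (TCP 3306) to one gateway IP address subnet.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName 'myResourceGroup' -Name 'myClientNsg'
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name 'Allow-MySQL-Gateway' `
    -Direction Outbound -Access Allow -Protocol Tcp -Priority 200 `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix '20.42.65.64/29' -DestinationPortRange 3306 | Set-AzNetworkSecurityGroup
```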
++
+| **Region name** | **Gateway IP address subnets** | **Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** |
+|:--|:--|:|:|
+| Australia Central | 20.36.105.32/29 | | |
+| Australia Central 2 | 20.36.113.32/29 | | |
+| Australia East | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 | 13.75.149.87 | |
+| Australia South East | 13.77.49.32/29 | 13.73.109.251 | |
+| Brazil South | 191.233.200.32/29, 191.234.144.32/29 | | 104.41.11.5 |
+| Canada Central | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29 | | |
+| Canada East | 40.69.105.32/29 | 40.86.226.166 | |
+| Central US | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29 | 13.67.215.62 | |
+| China East | 52.130.112.136/29 | | |
+| China East 2 | 52.130.120.88/29 | | |
+| China East 3 | 52.130.128.88/29 | | |
+| China North | 52.130.128.88/29 | | |
+| China North 2 | 52.130.40.64/29 | | |
+| China North 3 | 13.75.32.192/29, 13.75.33.192/29 | | |
+| East Asia | 13.75.32.192/29, 13.75.33.192/29 | | |
+| East US | 20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29 | 40.121.158.30 | 191.238.6.43 |
+| East US 2 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29 | 52.177.185.181 | |
+| France Central | 40.79.136.32/29, 40.79.144.32/29 | | |
+| France South | 40.79.176.40/29, 40.79.177.32/29 | | |
+| Germany West Central | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29 | | |
+| India Central | 104.211.86.32/29, 20.192.96.32/29 | | |
+| India South | 40.78.192.32/29, 40.78.193.32/29 | | |
+| India West | 104.211.144.32/29, 104.211.145.32/29 | 104.211.160.80 | |
+| Japan East | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 | 13.78.61.196 | |
+| Japan West | 40.74.96.32/29 | 104.214.148.156 | |
+| Korea Central | 20.194.64.32/29, 20.44.24.32/29, 52.231.16.32/29 | 52.231.32.42 | |
+| Korea South | 52.231.145.0/29 | 52.231.200.86 | |
+| North Central US | 52.162.105.192/29 | 23.96.178.199 | |
+| North Europe | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 | 40.113.93.91 | 191.235.193.75 |
+| South Africa North | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29 | | |
+| South Africa West | 102.133.25.32/29 | | |
+| South Central US | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29 | 13.66.62.124 | 23.98.162.75 |
+| South East Asia | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29 | 104.43.15.0 | |
+| Switzerland North | 51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27 | | |
+| Switzerland West | 51.107.153.32/29 | | |
+| UAE Central | 20.37.72.96/29, 20.37.73.96/29 | | |
+| UAE North | 40.120.72.32/29, 65.52.248.32/29 | | |
+| UK South | 51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29 | | |
+| UK West | 51.140.208.96/29, 51.140.209.32/29 | | |
+| West Central US | 13.71.193.32/29 | 13.78.145.25 | |
+| West Europe | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29 | 40.68.37.158 | 191.237.232.75 |
+| West US | 13.86.217.224/29 | 104.42.238.205 | 23.99.34.75 |
+| West US 2 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29 | | |
+| West US 3 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 | | |
-| **Region name** | **Gateway IP address subnets** |
-|:-|:|
-| Australia Central | 20.36.105.32/29 |
-| Australia Central 2 | 20.36.113.32/29 |
-| Australia East | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 |
-| Australia South East |13.77.49.32/29 |
-| Brazil South | 191.233.200.32/29, 191.234.144.32/29|
-| Canada Central | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29|
-| Canada East | 40.69.105.32/29 |
-| Central US | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29
-| China East | 52.130.112.136/29|
-| China East 2 | 52.130.120.88/29|
-| China East 3 | 52.130.128.88/29|
-| China North | 52.130.128.88/29 |
-| China North 2 | 52.130.40.64/29|
-| China North 3 | 13.75.32.192/29, 13.75.33.192/29 |
-| East Asia | 13.75.32.192/29, 13.75.33.192/29|
-| East US |20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29|
-| East US 2 |104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29|
-| France Central | 40.79.136.32/29, 40.79.144.32/29 |
-| France South | 40.79.176.40/29, 40.79.177.32/29|
-| Germany West Central | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29|
-| India Central | 104.211.86.32/29, 20.192.96.32/29|
-| India South | 40.78.192.32/29, 40.78.193.32/29|
-| India West | 104.211.144.32/29, 104.211.145.32/29 |
-| Japan East | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 |
-| Japan West | 40.74.96.32/29 |
-| Korea Central | 20.194.64.32/29,20.44.24.32/29, 52.231.16.32/29 |
-| Korea South | 52.231.145.0/29 |
-| North Central US | 52.162.105.192/29|
-| North Europe |13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 |
-| South Africa North | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29 |
-| South Africa West | 102.133.25.32/29|
-| South Central US |20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29|
-| South East Asia | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29 |
-| Switzerland North |51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27|
-| Switzerland West | 51.107.153.32/29|
-| UAE Central | 20.37.72.96/29, 20.37.73.96/29 |
-| UAE North | 40.120.72.32/29, 65.52.248.32/29 |
-| UK South |51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29|
-| UK West | 51.140.208.96/29, 51.140.209.32/29 |
-| West Central US | 13.71.193.32/29 |
-| West Europe | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29|
-| West US |13.86.217.224/29|
-| West US 2 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29|
-| West US 3 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 |
- ## Connection redirection
nat-gateway Manage Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/manage-nat-gateway.md
Previously updated : 03/20/2023 Last updated : 02/16/2024
+#Customer intent: As a network administrator, I want to learn how to create and remove a NAT gateway resource from a virtual network subnet. I also want to learn how to add and remove public IP addresses and prefixes used for outbound connectivity.
# Manage NAT gateway
nat-gateway Nat Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-availability-zones.md
Title: NAT gateway and availability zones
-description: Key concepts and design guidance on using NAT gateway with availability zones.
+description: Learn about key concepts and design guidance on deploying Azure NAT Gateway with availability zones.
Previously updated : 09/14/2022 Last updated : 02/15/2024
+#Customer intent: For customers who want to understand how to use NAT gateway with availability zones.
# NAT gateway and availability zones+ NAT gateway is a zonal resource, which means it can be deployed and operate out of individual availability zones. With zone isolation scenarios, you can align your zonal NAT gateway resources with zonally designated IP based resources, such as virtual machines, to provide zone resiliency against outages. Review this document to understand key concepts and fundamental design guidance. :::image type="content" source="./media/nat-availability-zones/zonal-nat-gateway.png" alt-text="Diagram of zonal deployment of NAT gateway."::: *Figure 1: Zonal deployment of NAT gateway.*
-NAT gateway can either be designated to a specific zone within a region or to ΓÇÿno zoneΓÇÖ. Which zone property you select for your NAT gateway resource will inform the zone property of the public IP address that can be used for outbound connectivity as well.
+NAT gateway can either be designated to a specific zone within a region or to **no zone**. Which zone property you select for your NAT gateway resource informs the zone property of the public IP address that can be used for outbound connectivity as well.
-## NAT gateway has built in resiliency
+## NAT gateway includes built-in resiliency
-Virtual networks and their subnets are regional. Subnets aren't restricted to a zone. While NAT gateway is a zonal resource, it's a highly resilient and reliable method by which to connect outbound to the internet from virtual network subnets. NAT gateway uses [software defined networking](/azure-stack/hci/concepts/software-defined-networking) to operate as a fully managed and distributed service. NAT gateway infrastructure has built in redundancy. It can survive multiple infrastructure component failures. Availability zones build on this resiliency with zone isolation scenarios for NAT gateway.
+Virtual networks and their subnets are regional. Subnets aren't restricted to a zone. While NAT gateway is a zonal resource, it's a highly resilient and reliable method by which to connect outbound to the internet from virtual network subnets. NAT gateway uses [software defined networking](/azure-stack/hci/concepts/software-defined-networking) to operate as a fully managed and distributed service. NAT gateway infrastructure includes built-in redundancy. It can survive multiple infrastructure component failures. Availability zones build on this resiliency with zone isolation scenarios for NAT gateway.
## Zonal
-You can place your NAT gateway resource in a specific zone for a region. When NAT gateway is deployed to a specific zone, it will provide outbound connectivity to the internet explicitly from that zone. The public IP address or prefix configured to NAT gateway must match the same zone. NAT gateway resources with public IP addresses from a different zone, zone-redundancy or with no zone aren't allowed.
+You can place your NAT gateway resource in a specific zone for a region. When NAT gateway is deployed to a specific zone, it provides outbound connectivity to the internet explicitly from that zone. The public IP address or prefix configured to NAT gateway must match the same zone. NAT gateway resources with public IP addresses from a different zone, with zone-redundancy, or with no zone aren't allowed.
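As a minimal sketch of a zonal deployment (the resource names, region, and zone are hypothetical), the public IP address and the NAT gateway are pinned to the same availability zone:

```azurepowershell-interactive
# Hypothetical zonal deployment: both resources are placed in availability zone 1.
$zonalIp = New-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' -Name 'myZonalNatIP' -Location 'eastus2' -Sku Standard -AllocationMethod Static -Zone 1
$zonalNatGateway = New-AzNatGateway -ResourceGroupName 'myResourceGroup' -Name 'myZonalNatGateway' -Location 'eastus2' -Sku Standard -PublicIpAddress $zonalIp -Zone 1
```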
NAT gateway can provide outbound connectivity for virtual machines from other availability zones different from itself. The virtual machine's subnet needs to be configured to the NAT gateway resource to provide outbound connectivity. Additionally, multiple subnets can be configured to the same NAT gateway resource. While virtual machines in subnets from different availability zones can all be configured to a single zonal NAT gateway resource, this configuration doesn't provide the most effective method for ensuring zone-resiliency against zonal outages. For more information on how to safeguard against zonal outages, see [Design considerations](#design-considerations) later in this article.
-## Non-zonal
-If no zone is selected at the time that the NAT gateway resource is deployed, then it's placed in ΓÇÿno zoneΓÇÖ by default. When NAT gateway is placed in **no zone**, Azure places the resource in a zone for you. You won't have visibility into which zone Azure chooses for your NAT gateway. After NAT gateway is deployed, zonal configurations can't be changed. **No zone** NAT gateway resources, while still zonal resources can be associated to public IP addresses from a zone, no zone, or that are zone-redundant.
+## Nonzonal
+
+If no zone is selected at the time that the NAT gateway resource is deployed, the NAT gateway is placed in **no zone** by default. When NAT gateway is placed in **no zone**, Azure places the resource in a zone for you. There isn't visibility into which zone Azure chooses for your NAT gateway. After NAT gateway is deployed, zonal configurations can't be changed. **No zone** NAT gateway resources, while still zonal resources, can be associated to public IP addresses from a zone, from no zone, or that are zone-redundant.
## Design considerations
Now that you understand the zone-related properties for NAT gateway, see the fol
### Single zonal NAT gateway resource for zone-spanning resources
-A single zonal NAT gateway resource can be configured to either a subnet that contains virtual machines that span across multiple availability zones or to multiple subnets with different zonal virtual machines. When this type of deployment is configured, NAT gateway will provide outbound connectivity to the internet for all subnet resources from the specific zone it's located. If the zone that NAT gateway is deployed in goes down, then outbound connectivity across all virtual machine instances associated with the NAT gateway will also go down. This set up doesn't provide the best method of zone-resiliency.
+A single zonal NAT gateway resource can be configured to either a subnet that contains virtual machines that span across multiple availability zones or to multiple subnets with different zonal virtual machines. When this type of deployment is configured, NAT gateway provides outbound connectivity to the internet for all subnet resources from the specific zone where the NAT gateway is located. If the zone that NAT gateway is deployed in goes down, then outbound connectivity across all virtual machine instances associated with the NAT gateway goes down. This setup doesn't provide the best method of zone resiliency.
:::image type="content" source="./media/nat-availability-zones/single-nat-gw-zone-spanning-subnet.png" alt-text="Diagram of single zonal NAT gateway resource.":::
A zonal promise for zone isolation scenarios exists when a virtual machine insta
:::image type="content" source="./media/nat-availability-zones/multiple-zonal-nat-gateways.png" alt-text="Diagram of zonal isolation by creating zonal stacks.":::
-*Figure 3: Zonal isolation by creating zonal stacks with the same zone NAT gateway, public IPs, and virtual machines provides the best method of ensuring zone resiliency against outages.*
+*Figure 3: Zonal isolation by creating zonal stacks with the same zone NAT gateway, public IPs, and virtual machines provides the best method of ensuring zone resiliency against outages.*
> [!NOTE] > Creating zonal stacks for each availability zone within a region is the most effective method for building zone-resiliency against outages for NAT gateway. However, this configuration only safeguards the remaining availability zones where the outage did **not** take place. With this configuration, failure of outbound connectivity from a zone outage is isolated to the specific zone affected. The outage won't affect the other zonal stacks where other NAT gateways are deployed with their own subnets and zonal public IPs. -- ### Integration of inbound with a standard load balancer If your scenario requires inbound endpoints, you have two options: | Option | Pattern | Example | Pro | Con | ||||||
-| (1) | **Align** the inbound endpoints with the respective **zonal stacks** you're creating for outbound. | Create a standard load balancer with a zonal frontend. | Same failure model for inbound and outbound. Simpler to operate. | Individual IP addresses per zone may need to be masked by a common DNS name. |
+| (1) | **Align** the inbound endpoints with the respective **zonal stacks** you're creating for outbound. | Create a standard load balancer with a zonal frontend. | Same failure model for inbound and outbound. Simpler to operate. | A common Domain Name System (DNS) name needs to mask individual IP addresses per zone. |
| (2) | **Overlay** the zonal stacks with a cross-zone inbound endpoint. | Create a standard load balancer with a zone-redundant front-end. | Single IP address for inbound endpoint. | Varying models for inbound and outbound. More complex to operate. | > [!NOTE]
nat-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-overview.md
Title: What is Azure NAT Gateway?-
-description: Overview of Azure NAT Gateway features, resources, architecture, and implementation. Learn how Azure NAT Gateway works and how to use NAT gateway resources in Azure.
-+
+description: Overview of Azure NAT Gateway features, resources, architecture, and implementation. Learn about what NAT gateway is and how to use it.
Previously updated : 12/06/2022 Last updated : 02/15/2024
+#Customer intent: I want to understand what Azure NAT Gateway is and how to use it.
# What is Azure NAT Gateway?
NAT Gateway provides dynamic SNAT port functionality to automatically scale outb
*Figure: Azure NAT Gateway* Azure NAT Gateway provides outbound connectivity for many Azure resources, including:
-* Azure virtual machines or virtual machine scale-sets in a private subnet
-* [Azure Kubernetes Services (AKS) clusters](/azure/aks/nat-gateway)
-* [Azure Container group](/azure/container-instances/container-instances-nat-gateway)
-* [Azure Function Apps](/azure/azure-functions/functions-how-to-use-nat-gateway)
-* [Azure Firewall subnet](/azure/firewall/integrate-with-nat-gateway)
-* [Azure App Services instances](/azure/app-service/networking/nat-gateway-integration) (web applications, REST APIs, and mobile backends) through [virtual network integration](/azure/app-service/overview-vnet-integration)
-* [Azure Databricks](/azure/databricks/security/network/secure-cluster-connectivity#egress-with-default-managed-vnet) or with [VNet injection](/azure/databricks/security/network/secure-cluster-connectivity#egress-with-vnet-injection).
+
+* Azure virtual machines or virtual machine scale-sets in a private subnet.
+
+* [Azure Kubernetes Services (AKS) clusters](/azure/aks/nat-gateway).
+
+* [Azure Container group](/azure/container-instances/container-instances-nat-gateway).
+
+* [Azure Function Apps](/azure/azure-functions/functions-how-to-use-nat-gateway).
+
+* [Azure Firewall subnet](/azure/firewall/integrate-with-nat-gateway).
+
+* [Azure App Services instances](/azure/app-service/networking/nat-gateway-integration) (web applications, REST APIs, and mobile backends) through [virtual network integration](/azure/app-service/overview-vnet-integration).
+
+* [Azure Databricks](/azure/databricks/security/network/secure-cluster-connectivity#egress-with-default-managed-vnet) or with [virtual network injection](/azure/databricks/security/network/secure-cluster-connectivity#egress-with-vnet-injection).
## Azure NAT Gateway benefits ### Simple Setup
-Deployments are intentionally made simple with NAT gateway. Attach NAT gateway to a subnet and public IP address and start connecting outbound to the internet right away. There's zero maintenance and routing configurations required. More public IPs or subnets can be added later without impact to your existing configuration.
+Deployments are intentionally made simple with NAT gateway. Attach NAT gateway to a subnet and public IP address and start connecting outbound to the internet right away. There's zero maintenance and routing configurations required. More public IPs or subnets can be added later without affecting your existing configuration.
+
+The following steps are an example of how to set up a NAT gateway:
-NAT gateway deployment steps:
-1. Create a non-zonal or zonal NAT gateway.
-2. Assign a public IP address or public IP prefix.
-3. Configure virtual network subnet to use a NAT gateway
+* Create a nonzonal or zonal NAT gateway.
-If necessary, modify TCP idle timeout (optional). Review [timers](/azure/nat-gateway/nat-gateway-resource#idle-timeout-timers) before you change the default.
+* Assign a public IP address or public IP prefix.
+
+* Configure virtual network subnet to use a NAT gateway.
+
+If necessary, modify Transmission Control Protocol (TCP) idle timeout (optional). Review [timers](/azure/nat-gateway/nat-gateway-resource#idle-timeout-timers) before you change the default.
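A minimal Azure PowerShell sketch of these steps might look like the following. The resource names, region, and address prefix are hypothetical, and the virtual network is assumed to already exist.

```azurepowershell-interactive
# Create a standard static public IP address for the NAT gateway.
$publicIp = New-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' -Name 'myNatGatewayIP' -Location 'eastus2' -Sku Standard -AllocationMethod Static

# Create the NAT gateway; the idle timeout is optional (the default is 4 minutes).
$natGateway = New-AzNatGateway -ResourceGroupName 'myResourceGroup' -Name 'myNatGateway' -Location 'eastus2' -Sku Standard -IdleTimeoutInMinutes 4 -PublicIpAddress $publicIp

# Associate the NAT gateway with an existing subnet and apply the change.
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'mySubnet' -AddressPrefix '10.0.0.0/24' -NatGateway $natGateway | Set-AzVirtualNetwork
```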
### Security
Azure NAT Gateway is a fully managed and distributed service. It doesn't depend
NAT gateway is scaled out from creation. There isn't a ramp up or scale-out operation required. Azure manages the operation of NAT gateway for you.
-Attach NAT gateway to a subnet to provide outbound connectivity for all private resources in that subnet. All subnets in a virtual network can use the same NAT gateway resource. Outbound connectivity can be scaled out by assigning up to 16 public IP addresses or a /28 size public IP prefix to NAT gateway. When a NAT gateway is associated to a public IP prefix, it automatically scales to the number of IP addresses needed for outbound.
+Attach NAT gateway to a subnet to provide outbound connectivity for all private resources in that subnet. All subnets in a virtual network can use the same NAT gateway resource. Outbound connectivity can be scaled out by assigning up to 16 public IP addresses or a /28 size public IP prefix to NAT gateway. When a NAT gateway is associated to a public IP prefix, it automatically scales to the number of IP addresses needed for outbound.
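As a hedged sketch of scaling an existing NAT gateway with a /28 public IP prefix (16 addresses), where the names and region are hypothetical:

```azurepowershell-interactive
# Create a /28 public IP prefix and attach it to an existing NAT gateway.
$prefix = New-AzPublicIpPrefix -ResourceGroupName 'myResourceGroup' -Name 'myNatPrefix' -Location 'eastus2' -Sku Standard -PrefixLength 28
Set-AzNatGateway -ResourceGroupName 'myResourceGroup' -Name 'myNatGateway' -PublicIpPrefix $prefix
```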
### Performance
A NAT gateway doesn't affect the network bandwidth of your compute resources. Le
### Outbound connectivity * NAT gateway is the recommended method for outbound connectivity.+ * To migrate outbound access to a NAT gateway from default outbound access or load balancer outbound rules, see [Migrate outbound access to Azure NAT Gateway](./tutorial-migrate-outbound-nat.md). >[!NOTE] >On September 30th, 2025, [default outbound access](/azure/virtual-network/ip-services/default-outbound-access#when-is-default-outbound-access-provided) for new deployments will be retired. It is recommended to use an explicit form of outbound connectivity instead, like NAT gateway.
-* Outbound connectivity with NAT gateway is defined at a per subnet level. NAT gateway replaces the default Internet destination of a subnet.
+* Egress is defined at a per subnet level with NAT gateway. NAT gateway replaces the default Internet destination of a subnet.
-* No traffic routing configurations are required to use NAT gateway.
+* Traffic routing configurations aren't required to use NAT gateway.
* NAT gateway allows flows to be created from the virtual network to the services outside your virtual network. Return traffic from the internet is only allowed in response to an active flow. Services outside your virtual network can't initiate an inbound connection through NAT gateway.
-* NAT gateway takes precedence over other outbound connectivity methods, including Load balancer, instance-level public IP addresses, and Azure Firewall.
+* NAT gateway takes precedence over other outbound connectivity methods, including a load balancer, instance-level public IP addresses, and Azure Firewall.
-* When NAT gateway is configured to a virtual network where a different outbound connectivity method already exists, NAT gateway takes over all outbound traffic moving forward. There are no drops in traffic flow for existing connections on Load balancer. All new connections use NAT gateway.
+* When NAT gateway is configured to a virtual network where a different outbound connectivity method already exists, NAT gateway takes over all outbound traffic moving forward. There are no drops in traffic flow for existing connections on Azure Load Balancer. All new connections use NAT gateway.
* NAT gateway doesn't have the same limitations of SNAT port exhaustion as does [default outbound access](../virtual-network/ip-services/default-outbound-access.md) and [outbound rules of a load balancer](../load-balancer/outbound-rules.md).
-* NAT gateway supports TCP and UDP protocols only. ICMP isn't supported.
+* NAT gateway supports TCP and User Datagram Protocol (UDP) protocols only. Internet Control Message Protocol (ICMP) isn't supported.
### Traffic routes
-* NAT gateway replaces a subnetΓÇÖs [system default route](/azure/virtual-network/virtual-networks-udr-overview#default) to the internet when configured. When NAT gateway is attached to the subnet, all traffic within the 0.0.0.0/0 prefix will route to NAT gateway before connecting outbound to the internet.
+* NAT gateway replaces a subnet's [system default route](/azure/virtual-network/virtual-networks-udr-overview#default) to the internet when configured. When NAT gateway is attached to the subnet, all traffic within the 0.0.0.0/0 prefix routes to NAT gateway before connecting outbound to the internet.
+
+* You can override NAT gateway as a subnet's system default route to the internet with the creation of a custom user-defined route (UDR) for 0.0.0.0/0 traffic, as shown in the sketch after this list.
-* You can override NAT gateway as a subnetΓÇÖs system default route to the internet with the creation of a custom user-defined route (UDR) for 0.0.0.0/0 traffic.
+* Presence of User Defined Routes (UDRs) for virtual appliances, VPN Gateway, and ExpressRoute for a subnet's 0.0.0.0/0 traffic causes traffic to route to these services instead of NAT gateway.
-* Presence of UDRs for virtual appliances, VPN Gateway and ExpressRoute for a subnet's 0.0.0.0/0 traffic will cause traffic to route to these services instead of NAT gateway.
+* Outbound connectivity follows this order of precedence among different routing and outbound connectivity methods:
-* Outbound connectivity follows this order of precedence among different routing and outbound connectivity methods:
-Virtual appliance UDR / VPN Gateway / ExpressRoute >> NAT gateway >> Instance-level public IP address on a virtual machine >> Load balancer outbound rules >> default system route to the internet
+ * Virtual appliance UDR / VPN Gateway / ExpressRoute >> NAT gateway >> Instance-level public IP address on a virtual machine >> Load balancer outbound rules >> default system route to the internet.
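As a hedged sketch of the override mentioned in this list (the route table name, next-hop address, resource group, and region are hypothetical), a 0.0.0.0/0 user-defined route that points to a network virtual appliance takes precedence over NAT gateway once the route table is associated with the subnet:

```azurepowershell-interactive
# Hypothetical UDR: send all internet-bound traffic to a network virtual appliance instead of NAT gateway.
$route = New-AzRouteConfig -Name 'override-default-route' -AddressPrefix '0.0.0.0/0' -NextHopType VirtualAppliance -NextHopIpAddress '10.0.100.4'
New-AzRouteTable -ResourceGroupName 'myResourceGroup' -Name 'myRouteTable' -Location 'eastus2' -Route $route
```

Associating the route table with the subnet (for example, with `Set-AzVirtualNetworkSubnetConfig -RouteTable`) completes the override.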
### NAT gateway configurations
Virtual appliance UDR / VPN Gateway / ExpressRoute >> NAT gateway >> Instance-le
* A NAT gateway can't be deployed in a [gateway subnet](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub).
-* A NAT gateway resource can use up to 16 IP addresses in any combination of:
+* A NAT gateway resource can use up to 16 IP addresses in any combination of the following types:
- * Public IP addresses
+ * Public IP addresses.
- * Public IP prefixes
+ * Public IP prefixes.
* Public IP addresses and prefixes derived from custom IP prefixes (BYOIP). To learn more, see [Custom IP address prefix (BYOIP)](../virtual-network/ip-services/custom-ip-address-prefix.md).
-* NAT gateway canΓÇÖt be associated to an IPv6 public IP address or IPv6 public IP prefix.
+* NAT gateway can't be associated to an IPv6 public IP address or IPv6 public IP prefix.
-* NAT gateway can be used with Load balancer using outbound rules to provide dual-stack outbound connectivity, see [dual stack outbound connectivity with NAT gateway and Load balancer](/azure/virtual-network/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer?tabs=dual-stack-outbound-portal).
+* NAT gateway can be used with Load balancer using outbound rules to provide dual-stack outbound connectivity. See [dual stack outbound connectivity with NAT gateway and Load balancer](/azure/virtual-network/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer?tabs=dual-stack-outbound-portal).
-* NAT gateway works with any virtual machine network interface or IP configuration. NAT gateway can SNAT multiple IP configurations on a NIC.
+* NAT gateway works with any virtual machine network interface or IP configuration. NAT gateway can SNAT multiple IP configurations on a network interface.
* NAT gateway can be associated to an Azure Firewall subnet in a hub virtual network and provide outbound connectivity from spoke virtual networks peered to the hub. To learn more, see [Azure Firewall integration with NAT gateway](../firewall/integrate-with-nat-gateway.md). ### Availability zones
-* A NAT gateway can be created in a specific availability zone or placed in 'no zone'.
+* A NAT gateway can be created in a specific availability zone or placed in **no zone**.
* NAT gateway can be isolated in a specific zone when you create [zone isolation scenarios](./nat-availability-zones.md). This deployment is called a zonal deployment. After NAT gateway is deployed, the zone selection can't be changed.
-* NAT gateway is placed in 'no zone' by default. A [non-zonal NAT gateway](./nat-availability-zones.md#non-zonal) is placed in a zone for you by Azure.
+* NAT gateway is placed in **no zone** by default. A [non-zonal NAT gateway](./nat-availability-zones.md#nonzonal) is placed in a zone for you by Azure.
-### NAT gateway and basic SKU resources
+### NAT gateway and basic resources
-* NAT gateway is compatible with standard SKU public IP addresses or public IP prefix resources or a combination of both.
+* NAT gateway is compatible with standard public IP addresses or public IP prefix resources or a combination of both.
-* Basic SKU resources, such as basic load balancer or basic public IPs aren't compatible with NAT gateway. NAT gateway can't be used with subnets where basic SKU resources exist. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway
+* Basic resources, such as basic load balancer or basic public IPs aren't compatible with NAT gateway. NAT gateway can't be used with subnets where basic resources exist. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway.
- * Upgrade a load balancer from basic to standard, see [Upgrade a public basic Azure Load Balancer](/azure/load-balancer/upgrade-basic-standard-with-powershell).
+ * For more information about upgrading a load balancer from basic to standard, see [Upgrade a public basic Azure Load Balancer](/azure/load-balancer/upgrade-basic-standard-with-powershell).
- * Upgrade a public IP from basic to standard, see [Upgrade a public IP address](../virtual-network/ip-services/public-ip-upgrade-portal.md).
-
- * Upgrade a basic public IP attached to a VM from basic to standard, see [Upgrade a basic public IP attached to a VM](/azure/virtual-network/ip-services/public-ip-upgrade-vm).
+ * For more information about upgrading a public IP from basic to standard, see [Upgrade a public IP address](../virtual-network/ip-services/public-ip-upgrade-portal.md).
+
+ * For more information about upgrading a basic public IP attached to a virtual machine from basic to standard, see [Upgrade a basic public IP attached to a virtual machine](/azure/virtual-network/ip-services/public-ip-upgrade-vm).
### Connection timeouts and timers
-* NAT gateway sends a TCP Reset (RST) packet for any connection flow that it doesn't recognize as an existing connection. The connection flow may no longer exist if the NAT gateway idle timeout was reached or the connection was closed earlier.
+* NAT gateway sends a TCP Reset (RST) packet for any connection flow that it doesn't recognize as an existing connection. The connection flow no longer exists if the NAT gateway idle timeout was reached or the connection was closed earlier.
* When the sender of traffic on the nonexisting connection flow receives the NAT gateway TCP RST packet, the connection is no longer usable.
Virtual appliance UDR / VPN Gateway / ExpressRoute >> NAT gateway >> Instance-le
* UDP traffic has a port reuse timer of 65 seconds for which a port is in hold down before it's available for reuse to the same destination endpoint.
-## Pricing and SLA
+## Pricing and Service Level Agreement (SLA)
For Azure NAT Gateway pricing, see [NAT gateway pricing](https://azure.microsoft.com/pricing/details/azure-nat-gateway/).
For information on the SLA, see [SLA for Azure NAT Gateway](https://azure.micros
## Next steps
-* To create and validate a NAT gateway, see [Quickstart: Create a NAT gateway using the Azure portal](quickstart-create-nat-gateway-portal.md).
+* For more information about creating and validating a NAT gateway, see [Quickstart: Create a NAT gateway using the Azure portal](quickstart-create-nat-gateway-portal.md).
* To view a video on more information about Azure NAT Gateway, see [How to get better outbound connectivity using an Azure NAT gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4).
-* Learn about the [NAT gateway resource](./nat-gateway-resource.md).
+* For more information about the NAT gateway resource, see [NAT gateway resource](./nat-gateway-resource.md).
+
+* Learn more about Azure NAT Gateway in the following module:
+
+ * [Learn module: Introduction to Azure NAT Gateway](/training/modules/intro-to-azure-virtual-network-nat).
-* [Learn module: Introduction to Azure NAT Gateway](/training/modules/intro-to-azure-virtual-network-nat).
+* For more information about architecture options for Azure NAT Gateway, see [Azure Well-Architected Framework review of an Azure NAT gateway](/azure/architecture/networking/guide/well-architected-network-address-translation-gateway).
-* To learn more about architecture options for Azure NAT Gateway, see [Azure Well-Architected Framework review of an Azure NAT gateway](/azure/architecture/networking/guide/well-architected-network-address-translation-gateway).
nat-gateway Troubleshoot Nat And Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/troubleshoot-nat-and-azure-services.md
Title: Troubleshoot outbound connectivity with Azure services-
-description: Troubleshoot issues with NAT gateway and Azure services.
+
+description: Get started learning how to troubleshoot issues with Azure NAT Gateway and Azure resources and services.
Previously updated : 08/29/2022 Last updated : 02/15/2024
NAT gateway can be used with Azure app services to allow applications to make ou
To use NAT gateway with Azure App services, follow these steps:
-1. Ensure that your application(s) have virtual network integration configured, see [Enable virtual network integration](../app-service/configure-vnet-integration-enable.md).
+1. Ensure that your applications have virtual network integration configured. For more information, see [Enable virtual network integration](../app-service/configure-vnet-integration-enable.md).
-2. Ensure that **Route All** is enabled for your virtual network integration, see [Configure virtual network integration routing](../app-service/configure-vnet-integration-routing.md).
+1. Ensure that **Route All** is enabled for your virtual network integration. For more information, see [Configure virtual network integration routing](../app-service/configure-vnet-integration-routing.md).
-3. Create a NAT gateway resource.
+1. Create a NAT gateway resource.
-4. Create a new public IP address or attach an existing public IP address in your network to NAT gateway.
+1. Create a new public IP address or attach an existing public IP address in your network to NAT gateway.
-5. Assign NAT gateway to the same subnet being used for Virtual network integration with your application(s).
+1. Assign NAT gateway to the same subnet being used for virtual network integration with your applications.
-To see step-by-step instructions on how to configure NAT gateway with virtual network integration, see [Configuring NAT gateway integration](../app-service/networking/nat-gateway-integration.md#configure-nat-gateway-integration)
+To see step-by-step instructions on how to configure NAT gateway with virtual network integration, see [Configuring NAT gateway integration](../app-service/networking/nat-gateway-integration.md#configure-nat-gateway-integration).
Important notes about the NAT gateway and Azure App Services integration: * Virtual network integration doesn't provide inbound private access to your app from the virtual network.
-* Because of the nature of how virtual network integration operates, the traffic from virtual network integration doesn't show up in Azure Network Watcher or NSG flow logs.
+* Virtual network integration traffic doesn't appear in Azure Network Watcher or Network Security Group (NSG) flow logs due to the nature of how it operates.
### App services isn't using the NAT gateway public IP address to connect outbound
-App services can still connect outbound to the internet even if VNet integration isn't enabled. By default, apps that are hosted in App Service are accessible directly through the internet and can reach only internet-hosted endpoints. To learn more, see [App Services Networking Features](/azure/app-service/networking-features).
+App services can still connect outbound to the internet even if virtual network integration isn't enabled. By default, apps that are hosted in App Service are accessible directly through the internet and can reach only internet-hosted endpoints. To learn more, see [App Services Networking Features](/azure/app-service/networking-features).
-If you notice that the IP address used to connect outbound isn't your NAT gateway public IP address or addresses, check that virtual network integration has been enabled. Ensure the NAT gateway is configured to the subnet used for integration with your application(s).
+If you notice that the IP address used to connect outbound isn't your NAT gateway public IP address or addresses, check that virtual network integration is enabled. Ensure the NAT gateway is configured to the subnet used for integration with your applications.
To validate that web applications are using the NAT gateway public IP, ping a virtual machine on your Web Apps and check the traffic via a network capture. ## Azure Kubernetes Service
-### How to deploy NAT gateway with AKS clusters
+### How to deploy NAT gateway with Azure Kubernetes Service (AKS) clusters
NAT gateway can be deployed with AKS clusters in order to allow for explicit outbound connectivity. There are two different ways to deploy NAT gateway with AKS clusters:
-1. **Managed NAT gateway**: NAT gateway is provisioned by Azure at the time of the AKS cluster creation and managed by AKS.
+- **Managed NAT gateway**: Azure deploys a NAT gateway at the time of the AKS cluster creation. AKS manages the NAT gateway.
-2. **User-Assigned NAT gateway**: NAT gateway is provisioned by you to an existing virtual network for the AKS cluster.
+- **User-Assigned NAT gateway**: You deploy a NAT gateway to an existing virtual network for the AKS cluster.
Learn more at [Managed NAT Gateway](../aks/nat-gateway.md). ### Can't update my NAT gateway IPs or idle timeout timer for an AKS cluster
-Public IP addresses and the idle timeout timer for NAT gateway can be updated with the az aks update command for a Managed NAT gateway ONLY.
+Public IP addresses and the idle timeout timer for NAT gateway can be updated with the `az aks update` command only for a managed NAT gateway.
-If you've provisioned a User-Assigned NAT gateway to your AKS subnets, then you can't use the az aks update command to update public IP addresses or the idle timeout timer. A User-Assigned NAT gateway is managed by the user rather than by AKS. You'll need to update these configurations manually on your NAT gateway resource.
+If you deployed a User-Assigned NAT gateway to your AKS subnets, then you can't use the `az aks update` command to update public IP addresses or the idle timeout timer. The user manages a User-Assigned NAT gateway. You need to update these configurations manually on your NAT gateway resource.
Update your public IP addresses on your User-Assigned NAT gateway with the following steps:
-1. In your resource group, select on your NAT gateway resource in the portal
+1. In your resource group, select your NAT gateway resource in the portal.
-2. Under Settings on the left-hand navigation bar, select Outbound IP
+1. Under Settings on the left-hand navigation bar, select Outbound IP.
-3. To manage your Public IP addresses, select the blue Change
+1. To manage your Public IP addresses, select the blue Change.
-4. From the Manage public IP addresses and prefixes configuration that slides in from the right, update your assigned public IPs from the drop-down menu or select **Create a new public IP address**.
+1. From the Manage public IP addresses and prefixes configuration that slides in from the right, update your assigned public IPs from the drop-down menu or select **Create a new public IP address**.
-5. Once you're done updating your IP configurations, select the OK button at the bottom of the screen.
+1. Once you're done updating your IP configurations, select the OK button at the bottom of the screen.
-6. After the configuration page disappears, select the Save button to save your changes
+1. After the configuration page disappears, select the Save button to save your changes.
-7. Use steps 3 - 6 to do the same for public IP prefixes.
+1. Repeat steps 3 - 6 to do the same for public IP prefixes.
Update your idle timeout timer configuration on your User-Assigned NAT gateway with the following steps:
-1. In your resource group, select on your NAT gateway resource in the portal
+1. In your resource group, select your NAT gateway resource in the portal.
-2. Under Settings on the left-hand navigation bar, select Configuration
+1. Under Settings on the left-hand navigation bar, select Configuration.
-3. In the TCP idle timeout (minutes) text bar, adjust the idle timeout timer (the timer can be configured 4 ΓÇô 120 minutes).
+1. In the TCP idle timeout (minutes) text box, adjust the idle timeout timer (the timer can be configured from 4 to 120 minutes).
-4. Select the Save button when youΓÇÖre done.
+1. Select the Save button when you're done.
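If you prefer to script these changes instead of using the portal, the following is a minimal Azure PowerShell sketch of both updates. The resource group, NAT gateway, and public IP names are hypothetical placeholders; replace them with your own values.

```azurepowershell-interactive
# Assumed names: replace with your own resource group, NAT gateway, and public IP.
$publicIp = Get-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' -Name 'myPublicIP'

# Replace the public IP addresses assigned to the user-assigned NAT gateway.
Set-AzNatGateway -ResourceGroupName 'myResourceGroup' -Name 'myNATgateway' -PublicIpAddress $publicIp

# Adjust the TCP idle timeout (4 to 120 minutes).
Set-AzNatGateway -ResourceGroupName 'myResourceGroup' -Name 'myNATgateway' -IdleTimeoutInMinutes 4
```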
>[!Note] >Increasing the TCP idle timeout timer to longer than 4 minutes can increase the risk of SNAT port exhaustion. ## Azure Firewall
-### SNAT exhaustion when connecting outbound with Azure Firewall
+### Source Network Address Translation (SNAT) exhaustion when connecting outbound with Azure Firewall
-Azure Firewall can provide outbound connectivity to the internet from virtual networks. Azure Firewall provides only 2,496 SNAT ports per public IP address. While Azure Firewall can be associated with up to 250 public IP addresses to handle egress traffic, users may require much fewer public IP addresses for connecting outbound. The requirement for egressing with fewer public IP addresses may be due to various architectural requirements and allowlist limitations by destination endpoints.
+Azure Firewall can provide outbound internet connectivity to virtual networks. Azure Firewall provides only 2,496 SNAT ports per public IP address. While Azure Firewall can be associated with up to 250 public IP addresses to handle egress traffic, you might require fewer public IP addresses for connecting outbound. The requirement for egressing with fewer public IP addresses is due to architectural requirements and allowlist limitations by destination endpoints.
-One method by which to provide greater scalability for outbound traffic and also reduce the risk of SNAT port exhaustion is to use NAT gateway in the same subnet with Azure Firewall. To set up NAT gateway in an Azure Firewall subnet, see [integrate NAT gateway with Azure Firewall](/azure/virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall). See [Scale SNAT ports with Azure NAT Gateway](../firewall/integrate-with-nat-gateway.md) to learn more about how NAT gateway works with Firewall.
+One method by which to provide greater scalability for outbound traffic and also reduce the risk of SNAT port exhaustion is to use NAT gateway in the same subnet with Azure Firewall.
+For more information on how to set up a NAT gateway in an Azure Firewall subnet, see [integrate NAT gateway with Azure Firewall](/azure/virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall). For more information about how NAT gateway works with Azure Firewall, see [Scale SNAT ports with Azure NAT Gateway](../firewall/integrate-with-nat-gateway.md).
> [!NOTE] > NAT gateway is not supported in a vWAN architecture. NAT gateway cannot be configured to an Azure Firewall subnet in a vWAN hub.
One method by which to provide greater scalability for outbound traffic and also
NAT gateway can be used to connect outbound from your databricks cluster when you create your Databricks workspace. NAT gateway can be deployed to your databricks cluster in one of two ways:
-1. By enabling [Secure Cluster Connectivity (No Public IP)](/azure/databricks/security/secure-cluster-connectivity#use-secure-cluster-connectivity) on the default virtual network that Azure Databricks creates, NAT gateway will automatically be deployed for connecting outbound from your workspaceΓÇÖs subnets to the internet. This NAT gateway resource is created within the managed resource group managed by Azure Databricks. You can't modify this resource group or any other resources provisioned in it.
+* When you enable [Secure Cluster Connectivity (No Public IP)](/azure/databricks/security/secure-cluster-connectivity#use-secure-cluster-connectivity) on the default virtual network that Azure Databricks creates, Azure Databricks automatically deploys a NAT gateway to connect outbound from your workspace's subnets to the internet. Azure Databricks creates this NAT gateway resource within the managed resource group and you can't modify this resource group or any other resources deployed in it.
-2. After deploying Azure Databricks workspace in your own VNet (via [VNet injection](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject)), you can deploy and configure NAT gateway to both of your workspaceΓÇÖs subnets to ensure outbound connectivity through the NAT gateway. You can implement this solution using an [Azure template](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject#advanced-configuration-using-azure-resource-manager-templates) or in the portal.
+* After you deploy an Azure Databricks workspace in your own virtual network (via [virtual network injection](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject)), you can deploy and configure NAT gateway to both of your workspace's subnets to ensure outbound connectivity through the NAT gateway. You can implement this solution using an [Azure template](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject#advanced-configuration-using-azure-resource-manager-templates) or in the portal.
## Next steps
-We're always looking to improve the experience of our customers. If you're experiencing issues with NAT gateway that aren't listed or resolved by this article, submit feedback through GitHub via the bottom of this page. We'll address your feedback as soon as possible.
+If you're experiencing issues with NAT gateway not resolved by this article, submit feedback through GitHub via the bottom of this page. We address your feedback as soon as possible to improve the experience of our customers.
To learn more about NAT gateway, see:
nat-gateway Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/troubleshoot-nat.md
Title: Troubleshoot Azure NAT Gateway-
-description: Troubleshoot issues with NAT Gateway.
+
+description: Learn how to troubleshoot issues and errors with Azure NAT Gateway.
Previously updated : 08/29/2022 Last updated : 02/14/2024
Check the following configurations to ensure that NAT gateway can be used to dir
1. At least one public IP address or one public IP prefix is attached to NAT gateway. At least one public IP address must be associated with the NAT gateway for it to provide outbound connectivity.
-2. At least one subnet is attached to a NAT gateway. You can attach multiple subnets to a NAT gateway for going outbound, but those subnets must exist within the same virtual network. NAT gateway can't span beyond a single virtual network.
+1. At least one subnet is attached to a NAT gateway. You can attach multiple subnets to a NAT gateway for going outbound, but those subnets must exist within the same virtual network. NAT gateway can't span beyond a single virtual network.
-3. No [NSG rules](../virtual-network/network-security-groups-overview.md#outbound) or UDRs are blocking NAT gateway from directing traffic outbound to the internet.
+1. No [Network Security Group (NSG) rules](../virtual-network/network-security-groups-overview.md#outbound) or User Defined Routes (UDR) are blocking NAT gateway from directing traffic outbound to the internet.
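You can verify these configurations with Azure PowerShell by inspecting the NAT gateway resource directly. The following is a minimal sketch that assumes hypothetical resource names; the properties shown correspond to the checks in the preceding list.

```azurepowershell-interactive
# Assumed names: replace with your own resource group and NAT gateway.
$natGateway = Get-AzNatGateway -ResourceGroupName 'myResourceGroup' -Name 'myNATgateway'

# At least one public IP address or public IP prefix should be listed.
$natGateway.PublicIpAddresses
$natGateway.PublicIpPrefixes

# At least one subnet should be listed, and all subnets must belong to the same virtual network.
$natGateway.Subnets
```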
### How to validate connectivity
-[NAT gateway](./nat-overview.md#azure-nat-gateway-basics) supports IPv4 UDP and TCP protocols. ICMP isn't supported and is expected to fail.
+[NAT gateway](./nat-overview.md#azure-nat-gateway-basics) supports the IPv4 User Datagram Protocol (UDP) and Transmission Control Protocol (TCP). Ping (ICMP) isn't supported and is expected to fail.
To validate end-to-end connectivity of NAT gateway, follow these steps: + 1. Validate that your [NAT gateway public IP address is being used](./quickstart-create-nat-gateway-portal.md#test-nat-gateway).
-2. Conduct TCP connection tests and UDP-specific application layer tests.
+1. Conduct TCP connection tests and UDP-specific application layer tests.
-3. Look at NSG flow logs to analyze outbound traffic flows from NAT gateway.
+1. Look at NSG flow logs to analyze outbound traffic flows from NAT gateway.
-Refer to the table below for which tools to use to validate NAT gateway connectivity.
+Refer to the following table for tools to use to validate NAT gateway connectivity.
| Operating system | Generic TCP connection test | TCP application layer test | UDP | |||||
-| Linux | nc (generic connection test) | curl (TCP application layer test) | application specific |
+| Linux | `nc` (generic connection test) | `curl` (TCP application layer test) | application specific |
| Windows | [PsPing](/sysinternals/downloads/psping) | PowerShell [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest) | application specific | ### How to analyze outbound connectivity
To analyze outbound traffic from NAT gateway, use NSG flow logs. NSG flow logs p
* To learn more about NSG flow logs, see [NSG flow log overview](../network-watcher/network-watcher-nsg-flow-logging-overview.md).
-* For guides on how to enable NSG flow logs, see [Enabling NSG flow logs](../network-watcher/network-watcher-nsg-flow-logging-overview.md#enabling-nsg-flow-logs).
+* For guides on how to enable NSG flow logs, see [Managing NSG flow logs](../network-watcher/network-watcher-nsg-flow-logging-overview.md#managing-nsg-flow-logs).
* For guides on how to read NSG flow logs, see [Working with NSG flow logs](../network-watcher/network-watcher-nsg-flow-logging-overview.md#working-with-flow-logs). ## NAT gateway in a failed state
-You may experience outbound connectivity failure if your NAT gateway resource is in a failed state. To get your NAT gateway out of a failed state, follow these instructions:
+You might experience outbound connectivity failure if your NAT gateway resource is in a failed state. To get your NAT gateway out of a failed state, follow these instructions:
-1. Once you identify the resource that is in a failed state, go to [Azure Resource Explorer](https://resources.azure.com/) and identify the resource in this state.
+1. Identify the resource that is in a failed state. Go to [Azure Resource Explorer](https://resources.azure.com/) and identify the resource in this state.
-2. Update the toggle on the right-hand top corner to Read/Write.
+1. Update the toggle on the right-hand top corner to Read/Write.
-3. Select on Edit for the resource in failed state.
+1. Select Edit for the resource in the failed state.
-4. Select on PUT followed by GET to ensure the provisioning state was updated to Succeeded.
+1. Select PUT followed by GET to ensure the provisioning state is updated to Succeeded.
-5. You can then proceed with other actions as the resource is out of failed state.
+1. You can then proceed with other actions because the resource is out of the failed state.
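If you prefer not to use Azure Resource Explorer, the same GET followed by PUT can be performed with Azure PowerShell. The following is a sketch only; the subscription ID, resource names, and API version are placeholders that you need to adjust.

```azurepowershell-interactive
# Placeholders: substitute your subscription ID, resource group, NAT gateway name,
# and a current Microsoft.Network API version.
$path = '/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/natGateways/myNATgateway?api-version=2023-09-01'

# GET the resource, then PUT the same payload back to refresh the provisioning state.
$response = Invoke-AzRestMethod -Path $path -Method GET
Invoke-AzRestMethod -Path $path -Method PUT -Payload $response.Content

# Confirm the provisioning state is now Succeeded.
(Invoke-AzRestMethod -Path $path -Method GET).Content
```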
## Add or remove NAT gateway
NAT gateway must be detached from all subnets within a virtual network before th
A subnet within a virtual network can't have more than one NAT gateway attached to it for connecting outbound to the internet. An individual NAT gateway resource can be associated to multiple subnets within the same virtual network. NAT gateway can't span beyond a single virtual network.
-### Basic SKU resources can't exist in the same subnet as NAT gateway
+### Basic resources can't exist in the same subnet as NAT gateway
NAT gateway isn't compatible with basic resources, such as Basic Load Balancer or Basic Public IP. Basic resources must be placed on a subnet not associated with a NAT Gateway. Basic Load Balancer and Basic Public IP can be upgraded to standard to work with NAT gateway.
NAT gateway isn't compatible with basic resources, such as Basic Load Balancer o
* To upgrade a basic public IP to standard, see [upgrade from basic public to standard public IP](../virtual-network/ip-services/public-ip-upgrade-portal.md).
-* To upgrade a basic public IP with an attached VM to standard, see [upgrade a basic public IP with an attached VM](/azure/virtual-network/ip-services/public-ip-upgrade-vm).
+* To upgrade a basic public IP with an attached virtual machine to standard, see [upgrade a basic public IP with an attached virtual machine](/azure/virtual-network/ip-services/public-ip-upgrade-vm).
### NAT gateway can't be attached to a gateway subnet NAT gateway can't be deployed in a gateway subnet. A gateway subnet is used by a VPN gateway for sending encrypted traffic between an Azure virtual network and on-premises location. See [VPN gateway overview](../vpn-gateway/vpn-gateway-about-vpngateways.md) to learn more about how gateway subnets are used by VPN gateway.
-### Can't attach NAT gateway to a subnet that contains a virtual machine NIC in a failed state
+### Can't attach NAT gateway to a subnet that contains a virtual machine network interface in a failed state
-When associating a NAT gateway to a subnet that contains a virtual machine network interface (NIC) in a failed state, you receive an error message indicating that this action can't be performed. You must first resolve the VM NIC failed state before you can attach a NAT gateway to the subnet.
+When associating a NAT gateway to a subnet that contains a virtual machine network interface in a failed state, you receive an error message indicating that this action can't be performed. You must first resolve the virtual machine network interface failed state before you can attach a NAT gateway to the subnet.
-To get your virtual machine NIC out of a failed state, you can use one of the two following methods.
+To get your virtual machine network interface out of a failed state, you can use one of the two following methods.
-#### Use PowerShell to get your virtual machine NIC out of a failed state
+#### Use PowerShell to get your virtual machine network interface out of a failed state
-1. Determine the provisioning state of your NICs using the [Get-AzNetworkInterface PowerShell command](/powershell/module/az.network/get-aznetworkinterface#example-2-get-all-network-interfaces-with-a-specific-provisioning-state) and setting the value of the "provisioningState" to "Succeeded".
+1. Determine the provisioning state of your network interfaces using the [Get-AzNetworkInterface PowerShell command](/powershell/module/az.network/get-aznetworkinterface#example-2-get-all-network-interfaces-with-a-specific-provisioning-state) and setting the value of the "provisioningState" to "Succeeded."
-2. Perform [GET/SET PowerShell commands](/powershell/module/az.network/set-aznetworkinterface#example-1-configure-a-network-interface) on the network interface to update the provisioning state.
+1. Perform [GET/SET PowerShell commands](/powershell/module/az.network/set-aznetworkinterface#example-1-configure-a-network-interface) on the network interface. The PowerShell commands update the provisioning state.
-3. Check the results of this operation by checking the provisioning state of your NICs again (follow commands from step 1).
+1. Check the results of this operation by checking the provisioning state of your network interfaces again (follow commands from step 1).
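The following is a minimal Azure PowerShell sketch of these steps, assuming a hypothetical resource group and network interface name.

```azurepowershell-interactive
# Assumed names: replace with your own resource group and network interface.
$nic = Get-AzNetworkInterface -ResourceGroupName 'myResourceGroup' -Name 'myNic'

# Check the current provisioning state.
$nic.ProvisioningState

# Reapply the existing configuration (GET followed by SET) to refresh the provisioning state.
$nic | Set-AzNetworkInterface

# Confirm the network interface now reports Succeeded.
(Get-AzNetworkInterface -ResourceGroupName 'myResourceGroup' -Name 'myNic').ProvisioningState
```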
-#### Use Azure Resource Explorer to get your virtual machine NIC out of a failed state
+#### Use Azure Resource Explorer to get your virtual machine network interface out of a failed state
1. Go to [Azure Resource Explorer](https://resources.azure.com/) (recommended to use Microsoft Edge browser)
-2. Expand Subscriptions (takes a few seconds for it to appear on the left)
+1. Expand Subscriptions (takes a few seconds for it to appear).
-3. Expand your subscription that contains the VM NIC in the failed state
+1. Expand your subscription that contains the virtual machine network interface in the failed state.
-4. Expand resourceGroups
+1. Expand resourceGroups.
-5. Expand the correct resource group that contains the VM NIC in the failed state
+1. Expand the correct resource group that contains the virtual machine network interface in the failed state.
-6. Expand providers
+1. Expand providers.
-7. Expand Microsoft.Network
+1. Expand Microsoft.Network.
-8. Expand networkInterfaces
+1. Expand networkInterfaces.
-9. Select on the NIC that is in the failed provisioning state
+1. Select the network interface that is in the failed provisioning state.
-10. Select the Read/Write button at the top
+1. Select the Read/Write button at the top.
-11. Select the green GET button
+1. Select the green GET button.
-12. Select the blue EDIT button
+1. Select the blue EDIT button.
-13. Select the green PUT button
+1. Select the green PUT button.
-14. Select Read Only button at the top
+1. Select the Read Only button at the top.
-15. The VM NIC should now be in a succeeded provisioning state, you can close your browser
+1. The virtual machine network interface should now be in a succeeded provisioning state. You can close your browser.
## Add or remove public IP addresses
The following IP prefix sizes can be used with NAT gateway:
### IPv6 coexistence
-[NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway can't be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. NAT gateway can be deployed on a dual stack subnet, but only uses IPv4 Public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources. See [Configure dual stack outbound connectivity with NAT gateway and public Load balancer](/azure/virtual-network/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer?tabs=dual-stack-outbound-portal) to learn how to provide IPv4 and IPv6 outbound connectivity from your dual stack subnet.
+[NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway can't be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. NAT gateway can be deployed on a dual stack subnet, but only uses IPv4 Public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources. For more information about how to provide IPv4 and IPv6 outbound connectivity from your dual stack subnet, see [Dual stack outbound connectivity with NAT gateway and public Load balancer](/azure/virtual-network/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer?tabs=dual-stack-outbound-portal).
-### Can't use basic SKU public IPs with NAT gateway
+### Can't use basic public IPs with NAT gateway
-NAT gateway is a standard SKU resource and can't be used with basic SKU resources, including basic public IP addresses. You can upgrade your basic SKU public IP address in order to use with your NAT gateway using the following guidance: [Upgrade a public IP address](../virtual-network/ip-services/public-ip-upgrade-portal.md)
+NAT gateway is a standard resource and can't be used with basic resources, including basic public IP addresses. You can upgrade your basic public IP address to use it with your NAT gateway. For more information, see [Upgrade a public IP address](../virtual-network/ip-services/public-ip-upgrade-portal.md).
### Can't mismatch zones of public IP addresses and NAT gateway
-NAT gateway is a [zonal resource](./nat-availability-zones.md) and can either be designated to a specific zone or to ΓÇÿno zoneΓÇÖ. When NAT gateway is placed in ΓÇÿno zoneΓÇÖ, Azure places the NAT gateway into a zone for you, but you don't have visibility into which zone the NAT gateway is located.
+NAT gateway is a [zonal resource](./nat-availability-zones.md) and can either be designated to a specific zone or to "no zone." When NAT gateway is placed in "no zone," Azure places the NAT gateway into a zone for you, but you don't have visibility into which zone the NAT gateway is located.
NAT gateway can be used with public IP addresses designated to a specific zone, no zone, all zones (zone-redundant) depending on its own availability zone configuration.
NAT gateway can be used with public IP addresses designated to a specific zone,
## More troubleshooting guidance If the issue you're experiencing isn't covered by this article, refer to the other NAT gateway troubleshooting articles:
-* [Troubleshoot outbound connectivity with NAT Gateway](/azure/nat-gateway/troubleshoot-nat-connectivity)
-* [Troubleshoot outbound connectivity with NAT Gateway and other Azure services](/azure/nat-gateway/troubleshoot-nat-and-azure-services)
+
+* [Troubleshoot outbound connectivity with NAT Gateway](/azure/nat-gateway/troubleshoot-nat-connectivity).
+
+* [Troubleshoot outbound connectivity with NAT Gateway and other Azure services](/azure/nat-gateway/troubleshoot-nat-and-azure-services).
## Next steps
-We're always looking to improve the experience of our customers. If you're experiencing issues with NAT gateway that aren't listed or resolved by this article, submit feedback through GitHub via the bottom of this page. We'll address your feedback as soon as possible.
+If you're experiencing issues with NAT gateway not listed or resolved by this article, submit feedback through GitHub via the bottom of this page. We address your feedback as soon as possible to improve the experience of our customers.
To learn more about NAT gateway, see:
-* [Azure NAT Gateway](nat-overview.md)
+* [What is Azure NAT Gateway?](nat-overview.md).
-* [NAT gateway resource](nat-gateway-resource.md)
+* [Azure NAT gateway resource](nat-gateway-resource.md).
-* [Manage NAT gateway](./manage-nat-gateway.md)
+* [Manage a NAT gateway](./manage-nat-gateway.md).
* [Metrics and alerts for NAT gateway resources](nat-metrics.md).
nat-gateway Tutorial Migrate Ilip Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-migrate-ilip-nat.md
Title: 'Tutorial: Migrate a virtual machine public IP address to NAT gateway'
+ Title: 'Tutorial: Migrate a virtual machine public IP address to a NAT gateway'
-description: Learn how to migrate your virtual machine public IP to a NAT gateway.
+description: Use this tutorial to learn how to migrate your virtual machine public IP address to an Azure NAT Gateway.
Previously updated : 5/25/2022 Last updated : 02/14/2024
+# Customer intent: As a network engineer, I want to migrate my virtual machine public IP address to a NAT gateway to improve outbound connectivity.
# Tutorial: Migrate a virtual machine public IP address to Azure NAT Gateway
-In this article, you'll learn how to migrate your virtual machine's public IP address to a NAT gateway. You'll learn how to remove the IP address from the virtual machine. You'll reuse the IP address from the virtual machine for the NAT gateway.
+In this tutorial, you learn how to migrate your virtual machine's public IP address to a NAT gateway. You remove the public IP address from the virtual machine and reuse it for the NAT gateway.
-Azure NAT Gateway is the recommended method for outbound connectivity. Azure NAT Gateway is a fully managed and highly resilient Network Address Translation (NAT) service. A NAT gateway doesn't have the same limitations of SNAT port exhaustion as default outbound access. A NAT gateway replaces the need for a virtual machine to have a public IP address to have outbound connectivity.
+Azure NAT Gateway is the recommended method for outbound connectivity. Azure NAT Gateway is a fully managed and highly resilient Network Address Translation (NAT) service. A NAT gateway doesn't have the same limitations of Source Network Address Translation (SNAT) port exhaustion as default outbound access. A NAT gateway replaces the need for a virtual machine to have a public IP address to have outbound connectivity.
-For more information about Azure NAT Gateway, see [What is Azure NAT Gateway](nat-overview.md)
+For more information about Azure NAT Gateway, see [What is Azure NAT Gateway?](nat-overview.md)
In this tutorial, you learn how to:
In this tutorial, you learn how to:
## Remove public IP from virtual machine
-In this section, you'll learn how to remove the public IP address from the virtual machine.
+In this section, you learn how to remove the public IP address from the virtual machine.
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines**.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines**.
-3. In **Virtual machines**, select **myVM** or your virtual machine.
+1. In **Virtual machines**, select **myVM** or your virtual machine.
-4. In the **Overview** of **myVM**, select **Public IP address**.
+1. In the **Overview** of **myVM**, select **Public IP address**.
:::image type="content" source="./media/tutorial-migrate-ilip-nat/select-public-ip.png" alt-text="Screenshot of virtual machines public IP address.":::
-5. In **myPublicIP**, select the **Overview** page in the left-hand column.
+1. In **myPublicIP**, select the **Overview** page in the left-hand column.
-6. In **Overview**, select **Dissociate**.
+1. In **Overview**, select **Dissociate**.
:::image type="content" source="./media/tutorial-migrate-ilip-nat/remove-public-ip.png" alt-text="Screenshot of virtual machines public IP address overview and removal of IP address.":::
-7. Select **Yes** in **Dissociate public IP address**.
+1. Select **Yes** in **Dissociate public IP address**.
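If you prefer Azure PowerShell over the portal for this step, the following minimal sketch dissociates the public IP address from the virtual machine's network interface. The network interface name `myVMNic` is a hypothetical placeholder.

```azurepowershell-interactive
# Assumed names: replace with your own resource group and network interface.
$nic = Get-AzNetworkInterface -ResourceGroupName 'myResourceGroup' -Name 'myVMNic'

# Remove the public IP association from the first IP configuration, then apply the change.
$nic.IpConfigurations[0].PublicIpAddress = $null
$nic | Set-AzNetworkInterface
```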
### (Optional) Upgrade IP address
-The NAT gateway resource requires a standard SKU public IP address. In this section, you'll upgrade the IP you removed from the virtual machine in the previous section. If the IP address you removed is already a standard SKU public IP, you can proceed to the next section.
+The NAT gateway resource requires a standard public IP address. In this section, you upgrade the IP you removed from the virtual machine in the previous section. If the IP address you removed is already a standard public IP, you can proceed to the next section.
1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses**.
-2. In **Public IP addresses**, select **myPublicIP** or your basic SKU IP address.
+1. In **Public IP addresses**, select **myPublicIP** or your basic IP address.
-3. In the **Overview** of **myPublicIP**, select the IP address upgrade banner.
+1. In the **Overview** of **myPublicIP**, select the IP address upgrade banner.
:::image type="content" source="./media/tutorial-migrate-ilip-nat/select-upgrade-banner.png" alt-text="Screenshot of public IP address upgrade banner.":::
-4. In **Upgrade to Standard SKU**, select the box next to **I acknowledge**. Select the **Upgrade** button.
+1. In **Upgrade to Standard SKU**, select the box next to **I acknowledge**. Select the **Upgrade** button.
:::image type="content" source="./media/tutorial-migrate-ilip-nat/upgrade-public-ip.png" alt-text="Screenshot of upgrade public IP address selection.":::
-5. When the upgrade is complete, proceed to the next section.
+1. When the upgrade is complete, proceed to the next section.
+ ## Create NAT gateway
-In this section, youΓÇÖll create a NAT gateway with the IP address you previously removed from the virtual machine. You'll assign the NAT gateway to your pre-created subnet within your virtual network. The subnet name for this example is **default**.
+In this section, you create a NAT gateway with the IP address you previously removed from the virtual machine. You assign the NAT gateway to your precreated subnet within your virtual network. The subnet name for this example is **default**.
1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways**.
-2. In **NAT gateways**, select **+ Create**.
+1. In **NAT gateways**, select **+ Create**.
-3. In **Create network address translation (NAT) gateway**, enter or select the following information in the **Basics** tab.
+1. In **Create network address translation (NAT) gateway**, enter or select the following information in the **Basics** tab.
| Setting | Value | | - | -- |
In this section, youΓÇÖll create a NAT gateway with the IP address you previousl
| NAT gateway name | Enter **myNATgateway**. | | Region | Select the region of your virtual network. In this example, it's **West US 2**. | | Availability zone | Leave the default of **None**. |
- | Idle timeout (minutes) | Enter **10**. |
+ | Idle timeout (minutes) | Leave the default setting. |
-4. Select the **Outbound IP** tab, or select **Next: Outbound IP** at the bottom of the page.
+1. Select the **Outbound IP** tab, or select **Next: Outbound IP** at the bottom of the page.
-5. In **Public IP addresses** in the **Outbound IP** tab, select the IP address from the previous section in **Public IP addresses**. In this example, it's **myPublicIP**.
+1. In **Public IP addresses** in the **Outbound IP** tab, select the IP address from the previous section in **Public IP addresses**. In this example, it's **myPublicIP**.
-6. Select the **Subnet** tab, or select **Next: Subnet** at the bottom of the page.
+1. Select the **Subnet** tab, or select **Next: Subnet** at the bottom of the page.
-7. In the pull-down box for **Virtual network**, select your virtual network.
+1. In the pull-down box for **Virtual network**, select your virtual network.
-8. In **Subnet name**, select the checkbox for your subnet. In this example, it's **default**.
+1. In **Subnet name**, select the checkbox for your subnet. In this example, it's **default**.
-9. Select the **Review + create** tab, or select **Review + create** at the bottom of the page.
+1. Select the **Review + create** tab, or select **Review + create** at the bottom of the page.
-10. Select **Create**.
+1. Select **Create**.
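The preceding portal steps can also be scripted with Azure PowerShell. The following is a minimal sketch that reuses the public IP address you dissociated earlier. The virtual network name `myVNet` is a hypothetical placeholder; the other names match the examples in this tutorial.

```azurepowershell-interactive
# Assumed names: replace with your own resource group, virtual network, and region.
$publicIp = Get-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' -Name 'myPublicIP'

# Create the NAT gateway with the existing standard public IP address.
$natGateway = New-AzNatGateway -ResourceGroupName 'myResourceGroup' -Name 'myNATgateway' `
    -Location 'westus2' -Sku 'Standard' -PublicIpAddress $publicIp

# Associate the NAT gateway with the 'default' subnet of the virtual network.
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet'
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name 'default'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name 'default' `
    -AddressPrefix $subnet.AddressPrefix -NatGateway $natGateway | Set-AzVirtualNetwork
```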
## Clean up resources
If you're not going to continue to use this application, delete the NAT gateway
4. Enter **myResourceGroup** and select **Delete**.
-## Next steps
+## Next step
In this article, you learned how to:
In this article, you learned how to:
* Create a NAT gateway and use the public IP address from the virtual machine for the NAT gateway resource.
-Any virtual machine created within this subnet won't require a public IP address and will automatically have outbound connectivity. For more information about NAT gateway and the connectivity benefits it provides, see [Design virtual networks with NAT gateway](nat-gateway-resource.md).
+Any virtual machine created within this subnet has outbound connectivity without requiring a public IP address. For more information about NAT gateway and its connectivity benefits, see [Design virtual networks with NAT gateway](nat-gateway-resource.md).
Advance to the next article to learn how to migrate default outbound access to Azure NAT Gateway: > [!div class="nextstepaction"]
network-watcher Diagnose Vm Network Routing Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-cli.md
description: In this article, you learn how to use Azure CLI to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher.
-tags: azure-resource-manager
network-watcher
network-watcher Diagnose Vm Network Routing Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-powershell.md
description: In this article, you learn how to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher.
-tags: azure-resource-manager
network-watcher
network-watcher Network Watcher Analyze Nsg Flow Logs Graylog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-analyze-nsg-flow-logs-graylog.md
Title: Analyze Azure network security group flow logs - Graylog
description: Learn how to manage and analyze network security group flow logs in Azure using Network Watcher and Graylog.
-tags: azure-resource-manager
Last updated 05/03/2023
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Previously updated : 02/07/2024 Last updated : 02/15/2024 #CustomerIntent: As an Azure administrator, I want to learn about NSG flow logs so that I can log my network traffic to analyze and optimize the network performance. # Flow logging for network security groups
-Network security groups flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a [network security group](../virtual-network/network-security-groups-overview.md). Flow data is sent to Azure Storage from where you can access it and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice.
+Network security group (NSG) flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a [network security group](../virtual-network/network-security-groups-overview.md). Flow data is sent to Azure Storage from where you can access it and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice.
:::image type="content" source="./media/network-watcher-nsg-flow-logging-overview/nsg-flow-logs-portal.png" alt-text="Screenshot showing Network Watcher NSG flow logs page in the Azure portal.":::
Here's an example bandwidth calculation for flow tuples from a TCP conversation
For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow's tuple record. In the example conversation, the total number of packets transferred is 1021+52+8005+47 = 9125. The total number of bytes transferred is 588096+29952+4610880+27072 = 5256000.
-## Enabling NSG flow logs
+## Managing NSG flow logs
-For more information about enabling flow logs, see the following guides:
+To learn how to create, change, disable, or delete NSG flow logs, see one of the following guides:
- [Azure portal](./nsg-flow-logging.md) - [PowerShell](./network-watcher-nsg-flow-logging-powershell.md)
For more information about enabling flow logs, see the following guides:
- [REST API](./network-watcher-nsg-flow-logging-rest.md) - [Azure Resource Manager](./network-watcher-nsg-flow-logging-azure-resource-manager.md)
-## Updating parameters
-
-On the Azure portal:
-
-1. Go to the **NSG flow logs** section in Network Watcher.
-1. Select the name of the network security group.
-1. On the settings pane for the NSG flow log, change the parameters that you want.
-1. Select **Save** to deploy the changes.
-
-To update parameters via command-line tools, use the same command that you used to enable flow logs.
- ## Working with flow logs ### Read and export flow logs
+To learn how to read and export NSG flow logs, see one of the following guides:
+ - [Download and view flow logs from the portal](./nsg-flow-logging.md#download-a-flow-log) - [Read flow logs by using PowerShell functions](./network-watcher-read-nsg-flow-logs.md) - [Export NSG flow logs to Splunk](https://www.splunk.com/en_us/blog/platform/splunking-azure-nsg-flow-logs.html)
-NSG flow logs target network security groups and aren't displayed the same way as the other logs. NSG flow logs are stored only in a storage account and follow the logging path shown in the following example:
+NSG flow log files are stored in a storage account at the following path:
``` https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecuritygroupflowevent/resourceId=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{nsgName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json
https://{storageAccountName}.blob.core.windows.net/insights-logs-networksecurity
### Visualize flow logs
+To learn how to visualize NSG flow logs, see one of the following guides:
+ - [Visualize NSG flow logs using Network Watcher traffic analytics](./traffic-analytics.md) - [Visualize NSG flow logs using Power BI](./network-watcher-visualize-nsg-flow-logs-power-bi.md) - [Visualize NSG flow logs using Elastic Stack](./network-watcher-visualize-nsg-flow-logs-open-source-tools.md) - [Manage and analyze NSG flow logs using Grafana](./network-watcher-nsg-grafana.md) - [Manage and analyze NSG flow logs using Graylog](./network-watcher-analyze-nsg-flow-logs-graylog.md)
-### Disable flow logs
-
-When you disable an NSG flow log, you stop the flow logging for the associated network security group. But the flow log continues to exist as a resource, with all its settings and associations. You can enable it anytime to begin flow logging on the configured network security group.
-
-You can disable a flow log using the [Azure portal](nsg-flow-logging.md#disable-a-flow-log), [PowerShell](network-watcher-nsg-flow-logging-powershell.md#disable-a-flow-log), the [Azure CLI](network-watcher-nsg-flow-logging-cli.md#disable-a-flow-log), or the [REST API](/rest/api/network-watcher/flow-logs/create-or-update).
-
-For steps to disable and enable NSG flow logs, see [Configure NSG flow logs](./network-watcher-nsg-flow-logging-powershell.md).
-
-### Delete flow logs
-
-When you delete an NSG flow log, you not only stop the flow logging for the associated network security group but also delete the flow log resource (with all its settings and associations). To begin flow logging again, you must create a new flow log resource for that network security group.
-
-You can delete a flow log using the [Azure portal](nsg-flow-logging.md#delete-a-flow-log), [PowerShell](network-watcher-nsg-flow-logging-powershell.md#delete-a-flow-log), the [Azure CLI](network-watcher-nsg-flow-logging-cli.md#delete-a-flow-log), or the [REST API](/rest/api/network-watcher/flow-logs/delete).
-
-When you delete a network security group, the associated flow log resource is deleted by default.
-
-> [!NOTE]
-> To move a network security group to a different resource group or subscription, you must delete the associated flow logs. Just disabling the flow logs won't work. After you migrate a network security group, you must re-create the flow logs to enable flow logging on it.
- ## Considerations for NSG flow logs ### Storage account
When you delete a network security group, the associated flow log resource is de
### Cost
-NSG flow logging is billed on the volume of logs produced. High traffic volume can result in large flow-log volume which increases the associated costs.
+NSG flow logging is billed on the volume of produced logs. High traffic volume can result in large flow-log volume, which increases the associated costs.
-NSG flow log pricing doesn't include the underlying costs of storage. Using the retention policy feature with NSG flow logs means incurring separate storage costs for extended periods of time.
-
-If you want to retain data forever and don't want to apply a retention policy, set retention days to 0. For more information, see [Network Watcher Pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
+NSG flow log pricing doesn't include the underlying costs of storage. Retaining NSG flow logs data forever or using the retention policy feature means incurring storage costs for extended periods of time.
### Non-default inbound TCP rules
Network security groups are implemented as a [stateful firewall](https://en.wiki
Flows affected by non-default inbound rules become non-terminating. Additionally, byte and packet counts aren't recorded for these flows. Because of those factors, the number of bytes and packets reported in NSG flow logs (and Network Watcher traffic analytics) could be different from actual numbers.
-You can resolve this difference by setting the `FlowTimeoutInMinutes` property on the associated virtual networks to a non-null value. You can achieve default stateful behavior by setting `FlowTimeoutInMinutes` to 4 minutes. For long-running connections where you don't want flows to disconnect from a service or destination, you can set `FlowTimeoutInMinutes` to a value of up to 30 minutes. Use [Get-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to set `FlowTimeoutInMinutes` property:
+You can resolve this difference by setting the `FlowTimeoutInMinutes` property on the associated virtual networks to a non-null value. You can achieve default stateful behavior by setting `FlowTimeoutInMinutes` to 4 minutes. For long-running connections where you don't want flows to disconnect from a service or destination, you can set `FlowTimeoutInMinutes` to a value of up to 30 minutes. Use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to set `FlowTimeoutInMinutes` property:
-```powershell
-$virtualNetwork = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
+```azurepowershell-interactive
+$virtualNetwork = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup'
$virtualNetwork.FlowTimeoutInMinutes = 4 $virtualNetwork | Set-AzVirtualNetwork ```
We don't recommend that you log flows on an Azure ExpressRoute gateway subnet be
### Traffic to a private endpoint
-Traffic to private endpoints can only be captured at source VM, the traffic is recorded with source IP address of the VM and destination IP address of the private endpoint. Traffic can't be recorded at the private endpoint itself due to platform limitations.
+Traffic to private endpoints can only be captured at the source VM. The traffic is recorded with the source IP address of the VM and the destination IP address of the private endpoint. Traffic can't be recorded at the private endpoint itself due to platform limitations.
### Support for network security groups associated to Application Gateway v2 subnet
Currently, these Azure services don't support NSG flow logs:
### I can't enable NSG flow logs
-If you get an "AuthorizationFailed" or "GatewayAuthenticationFailed" error, you might not have enabled the **Microsoft.Insights** resource provider on your subscription. For more information, see [Register Insights provider](./nsg-flow-logging.md#register-insights-provider).
+You might get an *AuthorizationFailed* or *GatewayAuthenticationFailed* error if you didn't enable the **Microsoft.Insights** resource provider on your subscription before trying to enable NSG flow logs. For more information, see [Register Insights provider](nsg-flow-logging.md#register-insights-provider).
### I enabled NSG flow logs but don't see data in my storage account
This problem might be related to:
## Pricing
-NSG flow logs are charged per gigabyte of *Network flow logs collected* and come with a free tier of 5 GB/month per subscription. If traffic analytics is enabled with NSG flow logs, traffic analytics pricing applies at per gigabyte processing rates. Traffic analytics isn't offered with a free tier of pricing. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+NSG flow logs are charged per gigabyte of ***Network flow logs collected*** and come with a free tier of 5 GB/month per subscription.
+
+If traffic analytics is enabled with NSG flow logs, traffic analytics pricing applies at per gigabyte processing rates. Traffic analytics isn't offered with a free tier of pricing. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
Storage of logs is charged separately. For more information, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
network-watcher Network Watcher Nsg Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-grafana.md
description: Manage and analyze Network Security Group Flow Logs in Azure using Network Watcher and Grafana.
-tags: azure-resource-manager
Last updated 05/03/2023
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
Title: VNet flow logs (preview)
+ Title: VNet flow logs (Preview)
description: Learn about Azure Network Watcher VNet flow logs and how to use them to record your virtual network's traffic. Previously updated : 01/16/2024 Last updated : 02/16/2024 #CustomerIntent: As an Azure administrator, I want to learn about VNet flow logs so that I can log my network traffic to analyze and optimize network performance.
-# VNet flow logs (preview)
+# VNet flow logs (Preview)
Virtual network (VNet) flow logs are a feature of Azure Network Watcher. You can use them to log information about IP traffic flowing through a virtual network.
Here's an example bandwidth calculation for flow tuples from a TCP conversation
For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow's tuple record. In the example conversation, the total number of packets transferred is 1,021 + 52 + 8,005 + 47 = 9,125. The total number of bytes transferred is 588,096 + 29,952 + 4,610,880 + 27,072 = 5,256,000.
-## Considerations for VNet flow logs
-
-### Storage account
+## Storage account considerations for VNet flow logs
- **Location**: The storage account must be in the same region as the virtual network.-- **Subscription**: The storage account must be in either:-
- - The same subscription as the virtual network.
- - A subscription that's associated with the same Microsoft Entra tenant as the virtual network's subscription.
+- **Subscription**: The storage account must be in the same subscription of the virtual network or in a subscription associated with the same Microsoft Entra tenant of the virtual network's subscription.
- **Performance tier**: The storage account must be standard. Premium storage accounts aren't supported. - **Self-managed key rotation**: If you change or rotate the access keys to your storage account, VNet flow logs stop working. To fix this problem, you must disable and then re-enable VNet flow logs.
-### Cost
-
-VNet flow logs are billed on the volume of logs produced. High traffic volume can result in large-flow log volume and the associated costs.
-
-Pricing of VNet flow logs doesn't include the underlying costs of storage. Using the retention policy feature with VNet flow logs means incurring separate storage costs for extended periods of time.
+## Pricing
-If you want to retain data forever and don't want to apply any retention policy, set retention days to zero. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
+Currently, VNet flow logs aren't billed. However, the following costs apply:
-## Pricing
+If traffic analytics is enabled for VNet flow logs, traffic analytics pricing applies at per gigabyte processing rates. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
-Currently, VNet flow logs aren't billed. In the future, VNet flow logs will be billed per gigabyte of *network logs collected* and will come with a free tier of 5 GB/month per subscription. If enable traffic analytics for VNet flow logs, existing pricing for traffic analytics applies. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+Flow logs are stored in a storage account, and their retention policy can be set from one day to 365 days. If a retention policy isn't set, the logs are maintained forever. Pricing of VNet flow logs doesn't include the costs of storage. For more information, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
## Availability VNet flow logs are available in the following regions during the preview: -- East US 2 EUAP-- Central US EUAP
+- Central US EUAP<sup>1</sup>
+- East US<sup>1</sup>
+- East US 2<sup>1</sup>
+- East US 2 EUAP<sup>1</sup>
+- Swiss North
+- UK South
- West Central US-- East US-- East US 2-- West US-- West US 2
+- West US<sup>1</sup>
+- West US 2<sup>1</sup>
-To get access to the preview, go to the [VNet flow logs preview sign-up page](https://aka.ms/VNetflowlogspreviewsignup).
+<sup>1</sup> Requires signing up for access to the preview. Fill out the [VNet flow logs preview sign-up form](https://aka.ms/VNetflowlogspreviewsignup) to access the preview.
## Related content -- To learn how to manage VNet flow logs, see [Create, change, enable, disable, or delete VNet flow logs using Azure PowerShell](vnet-flow-logs-powershell.md) or [Create, change, enable, disable, or delete VNet flow logs using the Azure CLI](vnet-flow-logs-cli.md).
+- To learn how to create, change, enable, disable, or delete VNet flow logs, see [Manage VNet flow logs using Azure PowerShell](vnet-flow-logs-powershell.md) or [Manage VNet flow logs using the Azure CLI](vnet-flow-logs-cli.md).
- To learn about traffic analytics, see [Traffic analytics overview](traffic-analytics.md) and [Schema and data aggregation in Azure Network Watcher traffic analytics](traffic-analytics-schema.md). - To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md).
networking Check Usage Against Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/check-usage-against-limits.md
description: Learn how to check your Azure resource usage against Azure subscrip
-tags: azure-resource-manager
open-datasets Dataset Gatk Resource Bundle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-gatk-resource-bundle.md
This dataset is stored in the West US 2 and West Central US Azure regions. Alloc
[SAS Token](../storage/common/storage-sas-overview.md): ?sv=2020-04-08&si=prod&sr=c&sig=DQxmjB4D1lAfOW9AxIWbXwZx6ksbwjlNkixw597JnvQ%3D
-5. datasetbroadpublic
+ South Central US: 'https://datasetpublicbroadrefsc.blob.core.windows.net/dataset'
+
+ [SAS Token](../storage/common/storage-sas-overview.md): ?sv=2023-01-03&st=2024-02-12T19%3A56%3A11Z&se=2029-02-13T19%3A56%3A00Z&sr=c&sp=rl&sig=oGiNUGZ08PaabHVNtIiVEpJ1kcyqcL6ZadQcuN2ns%2FM%3D
+
+6. datasetbroadpublic
West US 2: 'https://datasetbroadpublic.blob.core.windows.net/dataset'
This dataset is stored in the West US 2 and West Central US Azure regions. Alloc
[SAS Token](../storage/common/storage-sas-overview.md): ?sv=2020-04-08&si=prod&sr=c&sig=u%2Bg2Ab7WKZEGiAkwlj6nKiEeZ5wdoJb10Az7uUwis%2Fg%3D
+ South Central US: 'https://datasetbroadpublicsc.blob.core.windows.net/dataset'
+
+ [SAS Token](../storage/common/storage-sas-overview.md): ?sv=2023-01-03&st=2024-02-12T19%3A58%3A33Z&se=2029-02-13T19%3A58%3A00Z&sr=c&sp=rl&sig=C2lDhe1uwu%2FJnC9rbQO65G6%2BdEUQ%2Fl0VheXrlnIQVAs%3D
+ ## Use Terms Visit the [GATK resource bundle official site](https://gatk.broadinstitute.org/hc/articles/360035890811-Resource-bundle).
open-datasets Dataset Human Reference Genomes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-human-reference-genomes.md
This dataset contains approximately 10 GB of data and is updated daily.
## Storage location
-This dataset is stored in the West US 2 and West Central US Azure regions. Allocating compute resources in West US 2 or West Central US is recommended for affinity.
+This dataset is stored in the West US 2, West Central US, and South Central US Azure regions. Allocating compute resources in West US 2, West Central US, or South Central US is recommended for affinity.
## Data Access
West Central US: 'https://datasetreferencegenomes-secondary.blob.core.windows.ne
[SAS Token](../storage/common/storage-sas-overview.md): sv=2019-02-02&se=2050-01-01T08%3A00%3A00Z&si=prod&sr=c&sig=JtQoPFqiC24GiEB7v9zHLi4RrA2Kd1r%2F3iFt2l9%2FlV8%3D
+South Central US: 'https://datasetreferencegenomesc.blob.core.windows.net/dataset'
+
+[SAS Token](../storage/common/storage-sas-overview.md): sv=2023-01-03&st=2024-02-12T20%3A07%3A21Z&se=2029-02-13T20%3A07%3A00Z&sr=c&sp=rl&sig=ASZYVyhqLOXKsT%2BcTR8MMblFeI4uZ%2Bnno%2FCnQk2RaFs%3D
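As an illustration of how a container URL and SAS token are typically combined, here's a hedged AzCopy sketch; substitute the token shown above for the placeholder, and treat the command as a pattern rather than a verified credential.

```bash
# Sketch: list the contents of the South Central US container by appending the SAS token
# (the part beginning with sv=) to the container URL after a '?'. Requires azcopy installed locally.
azcopy list "https://datasetreferencegenomesc.blob.core.windows.net/dataset?<SAS-token>"
```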
+ ## Use Terms Data is available without restrictions. For more information and citation details, see the [NCBI Reference Sequence Database site](https://www.ncbi.nlm.nih.gov/refseq/).
blob_service_client.get_blob_to_path('dataset/vertebrate_mammalian/Homo_sapiens/
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-restrict-egress.md
Last updated 10/10/2023
# Control egress traffic for your Azure Red Hat OpenShift (ARO) cluster
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
- This article provides the necessary details that allow you to secure outbound traffic from your Azure Red Hat OpenShift cluster (ARO). With the release of the [Egress Lockdown Feature](./concepts-egress-lockdown.md), all of the required connections for an ARO cluster are proxied through the service. There are additional destinations that you may want to allow to use features such as Operator Hub or Red Hat telemetry. > [!IMPORTANT]
For additional information on remote health monitoring and telemetry, see the [R
### Azure Monitor container insights ARO clusters can be monitored using the Azure Monitor container insights extension. Review the pre-requisites and instructions for [enabling the extension](../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md).--
-<!-- @todo Migrate this to a secondary article if we find customer demand.
-## Private ARO cluster setup
-The goal is to secure ARO cluster by routing Egress traffic through an Azure Firewall
-### Before:
-![Before](media/concepts-networking/aro-private.jpg)
-### After:
-![After](media/concepts-networking/aro-fw.jpg)
-
-## Create a private ARO cluster
-
-### Set up VARS for your environment
-```bash
-
-CLUSTER=aro-cluster # Name of your created cluster
-RESOURCEGROUP=aro-rg # The name of your resource group where you created the ARO cluster
-AROVNET=aro-vnet # The name of your vnet from your created ARO cluster
-JUMPSUBNET=jump-subnet
-LOCATION=eastus # The location where ARO cluster is deployed
-
-```
-
-### Create a resource group
-```azurecli
-az group create -g "$RESOURCEGROUP" -l $LOCATION
-```
-
-### Create the virtual network
-```azurecli
-az network vnet create \
- -g $RESOURCEGROUP \
- -n $AROVNET \
- --address-prefixes 10.0.0.0/8
-```
-
-### Add two empty subnets to your virtual network
-```azurecli
- az network vnet subnet create \
- -g "$RESOURCEGROUP" \
- --vnet-name $AROVNET \
- -n "$CLUSTER-master" \
- --address-prefixes 10.10.1.0/24 \
- --service-endpoints Microsoft.ContainerRegistry
-
- az network vnet subnet create \
- -g $RESOURCEGROUP \
- --vnet-name $AROVNET \
- -n "$CLUSTER-worker" \
- --address-prefixes 10.20.1.0/24 \
- --service-endpoints Microsoft.ContainerRegistry
-```
-
-### Disable network policies for Private Link Service on your virtual network and subnets. This is a requirement for the ARO service to access and manage the cluster.
-```azurecli
-az network vnet subnet update \
- -g "$RESOURCEGROUP" \
- --vnet-name $AROVNET \
- -n "$CLUSTER-master" \
- --disable-private-link-service-network-policies true
-```
-### Create a Firewall Subnet
-```azurecli
-az network vnet subnet create \
- -g "$RESOURCEGROUP" \
- --vnet-name $AROVNET \
- -n "AzureFirewallSubnet" \
- --address-prefixes 10.100.1.0/26
-```
-
-## Create a jump-host VM
-### Create a jump-subnet
-```azurecli
- az network vnet subnet create \
- -g "$RESOURCEGROUP" \
- --vnet-name $AROVNET \
- -n $JUMPSUBNET \
- --address-prefixes 10.30.1.0/24 \
- --service-endpoints Microsoft.ContainerRegistry
-```
-### Create a jump-host VM
-```azurecli
-VMUSERNAME=aroadmin
-
-az vm create --name ubuntu-jump \
- --resource-group $RESOURCEGROUP \
- --generate-ssh-keys \
- --admin-username $VMUSERNAME \
- --image Ubuntu2204 \
- --subnet $JUMPSUBNET \
- --public-ip-address jumphost-ip \
- --vnet-name $AROVNET
-```
-
-## Create an Azure Red Hat OpenShift cluster
-### Get a Red Hat pull secret (optional)
-
-A Red Hat pull secret enables your cluster to access Red Hat container registries along with other content. This step is optional but recommended.
-
-1. **[Go to your Red Hat OpenShift cluster manager portal](https://cloud.redhat.com/openshift/install/azure/aro-provisioned) and log in.**
-
- You will need to log in to your Red Hat account or create a new Red Hat account with your business email and accept the terms and conditions.
-
-2. **Click Download pull secret.**
-
-Keep the saved `pull-secret.txt` file somewhere safe - it will be used in each cluster creation.
-
-When running the `az aro create` command, you can reference your pull secret using the `--pull-secret @pull-secret.txt` parameter. Execute `az aro create` from the directory where you stored your `pull-secret.txt` file. Otherwise, replace `@pull-secret.txt` with `@<path-to-my-pull-secret-file`.
-
-If you're copying your pull secret or referencing it in other scripts, format your pull secret as a valid JSON string.
-
-```azurecli
-az aro create \
- -g "$RESOURCEGROUP" \
- -n "$CLUSTER" \
- --vnet $AROVNET \
- --master-subnet "$CLUSTER-master" \
- --worker-subnet "$CLUSTER-worker" \
- --apiserver-visibility Private \
- --ingress-visibility Private \
- --pull-secret @pull-secret.txt
-```
-
-## Create an Azure Firewall
-
-### Create a public IP Address
-```azurecli
-az network public-ip create -g $RESOURCEGROUP -n fw-ip --sku "Standard" --location $LOCATION
-```
-### Update install Azure Firewall extension
-```azurecli
-az extension add -n azure-firewall
-az extension update -n azure-firewall
-```
-
-### Create Azure Firewall and configure IP Config
-```azurecli
-az network firewall create -g $RESOURCEGROUP -n aro-private -l $LOCATION
-az network firewall ip-config create -g $RESOURCEGROUP -f aro-private -n fw-config --public-ip-address fw-ip --vnet-name $AROVNET
-
-```
-
-### Capture Azure Firewall IPs for a later use
-```azurecli
-FWPUBLIC_IP=$(az network public-ip show -g $RESOURCEGROUP -n fw-ip --query "ipAddress" -o tsv)
-FWPRIVATE_IP=$(az network firewall show -g $RESOURCEGROUP -n aro-private --query "ipConfigurations[0].privateIPAddress" -o tsv)
-
-echo $FWPUBLIC_IP
-echo $FWPRIVATE_IP
-```
-
-### Create a UDR and Routing Table for Azure Firewall
-```azurecli
-az network route-table create -g $RESOURCEGROUP --name aro-udr
-
-az network route-table route create -g $RESOURCEGROUP --name aro-udr --route-table-name aro-udr --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address $FWPRIVATE_IP
-```
-
-### Add Application Rules for Azure Firewall
-Example rule for telemetry to work. Additional possibilities are listed [here](https://docs.openshift.com/container-platform/4.3/installing/install_config/configuring-firewall.html#configuring-firewall_configuring-firewall):
-```azurecli
-az network firewall application-rule create -g $RESOURCEGROUP -f aro-private \
- --collection-name 'ARO' \
- --action allow \
- --priority 100 \
- -n 'required' \
- --source-addresses '*' \
- --protocols 'http=80' 'https=443' \
- --target-fqdns 'cert-api.access.redhat.com' 'api.openshift.com' 'api.access.redhat.com' 'infogw.api.openshift.com'
-```
-Optional rules for Docker images:
-```azurecli
-az network firewall application-rule create -g $RESOURCEGROUP -f aro-private \
- --collection-name 'Docker' \
- --action allow \
- --priority 200 \
- -n 'docker' \
- --source-addresses '*' \
- --protocols 'http=80' 'https=443' \
- --target-fqdns '*cloudflare.docker.com' '*registry-1.docker.io' 'apt.dockerproject.org' 'auth.docker.io'
-```
-
-### Associate ARO Subnets to FW
-```azurecli
-az network vnet subnet update -g $RESOURCEGROUP --vnet-name $AROVNET --name "$CLUSTER-master" --route-table aro-udr
-az network vnet subnet update -g $RESOURCEGROUP --vnet-name $AROVNET --name "$CLUSTER-worker" --route-table aro-udr
-```
-
-## Test the configuration from the Jumpbox
-These steps work only if you added rules for Docker images.
-### Configure the jumpbox
-Log in to a jumpbox VM and install `azure-cli`, `oc-cli`, and `jq` utils. For the installation of openshift-cli, check the Red Hat customer portal.
-```bash
-#Install Azure-cli
-curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
-#Install jq
-sudo apt install jq -y
-```
-### Log in to the ARO cluster
-List cluster credentials:
-```bash
-
-# Login to Azure
-az login
-# Set Vars in Jumpbox
-CLUSTER=aro-cluster # Name of your created cluster
-RESOURCEGROUP=aro-rg # The name of your resource group where you created the ARO cluster
-
-#Get the cluster credentials
-ARO_PASSWORD=$(az aro list-credentials -n $CLUSTER -g $RESOURCEGROUP -o json | jq -r '.kubeadminPassword')
-ARO_USERNAME=$(az aro list-credentials -n $CLUSTER -g $RESOURCEGROUP -o json | jq -r '.kubeadminUsername')
-```
-Get an API server endpoint:
-```azurecli
-ARO_URL=$(az aro show -n $CLUSTER -g $RESOURCEGROUP -o json | jq -r '.apiserverProfile.url')
-```
-
-### Download the oc CLI to the jumpbox
-```bash
-cd ~
-wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz
-
-mkdir openshift
-tar -zxvf openshift-client-linux.tar.gz -C openshift
-echo 'export PATH=$PATH:~/openshift' >> ~/.bashrc && source ~/.bashrc
-```
-
-Log in using `oc login`:
-```bash
-oc login $ARO_URL -u $ARO_USERNAME -p $ARO_PASSWORD
-```
-
-### Run CentOS to test outside connectivity
-Create a pod
-```bash
-cat <<EOF | oc apply -f -
-apiVersion: v1
-kind: Pod
-metadata:
- name: centos
-spec:
- containers:
- - name: centos
- image: centos
- ports:
- - containerPort: 80
- command:
- - sleep
- - "3600"
-EOF
-```
-Once the pod is running, exec into it and test outside connectivity.
-
-```bash
-oc exec -it centos -- /bin/bash
-curl microsoft.com
-```
-
-## Access the web console of the private cluster
-
-### Set up ssh forwards commands
-
-```bash
-sudo ssh -i $SSH_PATH -L 443:$CONSOLE_URL:443 aroadmin@$JUMPHOST
-
-example:
-sudo ssh -i /Users/jimzim/.ssh/id_rsa -L 443:console-openshift-console.apps.d5xm5iut.eastus.aroapp.io:443 aroadmin@104.211.18.56
-```
-
-### Modify the etc. hosts file on your local machine
-```bash
-##
-# Host Database
-#
-127.0.0.1 console-openshift-console.apps.d5xm5iut.eastus.aroapp.io
-127.0.0.1 oauth-openshift.apps.d5xm5iut.eastus.aroapp.io
-```
-
-### Use sshuttle as another option
-
-[SSHuttle](https://github.com/sshuttle/sshuttle)
--
-## Clean up resources
-
-```azurecli
-
-# Clean up the ARO cluster, vnet, firewall and jumpbox
-
-# Remove udr from master and worker subnets first or will get error when deleting ARO cluster
-az network vnet subnet update --vnet-name $AROVNET -n aro-cluster-master -g $RESOURCEGROUP --route-table aro-udr --remove routeTable
-az network vnet subnet update --vnet-name $AROVNET -n aro-cluster-worker -g $RESOURCEGROUP --route-table aro-udr --remove routeTable
-
-# Remove ARO Cluster
-az aro delete -n $CLUSTER -g $RESOURCEGROUP
-
-# Remove the resource group that contains the firewall, jumpbox and vnet
-az group delete -n $RESOURCEGROUP
-``` -->
operator-nexus Howto Use Mde Runtime Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-mde-runtime-protection.md
Previously updated : 02/05/2024 Last updated : 02/15/2024
export CLUSTER_NAME="contoso-cluster"
## Defaults for MDE Runtime Protection

The runtime protection sets the following default values when you deploy a cluster:
-- Enforcement Level: `OnDemand` if not specified when creating the cluster
-- MDE Service: `Enabled`
+- Enforcement Level: `Disabled` if not specified when creating the cluster
+- MDE Service: `Disabled`
> [!NOTE]
> The argument `--runtime-protection enforcement-level="<enforcement level>"` serves two purposes: enabling/disabling MDE service and updating the enforcement level.
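As a rough sketch of that argument in use (the `az networkcloud cluster update` command group and the resource group variable are assumptions; only the `--runtime-protection` argument comes from the note above):

```azurecli
# Sketch only: change the enforcement level on an existing cluster.
# CLUSTER_NAME is exported earlier in the article; RESOURCE_GROUP is a placeholder.
az networkcloud cluster update \
  --name "$CLUSTER_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --runtime-protection enforcement-level="OnDemand"
```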
payment-hsm Certification Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/certification-compliance.md
description: Information on Azure Payment HSM certification and compliance
-tags: azure-resource-manager
Last updated 01/31/2024
payment-hsm Deployment Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/deployment-scenarios.md
description: Azure HSM deployment scenarios for high availability deployment and
-tags: azure-resource-manager
Last updated 03/25/2023
payment-hsm Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/getting-started.md
description: Information to begin using Azure Payment HSM
-tags: azure-resource-manager
Last updated 01/30/2024
payment-hsm Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/known-issues.md
description: Azure Payment HSM known issues
-tags: azure-resource-manager
Last updated 01/31/2024
payment-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/overview.md
Title: What is Azure Payment HSM?
description: Learn how Azure Payment HSM is an Azure service that provides cryptographic key operations for real-time, critical payment transactions
-tags: azure-resource-manager
payment-hsm Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-template.md
Last updated 01/30/2024
-tags: azure-resource-manager
#Customer intent: As a security admin who is new to Azure, I want to create a payment HSM using an Azure Resource Manager template.
payment-hsm Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/solution-design.md
description: Learn about topologies and constraints for Azure Payment HSM
-tags: azure-resource-manager
Last updated 01/30/2024
payment-hsm Support Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/support-guide.md
description: Azure Payment HSM Service support guide
-tags: azure-resource-manager
Last updated 01/30/2024
postgresql Generative Ai Azure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-overview.md
azure_ai.version()
### Examples
-#### Set the Endpoint and an API Key for Azure Open AI
+#### Set the Endpoint and an API Key for Azure OpenAI
```postgresql
select azure_ai.set_setting('azure_openai.endpoint','https://<endpoint>.openai.azure.com');
select azure_ai.set_setting('azure_openai.subscription_key', '<API Key>');
```
-#### Get the Endpoint and API Key for Azure Open AI
+#### Get the Endpoint and API Key for Azure OpenAI
```postgresql
select azure_ai.get_setting('azure_openai.endpoint');
-select azure_ai.get_setting('azure_openai. subscription_key');
+select azure_ai.get_setting('azure_openai.subscription_key');
```

#### Check the Azure AI extension version
postgresql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-create-users.md
This article describes how you can create users within an Azure Database for PostgreSQL flexible server instance.
-> [!NOTE]
-> Microsoft Entra authentication for Azure Database for PostgreSQL flexible server is currently in preview.
Suppose you want to learn how to create and manage Azure subscription users and their privileges. In that case, you can visit the [Azure role-based access control (Azure RBAC) article](../../role-based-access-control/built-in-roles.md) or review [how to customize roles](../../role-based-access-control/custom-roles.md).
postgresql Best Practices Seamless Migration Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/best-practices-seamless-migration-single-to-flexible.md
To get an idea of the downtime required for migrating your server, we strongly r
## Set up Online migration parameters > [!NOTE]
-> For Online migrations using Single servers running PostgreSQL 9.5 and 9.6, we explicitly have to allow replication connection. To enable that, add a firewall entry to allowlist connection from target. Make sure the firewall rule name has `_replrule` suffix. The suffic isn't required for Single servers running PostgreSQL 10 and 11. Support for **Online** migrations is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> For Online migrations using Single servers running PostgreSQL 9.5 and 9.6, we explicitly have to allow replication connection. To enable that, add a firewall entry to allowlist connection from target. Make sure the firewall rule name has `_replrule` suffix. The suffix isn't required for Single servers running PostgreSQL 10 and 11. **Online migrations preview** is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
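Relating to the `_replrule` firewall rule mentioned in the note above, a hedged Azure CLI sketch follows; the server, resource group, and target IP are placeholders, and only the suffix requirement is taken from the note.

```azurecli
# Sketch: allowlist the target's IP on the source Single Server for replication.
# For PostgreSQL 9.5/9.6 sources, the rule name must end with the _replrule suffix.
az postgres server firewall-rule create \
  --resource-group mySourceResourceGroup \
  --server-name mySingleServer \
  --name AllowTargetConnection_replrule \
  --start-ip-address <target-ip> \
  --end-ip-address <target-ip>
```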
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
The following table lists the different tools available for performing the migra
| pg_dump and pg_restore | Offline | - Tried and tested tool that is in use for a long time<br />- Suited for databases of size less than 10 GB<br />| - Need prior knowledge of setting up and using this tool<br />- Slow when compared to other tools<br />Significant downtime to your application. | > [!NOTE]
-> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. Support for **Online** migrations is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. **Online migrations preview** is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="media\concepts-single-to-flexible\online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="media\concepts-single-to-flexible\online-migration-feature-switch.png":::
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
Note these important points for the command response:
- The migration moves to the `Succeeded` state as soon as the `Migrating Data` substate finishes successfully. If there's a problem at the `Migrating Data` substate, the migration moves into a `Failed` state. > [!NOTE]
-> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. Support for **Online** migrations is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. **Online migrations preview** is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png"::: #### Setup replication
-If **Online** migration is selected, it requires Logical replication to be turned on in the source Single server. If it isn't turned on, the migration tool automatically turns on logical replication at the source Single server when the `SetupLogicalReplicationOnSourceDBIfNeeded` parameter is passed with a value of `true` in the accompanying JSON file. Replication can also be set up manually at the source after starting the migration, using the below command. Note that either approach of turning on logical replication restarts the source Single server.
+If **Online** migration preview is selected, it requires Logical replication to be turned on in the source Single server. If it isn't turned on, the migration tool automatically turns on logical replication at the source Single server when the `SetupLogicalReplicationOnSourceDBIfNeeded` parameter is passed with a value of `true` in the accompanying JSON file. Replication can also be set up manually at the source after starting the migration, using the below command. Note that either approach of turning on logical replication restarts the source Single server.
For example:
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
The first tab is **Setup**. Just in case you missed it, allowlist necessary exte
It's always a good practice to choose **Validate** or **Validate and Migrate** option to perform pre-migration validations before running the migration. To learn more about the pre-migration validation refer to this [documentation](./concepts-single-to-flexible.md#pre-migration-validations).
-**Migration mode** gives you the option to pick the mode for the migration. **Offline** is the default option. Support for **Online** migrations is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+**Migration mode** gives you the option to pick the mode for the migration. **Offline** is the default option. **Online migrations preview** is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
-If **Online** migration is selected, it requires Logical replication to be turned on in the source Single server. If it's not turned on, the migration tool automatically turns on logical replication at the source Single server. Replication can also be set up manually under **Replication** tab in the Single server side pane by setting the Azure replication support level to **Logical**. Either approach restarts the source single server.
+If **Online** migration preview is selected, it requires Logical replication to be turned on in the source Single server. If it's not turned on, the migration tool automatically turns on logical replication at the source Single server. Replication can also be set up manually under **Replication** tab in the Single server side pane by setting the Azure replication support level to **Logical**. Either approach restarts the source single server.
Select the **Next : Connect to Source** button.
You can see the results of **Validate and Migrate** once the operation is comple
:::image type="content" source="./media/concepts-single-to-flexible/validate-and-migrate-1.png" alt-text="Screenshot showing validations tab in details page." lightbox="./media/concepts-single-to-flexible/validate-and-migrate-1.png":::
-### Online migration
+### Online migration preview
> [!NOTE]
-> Support for **Online** migrations is currently available in Central US, France Central, Germany West Central, North Central US, South Central US, North Europe, all West US regions, UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> **Online migrations preview** is currently available in Central US, France Central, Germany West Central, North Central US, South Central US, North Europe, all West US regions, UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
-tags: azure-resource-manager
zone_pivot_groups: ase-pro-version
private-5g-core Enable Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/enable-azure-active-directory.md
You'll need to apply your Kubernetes Secret Objects if you're enabling Microsoft
1. Sign in to [Azure Cloud Shell](../cloud-shell/overview.md) and select **PowerShell**. If this is your first time accessing your cluster via Azure Cloud Shell, follow [Access your cluster](../azure-arc/kubernetes/cluster-connect.md?tabs=azure-cli) to configure kubectl access. 1. Apply the Secret Object for both distributed tracing and the packet core dashboards, specifying the core kubeconfig filename.
- `kubectl apply -f /home/centos/secret-azure-ad-local-monitoring.yaml --kubeconfig=<core kubeconfig>`
+ `kubectl apply -f $HOME/secret-azure-ad-local-monitoring.yaml --kubeconfig=<core kubeconfig>`
1. Use the following commands to verify if the Secret Objects were applied correctly, specifying the core kubeconfig filename. You should see the correct **Name**, **Namespace**, and **Type** values, along with the size of the encoded values.
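A hedged sketch of such a verification follows; the Secret name and namespace are placeholders for whatever you defined in the YAML file, and the article's exact commands may differ.

```bash
# Sketch: list Secrets across namespaces, then inspect the one you applied.
# 'describe' shows the Name, Namespace, Type, and the byte size of each encoded value.
kubectl get secrets --all-namespaces --kubeconfig=<core kubeconfig>
kubectl describe secret <secret-name> --namespace <namespace> --kubeconfig=<core kubeconfig>
```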
reliability Reliability App Gateway Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-gateway-containers.md
+
+ Title: Reliability in Azure Application Gateway for Containers
+description: Find out about reliability in Azure Application Gateway for Containers.
++++++ Last updated : 02/07/2024 +++
+# Reliability in Azure Application Gateway for Containers
++
+This article describes reliability and availability zones support in [Azure Application Gateway for Containers](/azure/application-gateway/for-containers/overview). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
++
+## Availability zone support
+++
+Application Gateway for Containers (AGC) is always deployed in a highly available configuration. For Azure regions that support availability zones, AGC is automatically configured as zone redundant. For regions that don't support availability zones, availability sets are used.
+
+### Prerequisites
+
+To deploy with availability zone support, you must choose a region that supports availability zones. To see which regions support availability zones, see the [list of supported regions](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
++
+## Next steps
+
+- [Reliability in Azure](/azure/availability-zones/overview)
reliability Reliability Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-elastic-san.md
The latency differences between an elastic SAN on LRS and an elastic SAN on ZRS
### Availability zone redeployment and migration
-To migrate an elastic SAN on LRs to ZRS, you must snapshot your elastic SAN's volumes, export them to managed disk snapshots, deploy an elastic SAN on ZRS, and then create volumes on the SAN on ZRS using those disk snapshots. To learn how to use snapshots (preview), see [Snapshot Azure Elastic SAN Preview volumes (preview)](../storage/elastic-san/elastic-san-snapshots.md).
+To migrate an elastic SAN on LRS to ZRS, you must snapshot your elastic SAN's volumes, export them to managed disk snapshots, deploy an elastic SAN on ZRS, and then create volumes on the SAN on ZRS using those disk snapshots. To learn how to use snapshots (preview), see [Snapshot Azure Elastic SAN volumes (preview)](../storage/elastic-san/elastic-san-snapshots.md).
## Disaster recovery and business continuity
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
For a more detailed overview of reliability principles in Azure, see [Reliabilit
| Product| Availability zone guide | Disaster recovery guide | |-|-|-|
+|Azure Application Gateway for Containers | [Reliability in Azure Application Gateway for Containers](reliability-app-gateway-containers.md) | [Reliability in Azure Application Gateway for Containers](reliability-app-gateway-containers.md)|
|Azure Chaos Studio | [Reliability in Azure Chaos Studio](reliability-chaos-studio.md)| [Reliability in Azure Chaos Studio](reliability-chaos-studio.md)| |Azure Community Training|[Reliability in Community Training](reliability-community-training.md) |[Reliability in Community Training](reliability-community-training.md) | |Azure Cosmos DB for MongoDB vCore|[High availability in Azure Cosmos DB for MongoDB vCore](/azure/cosmos-db/mongodb/vcore/high-availability?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[Failover for business continuity and disaster recovery with Azure Cosmos DB for MongoDB vCore](../cosmos-db/mongodb/vcore/failover-disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
role-based-access-control Custom Roles Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-bicep.md
Previously updated : 12/01/2023 Last updated : 02/15/2024 #Customer intent: As an IT admin, I want to create custom and/or roles using Bicep so that I can start automating custom role processes.
The Bicep file used in this article is from [Azure Quickstart Templates](https:/
The scope where this custom role can be assigned is set to the current subscription.
+A custom role requires a unique ID. The ID can be generated with the [guid()](../azure-resource-manager/bicep/bicep-functions-string.md#guid) function. Since a custom role also requires a [unique display name](custom-roles.md#custom-role-properties) for the tenant, you can use the role name as a parameter for the `guid()` function to create a [deterministic GUID](../azure-resource-manager/bicep/scenarios-rbac.md#name). A deterministic GUID is useful if you later need to update the custom role using the same Bicep file.
+ :::code language="bicep" source="~/quickstart-templates/subscription-deployments/create-role-def/main.bicep"::: The resource defined in the Bicep file is:
The resource defined in the Bicep file is:
## Deploy the Bicep file 1. Save the Bicep file as **main.bicep** to your local computer.
-1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+1. Create a variable named **myActions** with the actions for the roleDefinition.
# [CLI](#tab/CLI) ```azurecli-interactive
- $myActions='("Microsoft.Resources/resources/read","Microsoft.Resources/subscriptions/resourceGroups/read")'
-
- az deployment sub create --location eastus --name customRole --template-file main.bicep --parameters actions=$myActions
+ $myActions='["Microsoft.Resources/subscriptions/resourceGroups/read"]'
``` # [PowerShell](#tab/PowerShell) ```azurepowershell-interactive
- $myActions = @("Microsoft.Resources/resources/read","Microsoft.Resources/subscriptions/resourceGroups/read")
+ $myActions = @("Microsoft.Resources/subscriptions/resourceGroups/read")
+ ```
+
+
+
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli-interactive
+ az deployment sub create --location eastus --name customRole --template-file ./main.bicep --parameters actions=$myActions
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+ ```azurepowershell-interactive
New-AzSubscriptionDeployment -Location eastus -Name customRole -TemplateFile ./main.bicep -actions $myActions ```
- > [!NOTE]
- > Create a variable called **myActions** and then pass that variable. Replace the sample actions with the actions for the roleDefinition.
- When the deployment finishes, you should see a message indicating the deployment succeeded. ## Review deployed resources
Get-AzRoleDefinition "Custom Role - RG Reader"
## Update a custom role
-Similar to creating a custom role, you can update an existing custom role using Bicep. To update a custom role, you need to specify the role you want to update.
+Similar to creating a custom role, you can update an existing custom role using Bicep. To update a custom role, you need to specify the role you want to update. If you previously created the custom role in Bicep with a unique role ID that is [deterministic](../azure-resource-manager/bicep/scenarios-rbac.md#name), you can use the same Bicep file and specify the custom role by just using the display name.
-Here are the changes you would need to make to the previous Bicep file to update the custom role.
-
-1. Include the role ID as a parameter.
-
- ```bicep
- ...
- @description('ID of the role definition')
- param roleDefName string
- ...
-
- ```
-
-2. Remove the roleDefName variable. You'll get a warning if you have a parameter and variable with the same name.
-3. Use Azure CLI or Azure PowerShell to get the roleDefName.
+1. Specify the updated actions.
# [CLI](#tab/CLI) ```azurecli-interactive
- az role definition list --name "Custom Role - RG Reader"
+ $myActions='["Microsoft.Resources/resources/read","Microsoft.Resources/subscriptions/resourceGroups/read"]'
``` # [PowerShell](#tab/PowerShell) ```azurepowershell-interactive
- Get-AzRoleDefinition -Name "Custom Role - RG Reader"
+ $myActions = @(""Microsoft.Resources/resources/read","Microsoft.Resources/subscriptions/resourceGroups/read"")
``` -
+
-4. Use Azure CLI or Azure PowerShell to deploy the updated Bicep file, replacing **\<name-id\>** with the roleDefName, and replacing the sample actions with the updated actions for the roleDefinition.
+1. Use Azure CLI or Azure PowerShell to update the custom role.
# [CLI](#tab/CLI) ```azurecli-interactive
- $myActions='("Microsoft.Resources/resources/read","Microsoft.Resources/subscriptions/resourceGroups/read")'
-
- az deployment sub create --location eastus --name customrole --template-file main.bicep --parameters actions=$myActions roleDefName="name-id" roleName="Custom Role - RG Reader"
+ az deployment sub create --location eastus --name customrole --template-file ./main.bicep --parameters actions=$myActions roleName="Custom Role - RG Reader"
``` # [PowerShell](#tab/PowerShell) ```azurepowershell-interactive
- $myActions = @(""Microsoft.Resources/resources/read","Microsoft.Resources/subscriptions/resourceGroups/read"")
-
- New-AzSubscriptionDeployment -Location eastus -Name customrole -TemplateFile ./main.bicep -actions $myActions -roleDefName "name-id" -roleName "Custom Role - RG Reader"
+ New-AzSubscriptionDeployment -Location eastus -Name customrole -TemplateFile ./main.bicep -actions $myActions -roleName "Custom Role - RG Reader"
``` > [!NOTE]
- > It may take several minutes for the updated role definition to be propagated.
+ > It may take several minutes for the updated custom role to be propagated.
## Clean up resources
route-server Expressroute Vpn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md
Previously updated : 08/15/2023 Last updated : 02/16/2024+
+#CustomerIntent: As an Azure administrator, I want to deploy Azure Route Server with ExpressRoute and Azure VPN so that routes can be exchanged between the two on-premises networks.
# Azure Route Server support for ExpressRoute and Azure VPN
-Azure Route Server supports not only third-party network virtual appliances (NVA) running on Azure but also seamlessly integrates with ExpressRoute and Azure VPN gateways. You don't need to configure or manage the BGP peering between the gateway and Azure Route Server. You can enable route exchange between the gateways and Azure Route Server by enabling [branch-to-branch](quickstart-configure-route-server-portal.md#configure-route-exchange) in Azure portal. If you prefer, you can use [Azure PowerShell](quickstart-configure-route-server-powershell.md#route-exchange) or [Azure CLI](quickstart-configure-route-server-cli.md#configure-route-exchange) to enable the route exchange with the Route Server.
+Azure Route Server supports not only third-party network virtual appliances (NVA) in Azure but also seamlessly integrates with ExpressRoute and Azure VPN gateways. You don't need to configure or manage the BGP peering between the gateway and Azure Route Server. You can enable route exchange between the gateways and Azure Route Server by enabling [branch-to-branch](quickstart-configure-route-server-portal.md#configure-route-exchange) in Azure portal. If you prefer, you can use [Azure PowerShell](quickstart-configure-route-server-powershell.md#route-exchange) or [Azure CLI](quickstart-configure-route-server-cli.md#configure-route-exchange) to enable the route exchange with the Route Server.
[!INCLUDE [downtime note](../../includes/route-server-note-vng-downtime.md)]
Azure Route Server supports not only third-party network virtual appliances (NVA
When you deploy an Azure Route Server along with a virtual network gateway and an NVA in a virtual network, by default Azure Route Server doesn't propagate the routes it receives from the NVA and virtual network gateway between each other. Once you enable **branch-to-branch** in Route Server, the virtual network gateway and the NVA exchange their routes.
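For completeness, a hedged Azure CLI sketch of enabling branch-to-branch follows; the names are placeholders, and the `--allow-b2b-traffic` flag should be checked against the linked quickstart.

```azurecli
# Sketch only: enable branch-to-branch route exchange on an existing Route Server.
az network routeserver update \
  --name myRouteServer \
  --resource-group myResourceGroup \
  --allow-b2b-traffic true
```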
-For example, in the following diagram:
+> [!IMPORTANT]
+> ExpressRoute branch-to-branch connectivity is not supported. If you have two (or more) ExpressRoute circuits connected to the same ExpressRoute virtual network gateway, routes from one circuit are not advertised to the other. If you want to enable on-premises to on-premises connectivity over ExpressRoute, consider configuring ExpressRoute Global Reach. For more information, see [About Azure ExpressRoute Global Reach](../expressroute/expressroute-global-reach.md).
+
+The following diagram shows an example of using Route Server to exchange routes between an ExpressRoute and SDWAN appliance:
-* The SDWAN appliance receives from Azure Route Server the route of *On-premises 2*, which is connected to ExpressRoute circuit, along with the route of the virtual network.
+- The SDWAN appliance receives from Azure Route Server the route of *On-premises 2*, which is connected to ExpressRoute circuit, along with the route of the virtual network.
-* The ExpressRoute gateway receives from Azure Route Server the route of *On-premises 1*, which is connected to the SDWAN appliance, along with the route of the virtual network.
+- The ExpressRoute gateway receives from Azure Route Server the route of *On-premises 1*, which is connected to the SDWAN appliance, along with the route of the virtual network.
:::image type="content" source="./media/expressroute-vpn-support/expressroute-with-route-server.png" alt-text="Diagram showing ExpressRoute gateway and SDWAN NVA exchanging routes through Azure Route Server.":::
If you enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes
:::image type="content" source="./media/expressroute-vpn-support/expressroute-and-vpn-with-route-server.png" alt-text="Diagram showing ExpressRoute and VPN gateways exchanging routes through Azure Route Server."::: > [!NOTE]
-> When the same route is learned over ExpressRoute, Azure VPN or an SDWAN appliance, the ExpressRoute network will be preferred.
+> When the same route is learned over ExpressRoute, Azure VPN or an SDWAN appliance, the ExpressRoute network will be preferred by default. You can configure routing preference to influence Route Server route selection. For more information, see [Routing preference (preview)](hub-routing-preference.md).
-## Next steps
+## Related content
-- Learn more about [Azure Route Server](route-server-faq.md).-- Learn how to [configure Azure Route Server](quickstart-configure-route-server-powershell.md).-- Learn more about [Azure ExpressRoute and Azure VPN coexistence](../expressroute/how-to-configure-coexisting-gateway-portal.md).
+- [Azure Route Server frequently asked questions (FAQ)](route-server-faq.md).
+- [Configure Azure Route Server](quickstart-configure-route-server-powershell.md).
+- [Azure ExpressRoute and Azure VPN coexistence](../expressroute/how-to-configure-coexisting-gateway-portal.md?toc=/azure/route-server/toc.json).
sap Hana Connect Azure Vm Large Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-connect-azure-vm-large-instances.md
description: Connectivity setup from virtual machines for using SAP HANA on Azur
-tags: azure-resource-manager
sap Hana Li Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/hana-li-portal.md
Title: Azure HANA Large Instances control through Azure portal | Microsoft Docs
description: Describes the way how you can identify and interact with Azure HANA Large Instances through portal
-tags: azure-resource-manager
sap Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-s4h.md
description: Deploy SAP S/4HANA or BW/4HANA on an Azure VM
-tags: azure-resource-manager
ms.assetid: 44bbd2b6-a376-4b5c-b824-e76917117fa9
sap Certifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/certifications.md
description: Updated list of current configurations and certifications of SAP on
-tags: azure-resource-manager
sap Dbms Guide Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-ibm.md
Title: IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload | Microso
description: IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload
-tags: azure-resource-manager
keywords: 'Azure, Db2, SAP, IBM'
sap Dbms Guide Maxdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-maxdb.md
Title: SAP MaxDB, liveCache, and Content Server deployment on Azure VMs | Micros
description: SAP MaxDB, liveCache, and Content Server deployment on Azure
-tags: azure-resource-manager
sap Dbms Guide Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-oracle.md
Title: Oracle Azure Virtual Machines DBMS deployment for SAP workload | Microsof
description: Oracle Azure Virtual Machines DBMS deployment for SAP workload
-tags: azure-resource-manager
keywords: 'SAP, Azure, Oracle, Data Guard'
sap Dbms Guide Sapase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-sapase.md
Title: SAP ASE Azure Virtual Machines DBMS deployment for SAP workload | Microso
description: SAP ASE Azure Virtual Machines DBMS deployment for SAP workload
-tags: azure-resource-manager
sap Dbms Guide Sqlserver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-sqlserver.md
Title: SQL Server Azure Virtual Machines DBMS deployment for SAP workload | Micr
description: SQL Server Azure Virtual Machines DBMS deployment for SAP workload
-tags: azure-resource-manager
keywords: 'Azure, SQL Server, SAP, AlwaysOn, Always On'
sap Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/deployment-checklist.md
Title: SAP workload planning and deployment checklist
description: Checklist for planning SAP workload deployments to Azure and deploying the workloads
-tags: azure-resource-manager
sap Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/deployment-guide.md
description: Learn how to deploy SAP software on Linux virtual machines in Azure
-tags: azure-resource-manager
sap Hana Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-get-started.md
description: Guide for installation of SAP HANA on Azure VMs
-tags: azure-resource-manager
ms.assetid: c51a2a06-6e97-429b-a346-b433a785c9f0
sap Hana Vm Operations Netapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-operations-netapp.md
Title: SAP HANA Azure virtual machine ANF configuration | Microsoft Docs
description: Azure NetApp Files Storage recommendations for SAP HANA.
-tags: azure-resource-manager
keywords: 'SAP, Azure, ANF, HANA, Azure NetApp Files, snapshot'
sap Hana Vm Operations Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-operations-storage.md
Title: SAP HANA Azure virtual machine storage configurations | Microsoft Docs
description: General Storage recommendations for VM that have SAP HANA deployed.
-tags: azure-resource-manager
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage'
sap Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-operations.md
Title: SAP HANA infrastructure configurations and operations on Azure | Microsof
description: Operations guide for SAP HANA systems that are deployed on Azure virtual machines.
-tags: azure-resource-manager
sap Hana Vm Premium Ssd V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v1.md
Title: SAP HANA Azure virtual machine premium storage configurations | Microsoft
description: Storage recommendations HANA using premium storage.
-tags: azure-resource-manager
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage'
sap Hana Vm Premium Ssd V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v2.md
Title: SAP HANA Azure virtual machine Premium SSD v2 configurations | Microsoft
description: Storage recommendations HANA using Premium SSD v2.
-tags: azure-resource-manager
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage, Premium SSD v2'
sap Hana Vm Ultra Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-ultra-disk.md
Title: SAP HANA Azure virtual machine Ultra Disk configurations | Microsoft Docs
description: Storage recommendations for SAP HANA using Ultra disk.
-tags: azure-resource-manager
keywords: 'SAP, Azure HANA, Storage Ultra disk, Premium storage'
sap High Availability Guide Rhel Ibm Db2 Luw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-ibm-db2-luw.md
Title: Set up IBM Db2 HADR on Azure virtual machines (VMs) on RHEL | Microsoft D
description: Establish high availability of IBM Db2 LUW on Azure virtual machines (VMs) RHEL.
-tags: azure-resource-manager
keywords: 'SAP'
sap High Availability Guide Rhel Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-netapp-files.md
Title: Azure Virtual Machines HA for SAP NW on RHEL with Azure NetApp Files| Mic
description: Establish high availability (HA) for SAP NetWeaver on Azure Virtual Machines Red Hat Enterprise Linux (RHEL) with Azure NetApp Files.
-tags: azure-resource-manager
sap High Availability Guide Rhel Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md
Title: Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files| M
description: Establish high availability for SAP NetWeaver on Azure Virtual Machines Red Hat Enterprise Linux (RHEL) with NFS on Azure Files.
-tags: azure-resource-manager
sap High Availability Guide Rhel With Dialog Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-with-dialog-instance.md
description: Configure SAP Dialog Instance on SAP ASCS/SCS high availability VMs
-tags: azure-resource-manager
sap High Availability Guide Rhel With Hana Ascs Ers Dialog Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-with-hana-ascs-ers-dialog-instance.md
description: Configure SAP ASCS/SCS and SAP ERS with SAP HANA high availability
-tags: azure-resource-manager
sap High Availability Guide Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel.md
Title: Azure Virtual Machines HA for SAP NW on RHEL | Microsoft Docs
description: This article describes Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux (RHEL).
-tags: azure-resource-manager
sap High Availability Guide Standard Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-standard-load-balancer-outbound-connections.md
description: Public endpoint connectivity for Virtual Machines using Azure Stand
-tags: azure-resource-manager
sap High Availability Guide Windows Azure Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-azure-files-smb.md
description: Learn how to install high availability for SAP NetWeaver on Azure V
-tags: azure-resource-manager
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
sap High Availability Guide Windows Dfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-dfs.md
description: Using Windows DFS-N to overcome SAP-related SAPMNT naming limitatio
-tags: azure-resource-manager
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
sap High Availability Guide Windows Netapp Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-netapp-files-smb.md
description: High availability for SAP NetWeaver on Azure VMs on Windows with Az
-tags: azure-resource-manager
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
sap Lama Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/lama-installation.md
description: Learn how to manage SAP systems on Azure by using SAP LaMa.
-tags: azure-resource-manager
sap Planning Guide Storage Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide-storage-azure-files.md
Title: 'Azure Premium Files NFS and SMB for SAP'
description: Using Azure Premium Files NFS and SMB for SAP workload
-tags: azure-resource-manager
sap Planning Guide Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide-storage.md
Title: 'Azure storage types for SAP workload'
description: Planning Azure storage types for SAP workloads
-tags: azure-resource-manager
ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538
sap Planning Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide.md
Title: 'Plan and implement an SAP deployment on Azure'
description: Learn how to plan and implement a deployment of SAP applications on Azure virtual machines.
-tags: azure-resource-manager
sap Planning Supported Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-supported-configurations.md
Title: 'SAP on Azure: Supported Scenarios with Azure VMs'
description: Azure Virtual Machines supported scenarios with SAP workload
-tags: azure-resource-manager
ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538
sap Rise Integration Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration-network.md
description: Describes network connectivity between customer's own Azure environ
-tags: azure-resource-manager
sap Rise Integration Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration-security.md
description: Describes integration scenarios of Azure security, identity and mon
-tags: azure-resource-manager
sap Rise Integration Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration-services.md
description: Describes integration scenarios of Azure services with SAP RISE man
-tags: azure-resource-manager
sap Rise Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration.md
description: Describes integrating SAP RISE managed virtual network with custome
-tags: azure-resource-manager
sap Sap Ascs Ha Multi Sid Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-file-share.md
Title: SAP ASCS/SCS instance multi-SID high availability with Windows Server Fai
description: Multi-SID high availability for SAP ASCS/SCS instances with Windows Server Failover Clustering and file share on Azure
-tags: azure-resource-manager
ms.assetid: cbf18abe-41cb-44f7-bdec-966f32c89325
sap Sap Ascs Ha Multi Sid Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-shared-disk.md
description: Multi-SID high availability for an SAP ASCS/SCS instance with Wind
-tags: azure-resource-manager
ms.assetid: cbf18abe-41cb-44f7-bdec-966f32c89325
sap Sap Hana Availability Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-across-regions.md
Title: SAP HANA availability across Azure regions | Microsoft Docs
description: An overview of availability considerations when running SAP HANA on Azure VMs in multiple Azure regions.
-tags: azure-resource-manager
sap Sap Hana Availability One Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-one-region.md
Title: SAP HANA availability within one Azure region | Microsoft Docs
description: Describes SAP HANA operations on Azure native VMs in one Azure region.
-tags: azure-resource-manager
sap Sap Hana Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-overview.md
Title: SAP HANA availability on Azure VMs - Overview | Microsoft Docs
description: Describes SAP HANA operations on Azure native VMs.
-tags: azure-resource-manager
sap Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md
description: Establish high availability of SAP HANA with ANF on SLES Azure virt
-tags: azure-resource-manager
sap Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
Title: SAP HANA scale-out with HSR and Pacemaker on RHEL| Microsoft Docs
description: SAP HANA scale-out with HANA system replication (HSR) and Pacemaker on Red Hat Enterprise Linux (RHEL)
-tags: azure-resource-manager
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
sap Sap Hana Scale Out Standby Netapp Files Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-rhel.md
Title: SAP HANA scale-out with standby with Azure NetApp Files on RHEL| Microsof
description: High-availability guide for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications
-tags: azure-resource-manager
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
sap Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-suse.md
Title: SAP HANA scale-out with standby with Azure NetApp Files on SLES | Microso
description: Learn how to deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server.
-tags: azure-resource-manager
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
sap Sap High Availability Guide Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-start.md
description: In this article, learn about Azure Virtual Machines high availabili
-tags: azure-resource-manager
ms.assetid: 1cfcc14a-6795-4cfd-a740-aa09d6d2b817
sap Sap High Availability Guide Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-wsfc-file-share.md
description: Learn how to cluster an SAP ASCS/SCS instance on a Windows failover
-tags: azure-resource-manager
ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
sap Sap High Availability Guide Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-wsfc-shared-disk.md
description: Learn how to cluster an SAP ASCS/SCS instance on a Windows failover
-tags: azure-resource-manager
ms.assetid: f6fb85f8-c77a-4af1-bde8-1de7e4425d2e
sap Sap High Availability Infrastructure Wsfc File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-infrastructure-wsfc-file-share.md
Title: Azure infrastructure for SAP ASCS/SCS HA with WSFC&file Share
description: Azure infrastructure preparation for SAP high availability using a Windows failover cluster and file Share for SAP ASCS/SCS instances
-tags: azure-resource-manager
ms.assetid: 2ce38add-1078-4bb9-a1da-6f407a9bc910
sap Sap High Availability Installation Wsfc Shared Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-installation-wsfc-shared-disk.md
description: Learn how to install SAP NetWeaver HA on a Windows failover cluster
-tags: azure-resource-manager
ms.assetid: 6209bcb3-5b20-4845-aa10-1475c576659f
sap Sap Higher Availability Architecture Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-higher-availability-architecture-scenarios.md
description: Utilize Azure infrastructure VM restart to achieve “higher availa
-tags: azure-resource-manager
ms.assetid: f0b2f8f0-e798-4176-8217-017afe147917
sap Sap Information Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-information-lifecycle-management.md
description: SAP Information Lifecycle Management with Microsoft Azure Blob Stor
-tags: azure-resource-manager
sap Supported Product On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/supported-product-on-azure.md
Title: 'SAP on Azure: What SAP software is supported on Azure'
description: Explains what SAP software is supported to be deployed on Azure
-tags: azure-resource-manager
ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538
sap Universal Print Sap Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/universal-print-sap-frontend.md
description: Enabling SAP front-end printing with Universal Print
-tags: azure-resource-manager
sap Vm Extension For Sap New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap-new.md
description: Learn how to deploy the new VM Extension for SAP.
-tags: azure-resource-manager
ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e
sap Vm Extension For Sap Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap-standard.md
description: Learn how to deploy the Std VM Extension for SAP.
-tags: azure-resource-manager
ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e
sap Vm Extension For Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/vm-extension-for-sap.md
description: Learn how to deploy the VM Extension for SAP.
-tags: azure-resource-manager
ms.assetid: 1c4f1951-3613-4a5a-a0af-36b85750c84e
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-intro.md
Last updated 01/30/2024
In Azure AI Search, *AI enrichment* refers to integration with [Azure AI services](/azure/ai-services/what-are-ai-services) to process content that isn't searchable in its raw form. Through enrichment, analysis and inference are used to create searchable content and structure where none previously existed.
-Because Azure AI Search is a text and vector search solution, the purpose of AI enrichment is to improve the utility of your content in search-related scenarios. Source content must be textual (you can't enrich vectors), but the content created by an enrichment pipeline can be vectorized and indexed in a vector store using skills like [Text Split skill](cognitive-search-skill-textsplit.md) for chunking and [AzureOpenAiEmbedding skill](cognitive-search-skill-azure-openai-embedding.md) for encoding.
+Because Azure AI Search is a text and vector search solution, the purpose of AI enrichment is to improve the utility of your content in search-related scenarios. Source content must be textual (you can't enrich vectors), but the content created by an enrichment pipeline can be vectorized and indexed in a vector store using skills like [Text Split skill](cognitive-search-skill-textsplit.md) for chunking and [AzureOpenAIEmbedding skill](cognitive-search-skill-azure-openai-embedding.md) for encoding.
AI enrichment is based on [*skills*](cognitive-search-working-with-skillsets.md).
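To illustrate the chunking-and-embedding pattern described above, here's a minimal skillset sketch that pairs the Text Split skill with the Azure OpenAI embedding skill. It's a sketch only: the resource URI, deployment name, and key are placeholders, and the exact parameter names should be confirmed against the current skill reference.

```json
{
  "name": "chunk-and-embed-skillset",
  "description": "Split documents into chunks, then generate an embedding for each chunk.",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
      "context": "/document",
      "textSplitMode": "pages",
      "maximumPageLength": 2000,
      "inputs": [
        { "name": "text", "source": "/document/content" }
      ],
      "outputs": [
        { "name": "textItems", "targetName": "chunks" }
      ]
    },
    {
      "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
      "context": "/document/chunks/*",
      "resourceUri": "https://<your-azure-openai-resource>.openai.azure.com",
      "deploymentId": "<your-embedding-deployment>",
      "apiKey": "<placeholder>",
      "inputs": [
        { "name": "text", "source": "/document/chunks/*" }
      ],
      "outputs": [
        { "name": "embedding", "targetName": "vector" }
      ]
    }
  ]
}
```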
search Cognitive Search Incremental Indexing Conceptual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-incremental-indexing-conceptual.md
- ignite-2023 Previously updated : 04/21/2023 Last updated : 02/16/2024 # Incremental enrichment and caching in Azure AI Search
Last updated 04/21/2023
> [!IMPORTANT] > This feature is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [preview REST API](/rest/api/searchservice/index-preview) supports this feature.
-*Incremental enrichment* refers to the use of cached enrichments during [skillset execution](cognitive-search-working-with-skillsets.md) so that only new and changed skills and documents incur AI processing. The cache contains the output from [document cracking](search-indexer-overview.md#document-cracking), plus the outputs of each skill for every document. Although caching is billable (it uses Azure Storage), the overall cost of enrichment is reduced because the costs of storage are less than image extraction and AI processing.
+*Incremental enrichment* refers to the use of cached enrichments during [skillset execution](cognitive-search-working-with-skillsets.md) so that only new and changed skills and documents incur AI processing charges. The cache contains the output from [document cracking](search-indexer-overview.md#document-cracking), plus the outputs of each skill for every document. Although caching is billable (it uses Azure Storage), the overall cost of enrichment is reduced because the costs of storage are less than image extraction and AI processing.
When you enable caching, the indexer evaluates your updates to determine whether existing enrichments can be pulled from the cache. Image and text content from the document cracking phase, plus skill outputs that are upstream or orthogonal to your edits, are likely to be reusable.
-After performing the incremental enrichments as indicated by the skillset update, refreshed results are written back to the cache, and also to the search index or knowledge store.
+After skillset processing is finished, the refreshed results are written back to the cache, and also to the search index or knowledge store.
+
+## Limitations
+
+> [!CAUTION]
+> If you're using the [SharePoint Online indexer (Preview)](search-howto-index-sharepoint-online.md), you should avoid incremental enrichment. Under certain circumstances, the cache becomes invalid, requiring an [indexer reset and run](search-howto-run-reset-indexers.md), should you choose to reload it.
## Cache configuration
-Physically, the cache is stored in a blob container in your Azure Storage account, one per indexer. Each indexer is assigned a unique and immutable cache identifier that corresponds to the container it is using.
+Physically, the cache is stored in a blob container in your Azure Storage account, one per indexer. Each indexer is assigned a unique and immutable cache identifier that corresponds to the container it's using.
-The cache is created when you specify the "cache" property and run the indexer. Only enriched content can be cached. If your indexer does not have an attached skillset, then caching does not apply.
+The cache is created when you specify the "cache" property and run the indexer. Only enriched content can be cached. If your indexer doesn't have an attached skillset, then caching doesn't apply.
-The following example illustrates an indexer with caching enabled. See [Enable enrichment caching](search-howto-incremental-index.md) for full instructions. Notice that when adding the cache property, use preview API version, 2020-06-30-Preview or later, on the request.
+The following example illustrates an indexer with caching enabled. See [Enable enrichment caching](search-howto-incremental-index.md) for full instructions. When you add the cache property, use a [preview API version](/rest/api/searchservice/search-service-api-versions#preview-versions), 2020-06-30-Preview or later, on the request.
```json
POST https://[search service name].search.windows.net/indexers?api-version=2020-06-30-Preview
```
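The request body is omitted in this digest. As a rough sketch of what a cached indexer definition might look like (the `cache` object and its `enableReprocessing` parameter are described in this article; the `storageConnectionString` property name is assumed here):

```json
{
  "name": "my-indexer",
  "dataSourceName": "my-datasource",
  "targetIndexName": "my-index",
  "skillsetName": "my-skillset",
  "cache": {
    "storageConnectionString": "<your Azure Storage connection string>",
    "enableReprocessing": true
  }
}
```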
## Cache management
-The lifecycle of the cache is managed by the indexer. If an indexer is deleted, its cache is also deleted. If the "cache" property on the indexer is set to null or the connection string is changed, the existing cache is deleted on the next indexer run.
+The lifecycle of the cache is managed by the indexer. If an indexer is deleted, its cache is also deleted. If the `cache` property on the indexer is set to null or the connection string is changed, the existing cache is deleted on the next indexer run.
While incremental enrichment is designed to detect and respond to changes with no intervention on your part, there are parameters you can use to invoke specific behaviors:
While incremental enrichment is designed to detect and respond to changes with n
### Prioritize new documents
-The "cache" property includes an "enableReprocessing" parameter. It's used to control processing over incoming documents already represented in the cache. When true (default), documents already in the cache are reprocessed when you rerun the indexer, assuming your skill update affects that doc.
+The cache property includes an `enableReprocessing` parameter. It's used to control processing over incoming documents already represented in the cache. When true (default), documents already in the cache are reprocessed when you rerun the indexer, assuming your skill update affects that doc.
-When false, existing documents are not reprocessed, effectively prioritizing new, incoming content over existing content. You should only set "enableReprocessing" to false on a temporary basis. Having "enableReprocessing" set to true most of the time ensures that all documents, both new and existing, are valid per the current skillset definition.
+When false, existing documents aren't reprocessed, effectively prioritizing new, incoming content over existing content. You should only set enableReprocessing to false on a temporary basis. Having enableReprocessing set to true most of the time ensures that all documents, both new and existing, are valid per the current skillset definition.
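As a hedged sketch of the temporary change described above, you might update the indexer so that `enableReprocessing` is false while a backlog of new content is indexed, then set it back to true afterward. Placeholders throughout; the `storageConnectionString` property name is assumed.

```http
PUT https://[search service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30-Preview

{
  "name": "[indexer name]",
  "dataSourceName": "[data source name]",
  "targetIndexName": "[index name]",
  "cache": {
    "storageConnectionString": "<your Azure Storage connection string>",
    "enableReprocessing": false
  }
}
```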
<a name="Bypass-skillset-checks"></a> ### Bypass skillset evaluation
-Modifying a skill and reprocessing of that skill typically go hand in hand. However, some changes to a skill should not result in reprocessing (for example, deploying a custom skill to a new location or with a new access key). Most likely, these are peripheral modifications that have no genuine impact on the substance of the skill output itself.
+Modifying a skill and reprocessing of that skill typically go hand in hand. However, some changes to a skill shouldn't result in reprocessing (for example, deploying a custom skill to a new location or with a new access key). Most likely, these are peripheral modifications that have no genuine impact on the substance of the skill output itself.
-If you know that a change to the skill is indeed superficial, you should override skill evaluation by setting the "disableCacheReprocessingChangeDetection" parameter to true:
+If you know that a change to the skill is indeed superficial, you should override skill evaluation by setting the `disableCacheReprocessingChangeDetection` parameter to true:
1. Call [Update Skillset](/rest/api/searchservice/update-skillset) and modify the skillset definition. 1. Append the "disableCacheReprocessingChangeDetection=true" parameter on the request.
PUT https://[servicename].search.windows.net/skillsets/[skillset name]?api-versi
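The full request line is truncated above. Assembled from the two steps in this section, the update might look like the following sketch, where the request body is your modified skillset definition and the API version shown is an assumption based on the preview version referenced elsewhere in this article:

```http
PUT https://[servicename].search.windows.net/skillsets/[skillset name]?api-version=2020-06-30-Preview&disableCacheReprocessingChangeDetection=true
```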
### Bypass data source validation checks
-Most changes to a data source definition will invalidate the cache. However, for scenarios where you know that a change should not invalidate the cache - such as changing a connection string or rotating the key on the storage account - append the "ignoreResetRequirement" parameter on the [data source update](/rest/api/searchservice/update-data-source). Setting this parameter to true allows the commit to go through, without triggering a reset condition that would result in all objects being rebuilt and populated from scratch.
+Most changes to a data source definition will invalidate the cache. However, for scenarios where you know that a change shouldn't invalidate the cache - such as changing a connection string or rotating the key on the storage account - append the `ignoreResetRequirement` parameter on the [data source update](/rest/api/searchservice/update-data-source). Setting this parameter to true allows the commit to go through, without triggering a reset condition that would result in all objects being rebuilt and populated from scratch.
```http
PUT https://[search service].search.windows.net/datasources/[data source name]?api-version=2020-06-30-Preview&ignoreResetRequirement
```
POST https://[search service name].search.windows.net/indexers/[indexer name]/re
Once you enable a cache, the indexer evaluates changes in your pipeline composition to determine which content can be reused and which needs reprocessing. This section enumerates changes that invalidate the cache outright, followed by changes that trigger incremental processing.
-An invalidating change is one where the entire cache is no longer valid. An example of an invalidating change is one where your data source is updated. Here is the complete list of changes to any part of the indexer pipeline that would invalidate your cache:
+An invalidating change is one where the entire cache is no longer valid. An example of an invalidating change is one where your data source is updated. Here's the complete list of changes to any part of the indexer pipeline that would invalidate your cache:
+ Changing the data source type + Changing data source container
An invalidating change is one where the entire cache is no longer valid. An exam
## Changes that trigger incremental processing
-Incremental processing evaluates your skillset definition and determines which skills to rerun, selectively updating the affected portions of the document tree. Here is the complete list of changes resulting in incremental enrichment:
+Incremental processing evaluates your skillset definition and determines which skills to rerun, selectively updating the affected portions of the document tree. Here's the complete list of changes resulting in incremental enrichment:
+ Changing the skill type (the OData type of the skill is updated) + Skill-specific parameters are updated, for example a URL, defaults, or other parameters
REST API version `2020-06-30-Preview` or later provides incremental enrichment t
+ [Reset Skills (api-version=2020-06-30)](/rest/api/searchservice/preview-api/reset-skills)
-+ [Update Data Source](/rest/api/searchservice/update-data-source), when called with a preview API version, provides a new parameter named "ignoreResetRequirement", which should be set to true when your update action should not invalidate the cache. Use "ignoreResetRequirement" sparingly as it could lead to unintended inconsistency in your data that will not be detected easily.
-
-## Limitations
-
-> [!CAUTION]
-> If you're using the [SharePoint Online indexer (Preview)](search-howto-index-sharepoint-online.md), you should avoid incremental enrichment. Under certain circumstances, the cache becomes invalid, requiring an [indexer reset and run](search-howto-run-reset-indexers.md), should you choose to reload it.
++ [Update Data Source](/rest/api/searchservice/update-data-source), when called with a preview API version, provides a new parameter named "ignoreResetRequirement", which should be set to true when your update action shouldn't invalidate the cache. Use "ignoreResetRequirement" sparingly as it could lead to unintended inconsistency in your data that won't be detected easily. ## Next steps
search Monitor Azure Cognitive Search Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/monitor-azure-cognitive-search-data-reference.md
Title: Azure AI Search monitoring data reference
-description: Log and metrics reference for monitoring data from Azure AI Search.
--
+ Title: Monitoring data reference for Azure AI Search
+description: This article contains important reference material you need when you monitor Azure AI Search.
Last updated : 02/15/2024++ - - Previously updated : 02/08/2023-
- - subject-monitoring
- - ignite-2023
+<!--
+IMPORTANT
+To make this template easier to use, first:
+1. Search and replace AI Search with the official name of your service.
+2. Search and replace azure-cognitive-search with the service name to use in GitHub filenames.-->
+
+<!-- VERSION 3.0 2024_01_01
+For background about this template, see https://review.learn.microsoft.com/en-us/help/contribute/contribute-monitoring?branch=main -->
+
+<!-- Most services can use the following sections unchanged. All headings are required unless otherwise noted.
+The sections use #included text you don't have to maintain, which changes when Azure Monitor functionality changes. Add info into the designated service-specific places if necessary. Remove #includes or template content that aren't relevant to your service.
+
+At a minimum your service should have the following two articles:
+
+1. The primary monitoring article (based on the template monitor-service-template.md)
+ - Title: "Monitor AI Search"
+ - TOC Title: "Monitor"
+ - Filename: "monitor-azure-cognitive-search.md"
+
+2. A reference article that lists all the metrics and logs for your service (based on this template).
+ - Title: "AI Search monitoring data reference"
+ - TOC Title: "Monitoring data reference"
+ - Filename: "monitor-azure-cognitive-search-data-reference.md".
+-->
+ # Azure AI Search monitoring data reference
-This article provides a reference of log and metric data collected to analyze the performance and availability of Azure AI Search. See [Monitoring Azure AI Search](monitor-azure-cognitive-search.md) for an overview.
+<!-- Intro. Required. -->
+
+See [Monitor Azure AI Search](monitor-azure-cognitive-search.md) for details on the data you can collect for Azure AI Search and how to use it.
-## Metrics
+<!-- ## Metrics. Required section. -->
-This section lists the platform metrics collected for Azure AI Search ([Microsoft.Search/searchServices](../azure-monitor/essentials/metrics-supported.md#microsoftsearchsearchservices)).
+<!-- Repeat the following section for each resource type/namespace in your service. -->
+### Supported metrics for Microsoft.Search/searchServices
+The following table lists the metrics available for the Microsoft.Search/searchServices resource type.
-| Metric ID | Unit | Description |
-|:-|:--|:|
-| DocumentsProcessedCount | Count | Total of the number of documents successfully processed in an indexing operation by an indexer. |
-| SearchLatency | Seconds | Average search latency for queries that execute on the search service. |
-| SearchQueriesPerSecond | CountPerSecond | Average of the search queries per second (QPS) for the search service. It's common for queries to execute in milliseconds, so only queries that measure as seconds will appear in a metric like QPS. </br>The minimum is the lowest value for search queries per second that was registered during that minute. The same applies to the maximum value. Average is the aggregate across the entire minute. For example, within one minute, you might have a pattern like this: one second of high load that is the maximum for SearchQueriesPerSecond, followed by 58 seconds of average load, and finally one second with only one query, which is the minimum.|
-| SkillExecutionCount | Count | Total number of skill executions processed during an indexer operation. |
-| ThrottledSearchQueriesPercentage | Percent | Average percentage of the search queries that were throttled from the total number of search queries that executed during a one-minute interval.|
+SearchQueriesPerSecond shows the average of the search queries per second (QPS) for the search service. It's common for queries to execute in milliseconds, so only queries that measure as seconds appear in a metric like QPS. The minimum is the lowest value for search queries per second that was registered during that minute. Maximum is the highest value. Average is the aggregate across the entire minute.
-For reference, see a list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
+For example, within one minute, you might have a pattern like this: one second of high load that is the maximum for SearchQueriesPerSecond, followed by 58 seconds of average load, and finally one second with only one query, which is the minimum.
-## Metric dimensions
+<!-- ## Metric dimensions. Required section. -->
-Dimensions of a metric are name/value pairs that carry additional data to describe the metric value. Azure AI Search has the following dimensions associated with its metrics that capture a count of documents or skills that were executed ("Document processed count" and "Skill execution invocation count").
+Azure AI Search has the following dimensions associated with the metrics that capture a count of documents or skills that were executed, "Document processed count" and "Skill execution invocation count".
| Dimension Name | Description | | -- | -- |
Dimensions of a metric are name/value pairs that carry additional data to descri
| **SkillName** | Name of a skill within a skillset. | | **SkillType** | The @odata.type of the skill. |
-For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+<!-- ## Resource logs. Required section. -->
-## Resource logs
+<!-- Add at least one resource provider/resource type here. Example: ### Supported resource logs for Microsoft.Storage/storageAccounts/blobServices
+Repeat this section for each resource type/namespace in your service. -->
+### Supported resource logs for Microsoft.Search/searchServices
-[Resource logs](../azure-monitor/essentials/resource-logs.md) are platform logs that provide insight into operations that were performed within an Azure resource. Resource logs are generated by the search service automatically, but are not collected by default. You must create a diagnostic setting to send resource logs to a Log Analytics workspace to use with Azure Monitor Logs, Azure Event Hubs to forward outside of Azure, or to Azure Storage for archiving.
-
-This section identifies the type (or category) of resource logs you can collect for Azure AI Search:
-
-+ Resource logs are grouped by type (or category). Azure AI Search generates resource logs under the [**Operations category**](../azure-monitor/essentials/resource-logs-categories.md#microsoftsearchsearchservices).
-
-For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
-
-## Azure Monitor Logs tables
-
-[Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md) is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources. If you configured a diagnostic setting for Log Analytics, you can query the Azure Monitor Logs tables for the resource logs generated by Azure AI Search.
-
-This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure AI Search and available for query by Log Analytics and Metrics Explorer in the Azure portal.
+<!-- ## Azure Monitor Logs tables. Required section. -->
+### Search Services
+Microsoft.Search/searchServices
| Table | Description | |-|-|
-| [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) | Entries from the Azure Activity log that provide insight into control plane operations. Tasks invoked on the control plane, such as adding or removing replicas and partitions, will be represented through a "Get Admin Key" activity. |
-| [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) | Logged query and indexing operations.|
+| [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) | Entries from the Azure activity log provide insight into control plane operations. Tasks invoked on the control plane, such as adding or removing replicas and partitions, are represented through a "Get Admin Key" activity. |
+| [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) | Logged query and indexing operations. Queries against the AzureDiagnostics table in Log Analytics can include the common properties, the [search-specific properties](#resource-log-search-props), and the [search-specific operations](#resource-log-search-ops) listed in the schema reference section. |
| [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics) | Metric data emitted by Azure AI Search that measures health and performance. |
-For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype#search-services).
+### Resource log tables
-### Diagnostics tables
+The following table lists the properties of resource logs in Azure AI Search. The resource logs are collected into Azure Monitor Logs or Azure Storage. In Azure Monitor, logs are collected in the AzureDiagnostics table under the resource provider name of `Microsoft.Search`.
-Azure AI Search uses the [**Azure Diagnostics**](/azure/azure-monitor/reference/tables/azurediagnostics) table to collect resource logs related to queries and indexing on your search service.
+| Azure Storage field or property | Azure Monitor Logs property | Description |
+|-|-|-|
+| time | TIMESTAMP | The date and time (UTC) when the operation occurred. |
+| resourceId | Concat("/", "/subscriptions", SubscriptionId, "resourceGroups", ResourceGroupName, "providers/Microsoft.Search/searchServices", ServiceName) | The Azure AI Search resource for which logs are enabled. |
+| category | "OperationLogs" | Log categories include `Audit`, `Operational`, `Execution`, and `Request`. |
+| operationName | Name | Name of the operation. The operation name can be `Indexes.ListIndexStatsSummaries`, `Indexes.Get`, `Indexes.Stats`, `Indexers.List`, `Query.Search`, `Query.Suggest`, `Query.Lookup`, `Query.Autocomplete`, `CORS.Preflight`, `Indexes.Update`, `Indexes.Prototype`, `ServiceStats`, `DataSources.List`, `Indexers.Warmup`. |
+| durationMS | DurationMilliseconds | The duration of the operation, in milliseconds. |
+| operationVersion | ApiVersion | The API version used on the request. |
+| resultType | (Failed) ? "Failed" : "Success" | The type of response. |
+| resultSignature | Status | The HTTP response status of the operation. |
+| properties | Properties | Any extended properties related to this category of events. |
-Queries against this table in Log Analytics can include the common properties, the [search-specific properties](#resource-log-search-props), and the [search-specific operations](#resource-log-search-ops) listed in the schema reference section.
+<!-- ## Activity log. Required section. -->
-For examples of Kusto queries useful for Azure AI Search, see [Monitoring Azure AI Search](monitor-azure-cognitive-search.md) and [Analyze performance in Azure AI Search](search-performance-analysis.md).
-
-## Activity logs
-
-The following table lists common operations related to Azure AI Search that may be created in the Azure Activity log.
+The following table lists common operations related to Azure AI Search that may be recorded in the activity log. For a complete listing of all Microsoft.Search operations, see [Microsoft.Search resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftsearch).
| Operation | Description | |:-|:|
-| Get Admin Key | Any operation that requires administrative rights will be logged as a "Get Admin Key" operation. |
+| Get Admin Key | Any operation that requires administrative rights is logged as a "Get Admin Key" operation. |
| Get Query Key | Any read-only operation against the documents collection of an index. | | Regenerate Admin Key | A request to regenerate either the primary or secondary admin API key. |
-Common entries include references to API keys - generic informational notifications like *Get Admin Key* and *Get Query keys*. These activities indicate requests that were made using the admin key (create or delete objects) or query key, but do not show the request itself. For information of this grain, you must configure resource logging.
-
-Alternatively, you might gain some insight through change history. In Azure portal, select the activity to open the detail page and then select "Change history" for information about the underlying operation.
+Common entries include references to API keys - generic informational notifications like *Get Admin Key* and *Get Query keys*. These activities indicate requests that were made using the admin key (create or delete objects) or query key, but don't show the request itself. For information of this grain, you must configure resource logging.
-For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
+Alternatively, you might gain some insight through change history. In the Azure portal, select the activity to open the detail page and then select "Change history" for information about the underlying operation.
-## Schemas
+<!-- Refer to https://learn.microsoft.com/azure/role-based-access-control/resource-provider-operations and link to the possible operations for your service, using the format - [<Namespace> resource provider operations](/azure/role-based-access-control/resource-provider-operations#<namespace>).
+If there are other operations not in the link, list them here in table form. -->
-The following schemas are in use by Azure AI Search. If you are building queries or custom reports, the data structures that contain Azure AI Search resource logs conform to the schema below.
+<!-- ## Other schemas. Optional section. Please keep heading in this order. If your service uses other schemas, add the following include and information. -->
+<a name="schemas"></a>
+<!-- List other schemas and their usage here. These can be resource logs, alerts, event hub formats, etc. depending on what you think is important. You can put JSON messages, API responses not listed in the REST API docs, and other similar types of info here. -->
+If you're building queries or custom reports, the data structures that contain Azure AI Search resource logs conform to the following schemas.
For resource logs sent to blob storage, each blob has one root object called **records** containing an array of log objects. Each blob contains records for all the operations that took place during the same hour. <a name="resource-log-schema"></a>- ### Resource log schema
-All resource logs available through Azure Monitor share a [common top-level schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). Azure AI Search supplements with [additional properties](#resource-log-search-props) and [operations](#resource-log-search-ops) that are unique to a search service.
+All resource logs available through Azure Monitor share a [common top-level schema](/azure/azure-monitor/essentials/resource-logs-schema#top-level-common-schema). Azure AI Search supplements with [more properties](#resource-log-search-props) and [operations](#resource-log-search-ops) that are unique to a search service.
The following example illustrates a resource log that includes common properties (TimeGenerated, Resource, Category, and so forth) and search-specific properties (OperationName and OperationVersion).
The following example illustrates a resource log that includes common properties
| TimeGenerated | Datetime | Timestamp of the operation. For example: `2021-12-07T00:00:43.6872559Z` | | Resource | String | Resource ID. For example: `/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group-name>/providers/Microsoft.Search/searchServices/<your-search-service-name>` | | Category | String | "OperationLogs". This value is a constant. OperationLogs is the only category used for resource logs. |
-| OperationName | String | The name of the operation (see the [full list of operations](#resource-log-search-ops) below). An example is `Query.Search` |
+| OperationName | String | The name of the operation (see the [full list of operations](#resource-log-search-ops)). An example is `Query.Search` |
| OperationVersion | String | The api-version used on the request. For example: `2020-06-30` | | ResultType | String |"Success". Other possible values: Success or Failure | | ResultSignature | Int | An HTTP result code. For example: `200` |
The following example illustrates a resource log that includes common properties
#### Properties schema
-The properties below are specific to Azure AI Search.
+The following properties are specific to Azure AI Search.
| Name | Type | Description and example | | - | - | -- |
The properties below are specific to Azure AI Search.
#### OperationName values (logged operations)
-The operations below can appear in a resource log.
+The following operations can appear in a resource log.
| OperationName | Description | |:- |:|
The operations below can appear in a resource log.
| DebugSessions.RetrieveProjectedIndexerExecutionHistoricalData | Execution history for enrichments projected to a knowledge store. | | Indexers.* | Applies to an indexer. Can be Create, Delete, Get, List, and Status. | | Indexes.* | Applies to a search index. Can be Create, Delete, Get, List. |
-| indexes.Prototype | This is an index created by the Import Data wizard. |
+| indexes.Prototype | This index is created by the Import Data wizard. |
| Indexing.Index | This operation is a call to [Add, Update or Delete Documents](/rest/api/searchservice/addupdate-or-delete-documents). | | Metadata.GetMetadata | A request for search service system data. | | Query.Autocomplete | An autocomplete query against an index. See [Query types and composition](search-query-overview.md). | | Query.Lookup | A lookup query against an index. See [Query types and composition](search-query-overview.md). | | Query.Search | A full text search request against an index. See [Query types and composition](search-query-overview.md). | | Query.Suggest | Type ahead query against an index. See [Query types and composition](search-query-overview.md). |
-| ServiceStats | This operation is a routine call to [Get Service Statistics](/rest/api/searchservice/get-service-statistics), either called directly or implicitly to populate a portal overview page when it is loaded or refreshed. |
+| ServiceStats | This operation is a routine call to [Get Service Statistics](/rest/api/searchservice/get-service-statistics), either called directly or implicitly to populate a portal overview page when it's loaded or refreshed. |
| Skillsets.* | Applies to a skillset. Can be Create, Delete, Get, List. |
-## See also
-
-+ See [Monitoring Azure AI Search](monitor-azure-cognitive-search.md) for concepts and instructions.
+## Related content
-+ See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitor Azure AI Search](monitor-azure-cognitive-search.md) for a description of monitoring Azure AI Search.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
search Monitor Azure Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/monitor-azure-cognitive-search.md
Title: Monitor Azure AI Search
-description: Enable resource logging, get query metrics, resource usage, and other system data about an Azure AI Search service.
--
+description: Start here to learn how to monitor Azure AI Search.
Last updated : 02/15/2024++ - Previously updated : 01/18/2024-
- - subject-monitoring
- - ignite-2023
-# Monitoring Azure AI Search
-
-[Azure Monitor](../azure-monitor/overview.md) is enabled with every subscription to provide uniform monitoring capabilities over all Azure resources, including Azure AI Search. When you create a search service, Azure Monitor collects [**activity logs**](../azure-monitor/data-sources.md#azure-activity-log) and [**platform metrics**](../azure-monitor/essentials/data-platform-metrics.md) as soon as you start using the service.
+<!--
+IMPORTANT
+To make this template easier to use, first:
+1. Search and replace AI Search with the official name of your service.
+2. Search and replace monitor-azure-cognitive-search-data-reference with the service name to use in GitHub filenames.-->
-Optionally, you can enable diagnostic settings to collect [**resource logs**](../azure-monitor/essentials/resource-logs.md). Resource logs contain detailed information about search service operations that's useful for deeper analysis and investigation.
-
-This article explains how monitoring works for Azure AI Search. It also describes the system APIs that return information about your service.
-
-> [!NOTE]
-> Azure AI Search doesn't monitor individual user access to content on the search service. If you require this level of monitoring, you'll need to implement it in your client application.
+<!-- VERSION 3.0 2024_01_07
+For background about this template, see https://review.learn.microsoft.com/en-us/help/contribute/contribute-monitoring?branch=main -->
-## Monitoring in Azure portal
+<!-- Most services can use the following sections unchanged. The sections use #included text you don't have to maintain, which changes when Azure Monitor functionality changes. Add info into the designated service-specific places if necessary. Remove #includes or template content that aren't relevant to your service.
-In the search service pages in Azure portal, you can find the current status of operations and capacity.
+At a minimum your service should have the following two articles:
- ![Azure Monitor integration in a search service](./media/search-monitor-usage/azure-monitor-search.png "Azure Monitor integration in a search service")
+1. The primary monitoring article (based on this template)
+ - Title: "Monitor AI Search"
+ - TOC Title: "Monitor"
+ - Filename: "monitor-monitor-azure-cognitive-search-data-reference.md"
-+ **Monitoring** tab in the **Overview** page summarizes key [query metrics](search-monitor-queries.md), including search latency, search queries per second, and throttled queries. On the next tab over (not shown), **Usage** shows available capacity and the quantity of indexes, indexers, data sources, and skillsets relative to the maximum allowed for your [service tier](search-sku-tier.md).
+2. A reference article that lists all the metrics and logs for your service (based on the template data-reference-template.md).
+ - Title: "AI Search monitoring data reference"
+ - TOC Title: "Monitoring data reference"
+ - Filename: "monitor-azure-cognitive-search-data-reference.md".
+-->
-+ **Activity log** on the navigation menu captures service-level events: service creation, configuration, and deletion.
+# Monitor Azure AI Search
-+ Further down the navigation menu, the **Monitoring** section includes actions for Azure Monitor, filtered for search. Here, you can enable diagnostic settings and resource logging, and specify how you want the data stored.
+<!-- Intro. Required. -->
> [!NOTE]
-> Because portal pages are refreshed every few minutes, the numbers reported are approximate, intended to give you a general sense of how well your system is handling requests. Actual metrics, such as queries per second (QPS) may be higher or lower than the number shown on the page. If precision is a requirement, consider using APIs.
-
-## Get system data from REST APIs
-
-Azure AI Search REST APIs provide the **Usage** data that's visible in the portal. This information is retrieved from your search service, which you can obtain programmatically:
-
-+ [Service Statistics (REST)](/rest/api/searchservice/get-service-statistics)
-+ [Index Statistics (REST)](/rest/api/searchservice/get-index-statistics)
-+ [Document Counts (REST)](/rest/api/searchservice/count-documents)
-+ [Indexer Status (REST)](/rest/api/searchservice/get-indexer-status)
-
-For REST calls, use an [admin API key](search-security-api-keys.md) and [Postman](search-get-started-rest.md) or another REST client to query your search service.
-
-## Monitor activity logs
-
-In Azure AI Search, [**activity logs**](../azure-monitor/data-sources.md#azure-activity-log) reflect control plane activity, such as service creation and configuration, or API key usage or management.
-
-Activity logs are collected [free of charge](../azure-monitor/cost-usage.md#pricing-model), with no configuration required. Data retention is 90 days, but you can configure durable storage for longer retention.
-
-1. In the Azure portal, find your search service. From the menu on the left, select **Activity logs** to view the logs for your search service. See [Azure Monitor activity log](../azure-monitor/essentials/activity-log.md) for general guidance on working with activity logs.
-
-1. Review the entries. Entries often include **Get Admin Key**, one entry for every call that [provided an admin API key](search-security-api-keys.md) on the request. There are no details about the call itself, just a notification that the admin key was used.
-
-1. For other entires, see the [Management REST API reference](/rest/api/searchmanagement/) for control plane activity that might appear in the log.
-
-The following screenshot shows the activity log signals that can be configured in an alert. These signals represent the entries you might see in the activity log.
--
-## Monitor metrics
-
-In Azure AI Search, [**platform metrics**](../azure-monitor/essentials/data-platform-metrics.md) measure query performance, indexing volume, and skillset invocation.
-
-Metrics are collected [free of charge](../azure-monitor/cost-usage.md#pricing-model), with no configuration required. Platform metrics are stored for 93 days. However, in the portal you can only query a maximum of 30 days' worth of platform metrics data on any single chart.
-
-In the Azure portal, find your search service. From the menu on the left, under Monitoring, select **Metrics** to open metrics explorer.
-
-The following links provide more information about working with platform metrics:
-
-+ [Tutorial: Analyze metrics for an Azure resource](../azure-monitor/essentials/tutorial-metrics.md) for general guidance on using metrics explorer.
-
-+ [Microsoft.Search/searchServices (Azure Monitor)](../azure-monitor/essentials/metrics-supported.md#microsoftsearchsearchservices) for the platform metrics reference.
+> Azure AI Search doesn't monitor individual user access to content on the search service. If you require this level of monitoring, you need to implement it in your client application.
-+ [Monitoring data reference](monitor-azure-cognitive-search-data-reference.md) for supplemental descriptions and dimensions.
+<!-- ## Insights. Optional section. If your service has insights, add the following include and information.
+<!-- Insights service-specific information. Add brief information about what your Azure Monitor insights provide here. You can refer to another article that gives details or add a screenshot. -->
-+ [Monitor queries](search-monitor-queries.md) has details about the query metrics.
+<!-- ## Resource types. Required section. -->
+For more information about the resource types for Azure AI Search, see [Azure AI Search monitoring data reference](monitor-azure-cognitive-search-data-reference.md).
-## Set up alerts
+<!-- ## Data storage. Required section. Optionally, add service-specific information about storing your monitoring data after the include. -->
+<!-- Add service-specific information about storing monitoring data here, if applicable. For example, SQL Server stores other monitoring data in its own databases. -->
-Alerts help you to identify and address issues before they become a problem for application users. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [resource logs](../azure-monitor/alerts/alerts-unified-log.md), and [activity logs](../azure-monitor/alerts/activity-log-alerts.md). Alerts are billable (see the [Pricing model](../azure-monitor/cost-usage.md#pricing-model) for details).
+<!-- METRICS SECTION START ->
-1. In the Azure portal, find your search service. From the menu on the left, under Monitoring, select **Alerts** to open metrics explorer.
+<!-- ## Platform metrics. Required section.
+ - If your service doesn't collect platform metrics, use the following include: [!INCLUDE [horz-monitor-no-platform-metrics](~/articles/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-no-platform-metrics.md)]
+ - If your service collects platform metrics, add the following include, statement, and service-specific information as appropriate. -->
+In Azure AI Search, platform metrics measure query performance, indexing volume, and skillset invocation. For a list of available metrics for Azure AI Search, see [Azure AI Search monitoring data reference](monitor-azure-cognitive-search-data-reference.md#metrics).
-1. See [Tutorial: Create a metric alert for an Azure resource](../azure-monitor/alerts/tutorial-metric-alert.md) for general guidance on setting up alerts from metrics explorer.
+<!-- Platform metrics service-specific information. Add service-specific information about your platform metrics here.-->
-The following table describes several rules. On a search service, throttling or query latency that exceeds a given threshold are the most commonly used alerts, but you might also want to be notified if a search service is deleted.
+<!-- ## Prometheus/container metrics. Optional. If your service uses containers/Prometheus metrics, add the following include and information.
+<!-- Add service-specific information about your container/Prometheus metrics here.-->
-| Alert type | Condition | Description |
-|:|:|:|
-| Search Latency (metric alert) | Whenever the average search latency is greater than a user-specified threshold (in seconds) | Send an SMS alert when average query response time exceeds the threshold. |
-| Throttled search queries percentage (metric alert) | Whenever the total throttled search queries percentage is greater than or equal to a user-specified threshold | Send an SMS alert when dropped queries begin to exceed the threshold.|
-| Delete Search Service (activity log alert) | Whenever the Activity Log has an event with Category='Administrative', Signal name='Delete Search Service (searchServices)', Level='critical' | Send an email if a search service is deleted in the subscription. |
-
-> [!NOTE]
-> Currently, there are no storage-related alerts (storage consumption data isn't aggregated or logged into the **AzureMetrics** table). To get storage alerts, you could [build a custom solution](/previous-versions/azure/azure-monitor/insights/solutions) that emits resource-related notifications, where your code checks for storage size and handles the response.
+<!-- ## System metrics. Optional. If your service uses system-imported metrics, add the following include and information.
+<!-- Add service-specific information about your system-imported metrics here.-->
-## Enable resource logging
+<!-- ## Custom metrics. Optional. If your service uses custom imported metrics, add the following include and information.
+<!-- Custom imported service-specific information. Add service-specific information about your custom imported metrics here.-->
-In Azure AI Search, [**resource logs**](../azure-monitor/essentials/resource-logs.md) capture indexing and query operations on the search service itself.
+<!-- ## Non-Azure Monitor metrics. Optional. If your service uses any non-Azure Monitor based metrics, add the following include and information.
+<!-- Non-Monitor metrics service-specific information. Add service-specific information about your non-Azure Monitor metrics here.-->
-Resource Logs aren't collected and stored until you create a diagnostic setting. A diagnostic setting specifies data collection and storage. You can create multiple settings if you want to keep metrics and log data separate, or if you want more than one of each type of destination.
+<!-- METRICS SECTION END ->
-Resource logging is billable (see the [Pricing model](../azure-monitor/cost-usage.md#pricing-model) for details), starting when you create a diagnostic setting. See [Diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for general guidance.
+<!-- LOGS SECTION START -->
-1. In the Azure portal, find your search service. From the menu on the left, under Monitoring, select **Diagnostic settings**.
+<!-- ## Resource logs. Required section.
+ - If your service doesn't collect resource logs, use the following include [!INCLUDE [horz-monitor-no-resource-logs](~/articles/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-no-resource-logs.md)]
+ - If your service collects resource logs, add the following include, statement, and service-specific information as appropriate. -->
+For the available resource log categories, their associated Log Analytics tables, and the logs schemas for Azure AI Search, see [Azure AI Search monitoring data reference](monitor-azure-cognitive-search-data-reference.md#resource-logs).
+<!-- Resource logs service-specific information. Add service-specific information about your resource logs here.
+NOTE: Azure Monitor already has general information on how to configure and route resource logs. See https://learn.microsoft.com/azure/azure-monitor/platform/diagnostic-settings. Ideally, don't repeat that information here. You can provide a single screenshot of the diagnostic settings portal experience if you want. -->
-1. Select **+ Add diagnostic setting**.
+<!-- ## Activity log. Required section. Optionally, add service-specific information about your activity log after the include. -->
+<!-- Activity log service-specific information. Add service-specific information about your activity log here. -->
+In Azure AI Search, activity logs reflect control plane activity such as service creation and configuration, or API key usage or management. Entries often include **Get Admin Key**, one entry for every call that [provided an admin API key](search-security-api-keys.md) on the request. There are no details about the call itself, just a notification that the admin key was used.
-1. Give the diagnostic setting a name. Use granular and descriptive names if you're creating more than one setting.
+The following screenshot shows Azure AI Search activity log signals you can configure in an alert.
-1. Select the logs and metrics that are in scope for this setting. Selections include "allLogs", "audit", "OperationLogs", "AllMetrics". You can exclude activity logs by selecting the "OperationLogs" category.
-
- + See [Microsoft.Search/searchServices (in Supported categories for Azure Monitor resource logs)](../azure-monitor/essentials/resource-logs-categories.md#microsoftsearchsearchservices)
-
- + See [Microsoft.Search/searchServices (in Supported metrics)](../azure-monitor/essentials/metrics-supported.md#microsoftsearchsearchservices)
-
- + See [Azure AI Search monitoring data reference](monitor-azure-cognitive-search-data-reference.md) for the extended schema
-
-1. Select **Send to Log Analytics workspace**. Kusto queries and data exploration will target the workspace.
-
-1. Optionally, select [other destinations](../azure-monitor/essentials/diagnostic-settings.md#destinations).
-1. Select **Save**.
+For other entries, see the [Management REST API reference](/rest/api/searchmanagement/) for control plane activity that might appear in the log.
+<!-- ## Imported logs. Optional section. If your service uses imported logs, add the following include and information.
+<!-- Add service-specific information about your imported logs here. -->
-Once the workspace contains data, you can run log queries:
+<!-- ## Other logs. Optional section.
+If your service has other logs that aren't resource logs or in the activity log, add information that states what they are and what they cover here. You can describe how to route them in a later section. -->
-+ See [Tutorial: Collect and analyze resource logs from an Azure resource](../azure-monitor/essentials/tutorial-resource-logs.md) for general guidance on log queries.
+<!-- LOGS SECTION END -->
-+ See [Analyze performance in Azure AI Search](search-performance-analysis.md) for examples and guidance specific to search services.
+<!-- ANALYSIS SECTION START -->
-## Sample Kusto queries
+<!-- ## Analyze data. Required section. -->
-> [!IMPORTANT]
-> When you select **Logs** from the Azure AI Search menu, Log Analytics is opened with the query scope set to the current search service. This means that log queries will only include data from that resource. If you want to query over multiple search services or combine data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+<!-- ### External tools. Required section. -->
-Kusto is the query language used for Log Analytics. The next section has some queries to get you started. See the [**Azure AI Search monitoring data reference**](monitor-azure-cognitive-search-data-reference.md) for descriptions of schema elements used in a query. See [Analyze performance in Azure AI Search](search-performance-analysis.md) for more examples and guidance specific to search service.
+<!-- ### Sample Kusto queries. Required section. If you have sample Kusto queries for your service, add them after the include. -->
+<!-- Add sample Kusto queries for your service here. -->
+The following queries can get you started. See [Analyze performance in Azure AI Search](search-performance-analysis.md) for more examples and guidance specific to search services.
-### List metrics by name
+#### List metrics by name
Return a list of metrics and the associated aggregation. The query is scoped to the current search service over the time range that you specify.
AzureMetrics
| project MetricName, Total, Count, Maximum, Minimum, Average ```
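To narrow the results, a variation like the following sketch scopes the query to a time range and a single metric. The metric name `SearchLatency` and the 24-hour window are assumptions; substitute the metric and range you care about.

```kusto
// Sketch: average and maximum values for one metric over the last day (metric name and window are assumptions)
AzureMetrics
| where TimeGenerated > ago(24h)
| where MetricName == "SearchLatency"
| project TimeGenerated, MetricName, Average, Maximum
| order by TimeGenerated desc
```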
-### List operations by name
+#### List operations by name
Return a list of operations and a count of each one.
AzureDiagnostics
| summarize count() by OperationName ```
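To see which operations dominate over a recent window, a variation like the following sketch adds a time filter and sorts by frequency. The one-day window is an assumption.

```kusto
// Sketch: operation counts over the last day, most frequent first (time window is an assumption)
AzureDiagnostics
| where TimeGenerated > ago(1d)
| summarize OperationCount = count() by OperationName
| order by OperationCount desc
```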
-### Long-running queries
+#### Long-running queries
This Kusto query against AzureDiagnostics returns `Query.Search` operations, sorted by duration (in milliseconds). For more examples of `Query.Search` queries, see [Analyze performance in Azure AI Search](search-performance-analysis.md).
AzureDiagnostics
| sort by DurationMs ```
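A fuller version of that query might look like the following sketch, which filters to `Query.Search` operations and projects the duration column described above. The projected column list is an assumption about what you'll find useful.

```kusto
// Sketch: slowest Query.Search operations first (projected columns are assumptions)
AzureDiagnostics
| where OperationName == "Query.Search"
| project TimeGenerated, OperationName, DurationMs
| sort by DurationMs desc
```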
-### Indexer status
+#### Indexer status
This Kusto query returns the status of indexer operations. Results include the operation name, description of the request (which includes the name of the indexer), result status (Success or Failure), and the [HTTP status code](/rest/api/searchservice/http-status-codes). For more information about indexer execution, see [Monitor indexer status](search-howto-monitor-indexers.md).
AzureDiagnostics
| where OperationName == "Indexers.Status" ```
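Building on that filter, a sketch like the following projects the request description and result status described above. Column names such as `Description_s` and `ResultType` are assumptions about the AzureDiagnostics schema; adjust them to match what you see in your workspace.

```kusto
// Sketch: recent indexer status entries with description and result status (column names are assumptions)
AzureDiagnostics
| where OperationName == "Indexers.Status"
| project TimeGenerated, OperationName, Description_s, ResultType
| sort by TimeGenerated desc
```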
-## Next steps
+<!-- ### AI Search service-specific analytics. Optional section.
+Add short information or links to specific articles that outline how to analyze data for your service. -->
+
+<!-- ANALYSIS SECTION END -->
+
+<!-- ALERTS SECTION START -->
+
+<!-- ## Alerts. Required section. -->
+
+<!-- ONLY if applications that work with Application Insights run on your service, add the following include. -->
+
+<!-- ### AI Search alert rules. Required section.
+**MUST HAVE** service-specific alert rules. Include useful alerts on metrics, logs, log conditions, or activity log.
+Fill in the following table with metric and log alerts that would be valuable for your service. Change the format as necessary for readability. You can instead link to an article that discusses your common alerts in detail.
+Ask your PMs if you don't know. This information is the BIGGEST request we get in Azure Monitor, so don't avoid it long term. People don't know what to monitor for best results. Be prescriptive. -->
+
+### Azure AI Search alert rules
+The following table lists common and recommended alert rules for Azure AI Search. On a search service, the most commonly used alerts are for throttling and for query latency that exceeds a given threshold, but you might also want to be notified if a search service is deleted.
+
+| Alert type | Condition | Description |
+|:|:|:|
+| Search Latency (metric alert) | Whenever the average search latency is greater than a user-specified threshold (in seconds) | Send an SMS alert when average query response time exceeds the threshold. |
+| Throttled search queries percentage (metric alert) | Whenever the total throttled search queries percentage is greater than or equal to a user-specified threshold | Send an SMS alert when dropped queries begin to exceed the threshold.|
+| Delete Search Service (activity log alert) | Whenever the Activity Log has an event with Category='Administrative', Signal name='Delete Search Service (searchServices)', Level='critical' | Send an email if a search service is deleted in the subscription. |
+
+<!-- ### Advisor recommendations. Required section. -->
+<!-- Add any service-specific advisor recommendations or screenshots here. -->
+
+<!-- ALERTS SECTION END -->
-The monitoring framework for Azure AI Search is provided by [Azure Monitor](../azure-monitor/overview.md). If you're not familiar with this service, start with [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) to review the main concepts. You can also review the following articles for Azure AI Search:
+## Related content
+<!-- You can change the wording and add more links if useful. -->
-+ [Analyze performance in Azure AI Search](search-performance-analysis.md)
-+ [Monitor queries](search-monitor-queries.md)
-+ [Monitor indexer-based indexing](search-howto-monitor-indexers.md)
+- [Azure AI Search monitoring data reference](monitor-azure-cognitive-search-data-reference.md)
+- [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource)
+- [Monitor queries](search-monitor-queries.md)
+- [Monitor indexer-based indexing](search-howto-monitor-indexers.md)
+- [Monitor client-side interactions](search-traffic-analytics.md)
+- [Visualize resource logs](search-monitor-logs-powerbi.md)
+- [Analyze performance in Azure AI Search](search-performance-analysis.md)
+- [Performance benchmarks](performance-benchmarks.md)
+- [Tips for better performance](search-performance-tips.md)
search Search Blob Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-blob-storage-integration.md
Title: Search over Azure Blob Storage content
-description: Learn about extracting text from Azure blobs and making it full-text searchable in an Azure AI Search index.
+description: Learn how to extract text from Azure blobs and make the content full-text searchable in an Azure AI Search index.
- ignite-2023 Previously updated : 02/07/2023 Last updated : 02/15/2024 # Search over Azure Blob Storage content
In this article, review the basic workflow for extracting content and metadata f
## What it means to add full text search to blob data
-Azure AI Search is a standalone search service that supports indexing and query workloads over user-defined indexes that contain your remote searchable content hosted in the cloud. Co-locating your searchable content with the query engine is necessary for performance, returning results at a speed users have come to expect from search queries.
+Azure AI Search is a standalone search service that supports indexing and query workloads over user-defined indexes that contain your private searchable content hosted in the cloud. Co-locating your searchable content with the query engine in the cloud is necessary for performance, returning results at a speed users have come to expect from search queries.
Azure AI Search integrates with Azure Blob Storage at the indexing layer, importing your blob content as search documents that are indexed into *inverted indexes* and other query structures that support free-form text queries and filter expressions. Because your blob content is indexed into a search index, you can use the full range of query features in Azure AI Search to find information in your blob content.
-Inputs are your blobs, in a single container, in Azure Blob Storage. Blobs can be almost any kind of text data. If your blobs contain images, you can add [AI enrichment](cognitive-search-concept-intro.md) to create and extract text from images.
+Inputs are your blobs, in a single container, in Azure Blob Storage. Blobs can be almost any kind of text data. If your blobs contain images, you can add [AI enrichment](cognitive-search-concept-intro.md) to create and extract text and features from images.
Output is always an Azure AI Search index, used for fast text search, retrieval, and exploration in client applications. In between is the indexing pipeline architecture itself. The pipeline is based on the *indexer* feature, discussed further on in this article.
Within Blob Storage, you'll need a container that provides source content. You c
You can start directly in your Storage Account portal page.
-1. In the left navigation page under **Data management**, select **Azure search** to select or create a search service.
+1. In the left navigation page under **Data management**, select **Azure AI Search** to select or create a search service.
-1. Follow the steps in the wizard to extract and optionally create searchable content from your blobs. The workflow is the [**Import data** wizard](cognitive-search-quickstart-blob.md).
+1. Follow the steps in the wizard to extract and optionally create searchable content from your blobs. The workflow is the [**Import data** wizard](cognitive-search-quickstart-blob.md), which creates an indexer, data source, index, and optional skillset on your Azure AI Search service.
- :::image type="content" source="media/search-blob-storage-integration/blob-blade.png" alt-text="Screenshot of the Azure search wizard in the Azure Storage portal page." border="true":::
+ :::image type="content" source="media/search-blob-storage-integration/blob-blade.png" alt-text="Screenshot of the Azure AI Search wizard in the Azure Storage portal page." border="true":::
1. Use [Search explorer](search-explorer.md) in the search portal page to query your content.
Textual content of a document is extracted into a string field named "content".
An *indexer* is a data-source-aware subservice in Azure AI Search, equipped with internal logic for sampling data, reading and retrieving data and metadata, and serializing data from native formats into JSON documents for subsequent import.
-Blobs in Azure Storage are indexed using the [blob indexer](search-howto-indexing-azure-blob-storage.md). You can invoke this indexer by using the **Azure search** command in Azure Storage, the **Import data** wizard, a REST API, or the .NET SDK. In code, you use this indexer by setting the type, and by providing connection information that includes an Azure Storage account along with a blob container. You can subset your blobs by creating a virtual directory, which you can then pass as a parameter, or by filtering on a file type extension.
+Blobs in Azure Storage are indexed using the [blob indexer](search-howto-indexing-azure-blob-storage.md). You can invoke this indexer by using the **Azure AI Search** command in Azure Storage, the **Import data** wizard, a REST API, or the .NET SDK. In code, you use this indexer by setting the type, and by providing connection information that includes an Azure Storage account along with a blob container. You can subset your blobs by creating a virtual directory, which you can then pass as a parameter, or by filtering on a file type extension.
An indexer ["cracks a document"](search-indexer-overview.md#document-cracking), opening a blob to inspect content. After connecting to the data source, it's the first step in the pipeline. For blob data, this is where PDF, Office docs, and other content types are detected. Document cracking with text extraction is no charge. If your blobs contain image content, images are ignored unless you [add AI enrichment](cognitive-search-concept-intro.md). Standard indexing applies only to text content.
You can control which blobs are indexed, and which are skipped, by the blob's fi
Include specific file extensions by setting `"indexedFileNameExtensions"` to a comma-separated list of file extensions (with a leading dot). Exclude specific file extensions by setting `"excludedFileNameExtensions"` to the extensions that should be skipped. If the same extension is in both lists, it will be excluded from indexing. ```http
-PUT /indexers/[indexer name]?api-version=2020-06-30
+PUT /indexers/[indexer name]?api-version=2023-11-01
{ "parameters" : { "configuration" : {
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
This section assumes manual approval and the portal for this step, but you can a
After the private endpoint is approved, Azure AI Search creates the necessary DNS zone mappings in the DNS zone that's created for it.
-The private endpoint link on the page only resolves to the private link definition in Azure AI Search if there's shared tenancy between Azure AI Search backend private link and the Azure PaaS resource.
+Although the private endpoint link on the **Networking** page is active, it won't resolve.
:::image type="content" source="media/search-indexer-howto-secure-access/private-endpoint-link.png" alt-text="Screenshot of the private endpoint link in the Azure PaaS networking page.":::
-A status message of `"The access token is from the wrong issuer"` and `must match the tenant associated with this subscription` appears because the backend private endpoint resource is provisioned in a Microsoft-managed tenant, while the linked resource (Azure AI Search) is in your tenant. It's by design you can't access the private endpoint resource by selecting the private endpoint connection link.
+Selecting the link produces an error. A status message of `"The access token is from the wrong issuer"` and `must match the tenant associated with this subscription` appears because the backend private endpoint resource is provisioned by Microsoft in a Microsoft-managed tenant, while the linked resource (Azure AI Search) is in your tenant. By design, you can't access the private endpoint resource by selecting the private endpoint connection link.
Follow the instructions in the next section to check the status of your shared private link.
search Search Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage.md
description: Manage an Azure AI Search resource using the Azure portal.
-tags: azure-portal
- ignite-2023
search Search More Like This https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-more-like-this.md
- ignite-2023- Previously updated : 10/06/2022+ Last updated : 02/16/2024 # moreLikeThis (preview) in Azure AI Search
search Search Performance Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-tips.md
- ignite-2023 Previously updated : 04/20/2023 Last updated : 02/15/2024 # Tips for better performance in Azure AI Search
-This article is a collection of tips and best practices that are often recommended for boosting performance. Knowing which factors are most likely to impact search performance can help you avoid inefficiencies and get the most out of your search service. Some key factors include:
+This article is a collection of tips and best practices for boosting query and indexing performance. Knowing which factors are most likely to impact search performance can help you avoid inefficiencies and get the most out of your search service. Some key factors include:
+ Index composition (schema and size) + Query design
When query performance is slowing down in general, adding more replicas frequent
One positive side-effect of adding partitions is that slower queries sometimes perform faster due to parallel computing. We've noted parallelization on low selectivity queries, such as queries that match many documents, or facets providing counts over a large number of documents. Since significant computation is required to score the relevancy of the documents, or to count the numbers of documents, adding extra partitions helps queries complete faster.
-To add partitions, use [Azure portal](search-create-service-portal.md), [PowerShell](search-manage-powershell.md), [Azure CLI](search-manage-azure-cli.md), or a management SDK.
+To add partitions, use [Azure portal](search-capacity-planning.md#add-or-reduce-replicas-and-partitions), [PowerShell](search-manage-powershell.md), [Azure CLI](search-manage-azure-cli.md), or a management SDK.
## Service capacity
An important benefit of added memory is that more of the index can be cached, re
### Tip: Consider alternatives to regular expression queries
-[Regular expression queries](query-lucene-syntax.md#bkmk_regex) or regex can be particularly expensive. While they can be very useful for complex searches, they also may require a lot of processing power to be executed, especially if the regular expression has a lot of complexity or when searching through a large amount of data. This would result in high search latency. In order to reduce the search latency, try to simplify the regular expression or break the complex query down into smaller, more manageable queries.
-
+[Regular expression queries](query-lucene-syntax.md#bkmk_regex) or regex can be particularly expensive. While they can be very useful for advanced searches, execution can require a lot of processing power, especially if the regular expression is complicated or if you're searching through a large amount of data. All of these factors contribute to high search latency. As a mitigation, try to simplify the regular expression or break the complex query down into smaller, more manageable queries.
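As an illustration, the first query below uses a regular expression (full Lucene syntax), and the second shows a simpler prefix query that might cover the same need at lower cost. The terms are hypothetical; whether a prefix query is an acceptable substitute depends on your data and requirements.

```console
# Regular expression query (requires queryType=full); can be expensive
search=/seat[a-z]+le/&queryType=full

# Simpler prefix (wildcard) alternative
search=seat*
```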
## Next steps
search Search Query Fuzzy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-fuzzy.md
Title: Fuzzy search
-description: Implement a fuzzy search query for a "did you mean" search experience. Fuzzy search auto-corrects a misspelled term or typo on the query.
+description: Implement a fuzzy search query for a "did you mean" search experience. Fuzzy search autocorrects a misspelled term or typo on the query.
- ignite-2023 Previously updated : 04/20/2023 Last updated : 02/16/2024 # Fuzzy search to correct misspellings and typos
Azure AI Search supports fuzzy search, a type of query that compensates for typo
## What is fuzzy search?
-It's a query expansion exercise that produces a match on terms having a similar composition. When a fuzzy search is specified, the search engine builds a graph (based on [deterministic finite automaton theory](https://en.wikipedia.org/wiki/Deterministic_finite_automaton)) of similarly composed terms, for all whole terms in the query. For example, if your query includes three terms "university of washington", a graph is created for every term in the query `search=university~ of~ washington~` (there's no stop-word removal in fuzzy search, so "of" gets a graph).
+It's a query expansion exercise that produces a match on terms having a similar composition. When a fuzzy search is specified, the search engine builds a graph (based on [deterministic finite automaton theory](https://en.wikipedia.org/wiki/Deterministic_finite_automaton)) of similarly composed terms, for all whole terms in the query. For example, if your query includes three terms `"university of washington"`, a graph is created for every term in the query `search=university~ of~ washington~` (there's no stop-word removal in fuzzy search, so `"of"` gets a graph).
The graph consists of up to 50 expansions, or permutations, of each term, capturing both correct and incorrect variants in the process. The engine then returns the topmost relevant matches in the response.
-For a term like "university", the graph might have "unversty, universty, university, universe, inverse". Any documents that match on those in the graph are included in results. In contrast with other queries that analyze the text to handle different forms of the same word ("mice" and "mouse"), the comparisons in a fuzzy query are taken at face value without any linguistic analysis on the text. "Universe" and "inverse", which are semantically different, will match because the syntactic discrepancies are small.
+For a term like "university", the graph might have `"unversty, universty, university, universe, inverse"`. Any documents that match on those in the graph are included in results. In contrast with other queries that analyze the text to handle different forms of the same word ("mice" and "mouse"), the comparisons in a fuzzy query are taken at face value without any linguistic analysis on the text. "Universe" and "inverse", which are semantically different, will match because the syntactic discrepancies are small.
A match succeeds if the discrepancies are limited to two or fewer edits, where an edit is an inserted, deleted, substituted, or transposed character. The string correction algorithm that specifies the differential is the [Damerau-Levenshtein distance](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) metric. It's described as the "minimum number of operations (insertions, deletions, substitutions, or transpositions of two adjacent characters) required to change one word into the other".
Fuzzy queries are constructed using the full Lucene query syntax, invoking the [
Here's an example of a query request that invokes fuzzy search. It includes four terms, two of which are misspelled: ```http
-POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30
+POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2023-11-01
{ "search": "seatle~ waterfront~ view~ hotle~", "queryType": "full",
In the response, because you added hit highlighting, formatting is applied to "s
"Description": [ "Test queries with <em>special</em> characters, plus strings for MSFT, SQL and Java." ]
+}
```
-Try the request again, misspelling "special" by taking out several letters ("pe"):
+Try the request again, misspelling "special" by taking out several letters (`"pe"`):
```console search=scial~&highlight=Description ```
-So far, no change to the response. Using the default of 2 degrees distance, removing two characters "pe" from "special" still allows for a successful match on that term.
+So far, no change to the response. Given the default of 2 degrees distance, removing two characters `"pe"` from "special" still allows for a successful match on that term.
```output "@search.highlights": { "Description": [ "Test queries with <em>special</em> characters, plus strings for MSFT, SQL and Java." ]
+}
```
-Trying one more request, further modify the search term by taking out one last character for a total of three deletions (from "special" to "scal"):
+Trying one more request, further modify the search term by taking out one last character for a total of three deletions (from "special" to `"scal"`):
```console search=scal~&highlight=Description
search=scal~&highlight=Description
Notice that the same response is returned, but now instead of matching on "special", the fuzzy match is on "SQL". ```output
- "@search.score": 0.4232868,
- "@search.highlights": {
- "Description": [
- "Mix of special characters, plus strings for MSFT, <em>SQL</em>, 2019, Linux, Java."
- ]
+"@search.score": 0.4232868,
+"@search.highlights": {
+ "Description": [
+ "Mix of special characters, plus strings for MSFT, <em>SQL</em>, 2019, Linux, Java."
+ ]
+}
``` The point of this expanded example is to illustrate the clarity that hit highlighting can bring to ambiguous results. In all cases, the same document is returned. Had you relied on document IDs to verify a match, you might have missed the shift from "special" to "SQL".
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
- ignite-2023 Previously updated : 01/14/2023 Last updated : 02/15/2024 # Connect to Azure AI Search using key authentication
-Azure AI Search offers key-based authentication that you can use on connections to your search service. An API key is a unique string composed of 52 randomly generated numbers and letters. A request made to a search service endpoint will be accepted if both the request and the API key are valid.
+Azure AI Search offers key-based authentication that you can use on connections to your search service. An API key is a unique string composed of 52 randomly generated numbers and letters. A request made to a search service endpoint is accepted if both the request and the API key are valid.
> [!NOTE] > A quick note about how "key" terminology is used in Azure AI Search. An "API key", which is described in this article, refers to a GUID used for authenticating a request. A separate term, "document key", refers to a unique string in your indexed content that's used to uniquely identify documents in a search index.
You can specify API keys in a request header for REST API calls, or in code that
Best practices for using hard-coded keys in source files include:
-+ During early development and proof-of-concept testing when security is looser, use sample or public data.
++ Use API keys if data disclosure isn't a risk (for example, when using sample data) and if you're operating behind a firewall. Exposure of API keys is a risk to both data and to unauthorized use of your search service.
+
++ If you're publishing samples and training materials, check your code to make sure you didn't leave valid API keys behind.
+ For mature solutions or production scenarios, switch to [Microsoft Entra ID and role-based access](search-security-rbac.md) to eliminate the need for having hard-coded keys. Or, if you want to continue using API keys, be sure to always monitor [who has access to your API keys](#secure-api-keys) and [regenerate API keys](#regenerate-admin-keys) on a regular cadence.
Best practices for using hard-coded keys in source files include:
Key authentication is built in so no action is required. By default, the portal uses API keys to authenticate the request automatically. However, if you [disable API keys](search-security-rbac.md#disable-api-key-authentication) and set up role assignments, the portal uses role assignments instead.
-In Azure AI Search, most tasks can be performed in Azure portal, including object creation, indexing through the Import data wizard, and queries through Search explorer.
+In Azure AI Search, most tasks can be performed in Azure portal, including object creation, indexing through the import wizards, and queries through Search explorer.
### [**PowerShell**](#tab/azure-ps-use)
A script example showing API key usage for various operations can be found at [Q
### [**REST API**](#tab/rest-use)
-Set an admin key in the request header using the syntax `api-key` equal to your key. Admin keys are used for most operations, including create, delete, and update. Admin keys are also used on requests issued to the search service itself, such as listing objects or requesting service statistics. see [Quickstart: Create a search index using REST](search-get-started-rest.md) for a more detailed example.
+Set an admin key in the request header using the syntax `api-key` equal to your key. Admin keys are used for most operations, including create, delete, and update. Admin keys are also used on requests issued to the search service itself, such as listing objects or requesting service statistics. See [Quickstart: Create a search index using REST](search-get-started-rest.md) for a more detailed example.
:::image type="content" source="media/search-security-api-keys/rest-headers.png" alt-text="Screenshot of the Headers section of a request in Postman." border="true":::
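For example, a request that lists indexes might carry the key like the following sketch (the placeholder values are assumptions you replace with your own service name and key):

```http
GET https://[service name].search.windows.net/indexes?api-version=2023-11-01
  Content-Type: application/json
  api-key: [admin key]
```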
search Search Security Get Encryption Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-get-encryption-keys.md
- ignite-2023 Previously updated : 09/09/2022 Last updated : 02/16/2024 # Find encrypted objects and information
-In Azure AI Search, customer-managed encryption keys are created, stored, and managed in Azure Key Vault. If you need to determine whether an object is encrypted, or what key name or version was used in Azure Key Vault, use the REST API or an Azure SDK to retrieve the **encryptionKey** property from the object definition in your search service.
+In Azure AI Search, customer-managed encryption keys are created, stored, and managed in Azure Key Vault. If you need to determine whether an object is encrypted, or what key name or version is used in Azure Key Vault, use the REST API or an Azure SDK to retrieve the **encryptionKey** property from the object definition in your search service.
-Objects that aren't encrypted with a customer-managed key will have an empty **encryptionKey** property. Otherwise, you might see a definition similar to the following example.
+Objects that aren't encrypted with a customer-managed key have an empty **encryptionKey** property. Otherwise, you might see a definition similar to the following example.
```json
-"encryptionKey": {
-"keyVaultUri": "https://demokeyvault.vault.azure.net",
-"keyVaultKeyName": "myEncryptionKey",
-"keyVaultKeyVersion": "eaab6a663d59439ebb95ce2fe7d5f660",
-"accessCredentials": {
- "applicationId": "00000000-0000-0000-0000-000000000000",
- "applicationSecret": "myApplicationSecret"
- }
+"encryptionKey":{
+ "keyVaultUri":"https://demokeyvault.vault.azure.net",
+ "keyVaultKeyName":"myEncryptionKey",
+ "keyVaultKeyVersion":"eaab6a663d59439ebb95ce2fe7d5f660",
+ "accessCredentials":{
+ "applicationId":"00000000-0000-0000-0000-000000000000",
+ "applicationSecret":"myApplicationSecret"
+ }
} ``` The **encryptionKey** construct is the same for all encrypted objects. It's a first-level property, on the same level as the object name and description.
-## Get the admin API key
+## Permissions for retrieving object definitions
-Before you can retrieve object definitions from a search service, you'll need to provide an admin API key. Admin API keys are required on requests that query for object definitions and metadata. The easiest way to get the admin API key is through the portal.
+You must have [Search Service Contributor](search-security-rbac.md#built-in-roles-used-in-search) or equivalent permissions. To use [key-based authentication](search-security-api-keys.md) instead, provide an admin API key. Admin permissions are required on requests that return object definitions and metadata. The easiest way to get the admin API key is through the portal.
1. Sign in to the [Azure portal](https://portal.azure.com/) and open the search service overview page.
For the remaining steps, switch to PowerShell and the REST API. The portal doesn
Use PowerShell and REST to run the following commands to set up the variables and get object definitions.
-Alternatively, you can also use the Azure SDKs for [.NET](/dotnet/api/azure.search.documents.indexes.searchindexclient.getindexes), [Python](/python/api/azure-search-documents/azure.search.documents.indexes.searchindexclient), [JavaScript](/javascript/api/@azure/search-documents/searchindexclient), and [Java](/java/api/com.azure.search.documents.indexes.searchindexclient.getindex).
+Alternatively, you can use the Azure SDKs for [.NET](/dotnet/api/azure.search.documents.indexes.searchindexclient.getindexes), [Python](/python/api/azure-search-documents/azure.search.documents.indexes.searchindexclient), [JavaScript](/javascript/api/@azure/search-documents/searchindexclient), and [Java](/java/api/com.azure.search.documents.indexes.searchindexclient.getindex).
First, connect to your Azure account.
First, connect to your Azure account.
Connect-AzAccount ```
+If you have more than one active subscription in your tenant, specify the subscription containing your search service:
+
+```powershell
+ Set-AzContext -Subscription <your-subscription-ID>
+```
+ Set up the headers used on each request in the current session. Provide the admin API key used for search service authentication. ```powershell
$headers = @{
To return a list of all search indexes, set the endpoint to the indexes collection. ```powershell
-$uri= 'https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes?api-version=2020-06-30&$select=name'
+$uri= 'https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes?api-version=2023-11-01&$select=name'
Invoke-RestMethod -Uri $uri -Headers $headers | ConvertTo-Json ``` To return a specific index definition, provide its name in the path. The encryptionKey property is at the end. ```powershell
-$uri= 'https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes/<YOUR-INDEX-NAME>?api-version=2020-06-30'
+$uri= 'https://<YOUR-SEARCH-SERVICE>.search.windows.net/indexes/<YOUR-INDEX-NAME>?api-version=2023-11-01'
Invoke-RestMethod -Uri $uri -Headers $headers | ConvertTo-Json ``` To return synonym maps, set the endpoint to the synonyms collection and then send the request. ```powershell
-$uri= 'https://<YOUR-SEARCH-SERVICE>.search.windows.net/synonyms?api-version=2020-06-30&$select=name'
+$uri= 'https://<YOUR-SEARCH-SERVICE>.search.windows.net/synonyms?api-version=2023-11-01&$select=name'
Invoke-RestMethod -Uri $uri -Headers $headers | ConvertTo-Json ```
-The following example returns a specific synonym map definition, including the encryptionKey property at the end.
+The following example returns a specific synonym map definition. The encryptionKey property is toward the end of the definition.
```powershell
-$uri= 'https://<YOUR-SEARCH-SERVICE>.search.windows.net/synonyms/<YOUR-SYNONYM-MAP-NAME>?api-version=2020-06-30'
+$uri= 'https://<YOUR-SEARCH-SERVICE>.search.windows.net/synonyms/<YOUR-SYNONYM-MAP-NAME>?api-version=2023-11-01'
Invoke-RestMethod -Uri $uri -Headers $headers | ConvertTo-Json ```
search Search Security Trimming For Azure Search With Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-trimming-for-azure-search-with-aad.md
Title: Security filters to trim results using MIcrosoft Entra ID
+ Title: Security filters to trim results using Microsoft Entra ID
description: Access control at the document level for search results, using security filters and Microsoft Entra identities.
Previously updated : 03/24/2023 Last updated : 02/15/2024 - devx-track-csharp - ignite-2023
Your index in Azure AI Search must have a [security field](search-security-trimm
You must have Microsoft Entra administrator permissions (Owner or administrator) to create users, groups, and associations.
-Your application must also be registered with Microsoft Entra ID as a multi-tenant app, as described in the following procedure.
+Your application must also be registered with Microsoft Entra ID as a multitenant app, as described in the following procedure.
<a name='register-your-application-with-azure-active-directory'></a>
Your application must also be registered with Microsoft Entra ID as a multi-tena
This step integrates your application with Microsoft Entra ID for the purpose of accepting sign-ins of user and group accounts. If you aren't a tenant admin in your organization, you might need to [create a new tenant](../active-directory/develop/quickstart-create-new-tenant.md) to perform the following steps.
-1. In [Azure portal](https://portal.azure.com), find the Microsoft Entra tenant.
+1. In [Azure portal](https://portal.azure.com), find the Microsoft Entra ID tenant.
1. On the left, under **Manage**, select **App registrations**, and then select **New registration**.
-1. Give the registration a name, perhaps a name that's similar to the search application name. Select **Register**.
+1. Give the registration a name, perhaps a name that's similar to the search application name. Refer to [this article](/entra/external-id/customers/how-to-register-ciam-app) for information about other optional properties.
+
+1. Select **Register**.
1. Once the app registration is created, copy the Application (client) ID. You'll need to provide this string to your application. If you're stepping through the DotNetHowToSecurityTrimming, paste this value into the **app.config** file.
- Repeat for the Tenant ID.
+1. Copy the Directory (tenant) ID.
- :::image type="content" source="media/search-manage-encryption-keys/cmk-application-id.png" alt-text="Application ID in the Essentials section":::
+ :::image type="content" source="media/search-manage-encryption-keys/cmk-application-id.png" alt-text="Screenshot of the application ID in the Essentials section.":::
1. On the left, select **API permissions** and then select **Add a permission**.
search Search Semi Structured Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-semi-structured-data.md
- ignite-2023 Previously updated : 01/18/2023 Last updated : 02/16/2024 #Customer intent: As a developer, I want an introduction the indexing Azure blob data for Azure AI Search. # Tutorial: Index JSON blobs from Azure Storage using REST
-Azure AI Search can index JSON documents and arrays in Azure Blob Storage using an [indexer](search-indexer-overview.md) that knows how to read semi-structured data. Semi-structured data contains tags or markings which separate content within the data. It splits the difference between unstructured data, which must be fully indexed, and formally structured data that adheres to a data model, such as a relational database schema, that can be indexed on a per-field basis.
+Azure AI Search can index JSON documents and arrays in Azure Blob Storage using an [indexer](search-indexer-overview.md) that knows how to read semi-structured data. Semi-structured data contains tags or markings which separate content within the data. It splits the difference between unstructured data, which must be fully indexed, and formally structured data that adheres to a data model, such as a relational database schema that can be indexed on a per-field basis.
This tutorial uses Postman and the [Search REST APIs](/rest/api/searchservice/) to perform the following tasks:
If possible, create both in the same region and resource group for proximity and
### Start with Azure Storage
-1. Sign in to the [Azure portal](https://portal.azure.com) and click **+ Create Resource**.
+1. Sign in to the [Azure portal](https://portal.azure.com) and select **+ Create Resource**.
1. Search for *storage account* and select Microsoft's Storage Account offering.
- :::image type="content" source="media/cognitive-search-tutorial-blob/storage-account.png" alt-text="Create Storage account" border="false":::
+ :::image type="content" source="media/cognitive-search-tutorial-blob/storage-account.png" alt-text="Screenshot of the Create Storage account page.":::
1. In the Basics tab, the following items are required. Accept the defaults for everything else.
If possible, create both in the same region and resource group for proximity and
+ **Account Kind**. Choose the default, *StorageV2 (general purpose v2)*.
-1. Click **Review + Create** to create the service.
+1. Select **Review + Create** to create the service.
-1. Once it's created, click **Go to the resource** to open the Overview page.
+1. Once it's created, select **Go to the resource** to open the Overview page.
-1. Click **Blobs** service.
+1. Select **Blob service**.
1. [Create a Blob container](../storage/blobs/storage-quickstart-blobs-portal.md) to contain sample data. You can set the Public Access Level to any of its valid values. 1. After the container is created, open it and select **Upload** on the command bar.
- :::image type="content" source="media/search-semi-structured-data/upload-command-bar.png" alt-text="Upload on command bar" border="false":::
+ :::image type="content" source="media/search-semi-structured-data/upload-command-bar.png" alt-text="Screenshot of upload on the command bar.":::
-1. Navigate to the folder containing the sample files. Select all of them and then click **Upload**.
+1. Navigate to the folder containing the sample files. Select all of them and then select **Upload**.
- :::image type="content" source="media/search-semi-structured-data/clinicalupload.png" alt-text="Upload files" border="false":::
+ :::image type="content" source="media/search-semi-structured-data/clinicalupload.png" alt-text="Screenshot of the Upload files page.":::
After the upload completes, the files should appear in their own subfolder inside the data container.
After the upload completes, the files should appear in their own subfolder insid
The next resource is Azure AI Search, which you can [create in the portal](search-create-service-portal.md). You can use the Free tier to complete this walkthrough.
-As with Azure Blob Storage, take a moment to collect the access key. Further on, when you begin structuring requests, you will need to provide the endpoint and admin api-key used to authenticate each request.
+As with Azure Blob Storage, take a moment to collect the access key. Further on, when you begin structuring requests, provide the endpoint and admin api-key used to authenticate each request.
### Get a key and URL
-REST calls require the service URL and an access key on every request. A search service is created with both, so if you added Azure AI Search to your subscription, follow these steps to get the necessary information:
+This tutorial uses [key-based authentication](search-security-api-keys.md) on connections to the search service endpoint. In this step, get the service URL and an access key from the portal:
-1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
+1. Sign in to the [Azure portal](https://portal.azure.com), and in your search service **Overview** page, copy the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
-1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
+1. Under **Settings** > **Keys**, copy an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
- :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Get an HTTP endpoint and access key" border="false":::
-
-All requests require an api-key on every request sent to your service. Having a valid key establishes trust, on a per request basis, between the application sending the request and the service that handles it.
+ :::image type="content" source="media/search-get-started-rest/get-url-key.png" lightbox="media/search-get-started-rest/get-url-key.png" alt-text="Screenshot of the HTTP endpoint and access key." border="false":::
## 2 - Set up Postman
-Start Postman and set up an HTTP request. If you are unfamiliar with this tool, see [Create a search index using REST APIs](search-get-started-rest.md).
+Start Postman and set up an HTTP request. If you're unfamiliar with this tool, see [Create a search index using REST APIs](search-get-started-rest.md).
The request methods for every call in this tutorial are **POST** and **GET**. You'll make three API calls to your search service to create a data source, an index, and an indexer. The data source includes a pointer to your storage account and your JSON data. Your search service makes the connection when loading the data. In Headers, set "Content-type" to `application/json` and set `api-key` to the admin api-key of your Azure AI Search service. Once you set the headers, you can use them for every request in this exercise.
- :::image type="content" source="media/search-get-started-rest/postman-url.png" alt-text="Postman request URL and header" border="false":::
+ :::image type="content" source="media/search-get-started-rest/postman-url.png" alt-text="Screenshot of a Postman request URL and header." border="false":::
-URIs must specify an api-version and each call should return a **201 Created**. The generally available api-version for using JSON arrays is `2020-06-30`.
+URIs must specify an api-version and each call should return a **201 Created**. The generally available api-version for using JSON arrays is `2023-11-01`.
## 3 - Create a data source The [Create Data Source API](/rest/api/searchservice/create-data-source) creates an Azure AI Search object that specifies what data to index.
-1. Set the endpoint of this call to `https://[service name].search.windows.net/datasources?api-version=2020-06-30`. Replace `[service name]` with the name of your search service.
+1. Set the endpoint of this call to `https://[service name].search.windows.net/datasources?api-version=2023-11-01`. Replace `[service name]` with the name of your search service.
1. Copy the following JSON into the request body.
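   The body follows the Create Data Source contract: a name, the `azureblob` type, a connection string, and the container holding your uploaded files. The following sketch shows that shape; the name, connection string, and container values are placeholders you replace with your own.

   ```json
   {
       "name" : "clinical-trials-json-ds",
       "type" : "azureblob",
       "credentials" : { "connectionString" : "<your storage account connection string>" },
       "container" : { "name" : "<your container name>" }
   }
   ```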
The [Create Data Source API](/rest/api/searchservice/create-data-source) creates
The second call is [Create Index API](/rest/api/searchservice/create-index), creating an Azure AI Search index that stores all searchable data. An index specifies all the parameters and their attributes.
-1. Set the endpoint of this call to `https://[service name].search.windows.net/indexes?api-version=2020-06-30`. Replace `[service name]` with the name of your search service.
+1. Set the endpoint of this call to `https://[service name].search.windows.net/indexes?api-version=2023-11-01`. Replace `[service name]` with the name of your search service.
1. Copy the following JSON into the request body.
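   An index body defines a name and a fields collection, where each field has a type and attributes. The following sketch illustrates the shape using fields that appear in later queries in this tutorial (Gender, MinimumAge, MaximumAge, metadata_storage_size); the key field choice and the exact attributes are assumptions, and the tutorial's full schema has more fields.

   ```json
   {
       "name" : "clinical-trials-json-index",
       "fields" : [
           { "name" : "metadata_storage_path", "type" : "Edm.String", "key" : true, "searchable" : false },
           { "name" : "Gender", "type" : "Edm.String", "searchable" : true, "filterable" : true },
           { "name" : "MinimumAge", "type" : "Edm.Int32", "filterable" : true, "sortable" : true },
           { "name" : "MaximumAge", "type" : "Edm.Int32", "filterable" : true, "sortable" : true },
           { "name" : "metadata_storage_size", "type" : "Edm.Int64", "filterable" : true, "sortable" : true }
       ]
   }
   ```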
The second call is [Create Index API](/rest/api/searchservice/create-index), cre
An indexer connects to the data source, imports data into the target search index, and optionally provides a schedule to automate the data refresh. The REST API is [Create Indexer](/rest/api/searchservice/create-indexer).
-1. Set the URI for this call to `https://[service name].search.windows.net/indexers?api-version=2020-06-30`. Replace `[service name]` with the name of your search service.
+1. Set the URI for this call to `https://[service name].search.windows.net/indexers?api-version=2023-11-01`. Replace `[service name]` with the name of your search service.
1. Copy the following JSON into the request body.
An indexer connects to the data source, imports data into the target search inde
} ```
-1. Send the request. The request is processed immediately. When the response comes back, you will have an index that is full-text searchable. The response should look like:
+1. Send the request. The request is processed immediately. The response should look like:
```json {
You can start searching as soon as the first document is loaded.
1. Change the verb to **GET**.
-1. Set the URI for this call to `https://[service name].search.windows.net/indexes/clinical-trials-json-index/docs?search=*&api-version=2020-06-30&$count=true`. Replace `[service name]` with the name of your search service.
+1. Set the URI for this call to `https://[service name].search.windows.net/indexes/clinical-trials-json-index/docs?search=*&api-version=2023-11-01&$count=true`. Replace `[service name]` with the name of your search service.
1. Send the request. This is an unspecified full text search query that returns all of the fields marked as retrievable in the index, along with a document count. The response should look like:
You can start searching as soon as the first document is loaded.
. . . ```
-1. Add the `$select` query parameter to limit the results to fewer fields: `https://[service name].search.windows.net/indexes/clinical-trials-json-index/docs?search=*&$select=Gender,metadata_storage_size&api-version=2020-06-30&$count=true`. For this query, 100 documents match, but by default, Azure AI Search only returns 50 in the results.
+1. Add the `$select` query parameter to limit the results to fewer fields: `https://[service name].search.windows.net/indexes/clinical-trials-json-index/docs?search=*&$select=Gender,metadata_storage_size&api-version=2023-11-01&$count=true`. For this query, 100 documents match, but by default, Azure AI Search only returns 50 in the results.
- :::image type="content" source="media/search-semi-structured-data/lastquery.png" alt-text="Parameterized query" border="false":::
+ :::image type="content" source="media/search-semi-structured-data/lastquery.png" alt-text="Screenshot of a parameterized query." border="false":::
1. An example of more complex query would include `$filter=MinimumAge ge 30 and MaximumAge lt 75`, which returns only results where the parameters MinimumAge is greater than or equal to 30 and MaximumAge is less than 75. Replace the `$select` expression with the `$filter` expression.
- :::image type="content" source="media/search-semi-structured-data/metadatashort.png" alt-text="Semi-structured search" border="false":::
+ :::image type="content" source="media/search-semi-structured-data/metadatashort.png" alt-text="Screenshot of a semi-structured search." border="false":::
-You can also use Logical operators (and, or, not) and comparison operators (eq, ne, gt, lt, ge, le). String comparisons are case-sensitive. For more information and examples, see [Create a simple query](search-query-simple-examples.md).
+You can also use logical operators (and, or, not) and comparison operators (eq, ne, gt, lt, ge, le). String comparisons are case-sensitive. For more information and examples, see [Create a query](search-query-simple-examples.md).
> [!NOTE] > The `$filter` parameter only works with metadata that were marked filterable at the creation of your index. ## Reset and rerun
-In the early experimental stages of development, the most practical approach for design iteration is to delete the objects from Azure AI Search and allow your code to rebuild them. Resource names are unique. Deleting an object lets you recreate it using the same name.
-
-You can use the portal to delete indexes, indexers, and data sources. Or use **DELETE** and provide URLs to each object. The following command deletes an indexer.
+Indexers can be reset, which clears execution history and allows a full rerun. The following POST requests reset the indexer and then rerun it.
```http
-DELETE https://[YOUR-SERVICE-NAME].search.windows.net/indexers/clinical-trials-json-indexer?api-version=2020-06-30
+POST https://[service name].search.windows.net/indexers('{indexerName}')/search.reset?api-version=2023-11-01
```
-Status code 204 is returned on successful deletion.
+```http
+POST https://[service name].search.windows.net/indexers('{indexerName}')/search.run?api-version=2023-11-01
+```
## Clean up resources When you're working in your own subscription, at the end of a project, it's a good idea to remove the resources that you no longer need. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
-You can find and manage resources in the portal, using the All resources or Resource groups link in the left-navigation pane.
+You can use the portal to delete indexes, indexers, and data sources. Or use **DELETE** and provide URLs to each object. The following command deletes an indexer.
+
+```http
+DELETE https://[YOUR-SERVICE-NAME].search.windows.net/indexers/clinical-trials-json-indexer?api-version=2023-11-01
+```
+
+Status code 204 is returned on successful deletion.
## Next steps
search Search What Is An Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-an-index.md
All indexing and query requests target an index. Endpoints are usually one of th
| `<your-service>.search.windows.net/indexes` | Targets the indexes collection. Used when creating, listing, or deleting an index. Admin rights are required for these operations, available through admin [API keys](search-security-api-keys.md) or a [Search Contributor role](search-security-rbac.md#built-in-roles-used-in-search). | | `<your-service>.search.windows.net/indexes/<your-index>/docs` | Targets the documents collection of a single index. Used when querying an index or data refresh. For queries, read rights are sufficient, and available through query API keys or a data reader role. For data refresh, admin rights are required. |
-Search subscribers, or the person who created the search service, can manage the search service in the Azure portal. An Azure subscription requires Contributor or above permissions to create or delete services. You can [sign in to the Azure portal](https://portal.azure.com) for a direct connection to your search service.
+#### How to connect to Azure AI Search
-For other clients, we recommend reviewing the quickstarts for connection steps:
+1. [Start with the Azure portal](https://portal.azure.com). Azure subscribers, or the person who created the search service, can manage the search service in the Azure portal. An Azure subscription requires Contributor or above permissions to create or delete services. This permission level is sufficient for fully managing a search service in the Azure portal.
-+ [Quickstart: REST](search-get-started-rest.md)
-+ [Quickstart: Azure SDKs](search-get-started-text.md)
+1. Try other clients for programmatic access. We recommend the quickstarts for first steps:
+
+ + [Quickstart: REST](search-get-started-rest.md)
+ + [Quickstart: Azure SDKs](search-get-started-text.md)
## Next steps
You can get hands-on experience creating an index using almost any sample or wal
But you'll also want to become familiar with methodologies for loading an index with data. Index definition and data import strategies are defined in tandem. The following articles provide more information about creating and loading an index.

+ [Create a search index](search-how-to-create-search-index.md)
+ [Create a vector store](vector-search-how-to-create-index.md)
+ [Create an index alias](search-how-to-alias.md)
+ [Data import overview](search-what-is-data-import.md)
+ [Load an index](search-how-to-load-search-index.md)
search Vector Search How To Create Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md
Follow these steps to index vector data:
> + Add one or more vector fields > + Load prevectorized data [as a separate step](#load-vector-data-for-indexing), or use [integrated vectorization (preview)](vector-search-integrated-vectorization.md) for data chunking and encoding during indexing.
-This article applies to the generally available, non-preview version of [vector search](vector-search-overview.md), which assumes your application code calls external resources for chunking and encoding.
+This article applies to the generally available non-preview version of [vector search](vector-search-overview.md), which assumes your application code calls external resources for chunking and encoding.
> [!NOTE] > Looking for migration guidance from 2023-07-01-preview? See [Upgrade REST APIs](search-api-migration.md).
This article applies to the generally available, non-preview version of [vector
## Prepare documents for indexing
-Prior to indexing, assemble a document payload that includes fields of vector and non-vector data. The document structure must conform to the index schema.
+Prior to indexing, assemble a document payload that includes fields of vector and nonvector data. The document structure must conform to the index schema.
Make sure your documents:
Make sure your documents:
1. Provide other fields with human-readable alphanumeric content for the query response, and for hybrid query scenarios that include full text search or semantic ranking in the same request.
-Your search index should include fields and content for all of the query scenarios you want to support. Suppose you want to search or filter over product names, versions, metadata, or addresses. In this case, similarity search isn't especially helpful. Keyword search, geo-search, or filters would be a better choice. A search index that includes a comprehensive field collection of vector and non-vector data provides maximum flexibility for query construction and response composition.
+Your search index should include fields and content for all of the query scenarios you want to support. Suppose you want to search or filter over product names, versions, metadata, or addresses. In this case, similarity search isn't especially helpful. Keyword search, geo-search, or filters would be a better choice. A search index that includes a comprehensive field collection of vector and nonvector data provides maximum flexibility for query construction and response composition.
-A short example of a documents payload that includes vector and non-vector fields is in the [load vector data](#load-vector-data-for-indexing) section of this article.
+A short example of a documents payload that includes vector and nonvector fields is in the [load vector data](#load-vector-data-for-indexing) section of this article.
## Add a vector search configuration
Use this version if you want generally available features only.
+ `retrievable` can be true or false. True returns the raw vectors (1536 of them) as plain text and consumes storage space. Set to true if you're passing a vector result to a downstream app. + `filterable`, `facetable`, `sortable` must be false.
-1. Add filterable non-vector fields to the collection, such as "title" with `filterable` set to true, if you want to invoke [prefiltering or postfiltering](vector-search-filters.md) on the [vector query](vector-search-how-to-query.md).
+1. Add filterable nonvector fields to the collection, such as "title" with `filterable` set to true, if you want to invoke [prefiltering or postfiltering](vector-search-filters.md) on the [vector query](vector-search-how-to-query.md).
1. Add other fields that define the substance and structure of the textual content you're indexing. At a minimum, you need a document key.
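To make the field guidance concrete, here's a minimal, hedged sketch of an index definition that combines a document key, filterable nonvector fields, and a vector field. It isn't the article's own example: the index name, field names, dimensions, and vector search profile are illustrative assumptions.

```http
# Minimal sketch only; names, dimensions, and the vector profile are assumptions.
PUT https://[YOUR-SERVICE-NAME].search.windows.net/indexes/my-vector-index?api-version=2023-11-01
Content-Type: application/json
api-key: [YOUR-ADMIN-KEY]

{
  "name": "my-vector-index",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true, "filterable": true },
    { "name": "title", "type": "Edm.String", "searchable": true, "filterable": true },
    { "name": "content", "type": "Edm.String", "searchable": true },
    { "name": "content_vector", "type": "Collection(Edm.Single)", "searchable": true, "retrievable": true,
      "filterable": false, "facetable": false, "sortable": false,
      "dimensions": 1536, "vectorSearchProfile": "my-default-profile" }
  ],
  "vectorSearch": {
    "algorithms": [ { "name": "my-hnsw", "kind": "hnsw" } ],
    "profiles": [ { "name": "my-default-profile", "algorithm": "my-hnsw" } ]
  }
}
```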
In the following REST API example, "title" and "content" contain textual content
+ `retrievable` can be true or false. True returns the raw vectors (1536 of them) as plain text and consumes storage space. Set to true if you're passing a vector result to a downstream app. + `filterable`, `facetable`, `sortable` must be false.
-1. Add filterable non-vector fields to the collection, such as "title" with `filterable` set to true, if you want to invoke [prefiltering or postfiltering](vector-search-filters.md) on the [vector query](vector-search-how-to-query.md
+1. Add filterable nonvector fields to the collection, such as "title" with `filterable` set to true, if you want to invoke [prefiltering or postfiltering](vector-search-filters.md) on the [vector query](vector-search-how-to-query.md
1. Add other fields that define the substance and structure of the textual content you're indexing. At a minimum, you need a document key.
Although you can add a field to an index, there's no portal (Import data wizard)
## Load vector data for indexing
-Content that you provide for indexing must conform to the index schema and include a unique string value for the document key. Pre-vectorized data is loaded into one or more vector fields, which can coexist with other fields containing alphanumeric content.
+Content that you provide for indexing must conform to the index schema and include a unique string value for the document key. Prevectorized data is loaded into one or more vector fields, which can coexist with other fields containing alphanumeric content.
You can use either [push or pull methodologies](search-what-is-data-import.md) for data ingestion.
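As a hedged illustration of the push approach, the following sketch uploads a single document through the Documents - Index API. The index name and field names are assumptions, and the three-value vector stands in for a full embedding (a text-embedding-ada-002 vector has 1,536 values).

```http
# Hedged sketch of a push-style upload; names and the truncated vector are illustrative.
POST https://[YOUR-SERVICE-NAME].search.windows.net/indexes/my-vector-index/docs/index?api-version=2023-11-01
Content-Type: application/json
api-key: [YOUR-ADMIN-KEY]

{
  "value": [
    {
      "@search.action": "upload",
      "id": "1",
      "title": "Example document",
      "content": "Human-readable content that can be returned verbatim in results.",
      "content_vector": [ 0.011, -0.034, 0.127 ]
    }
  ]
}
```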
You can use [Search Explorer](search-explorer.md) to query an index. Search expl
### [**REST API**](#tab/rest-check-index)
-The following REST API example is a vector query, but it returns only non-vector fields (title, content, category). Only fields marked as "retrievable" can be returned in search results.
+The following REST API example is a vector query, but it returns only nonvector fields (title, content, category). Only fields marked as "retrievable" can be returned in search results.
```http POST https://my-search-service.search.windows.net/indexes/my-index/docs/search?api-version=2023-11-01
search Vector Search Index Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-index-size.md
The following table shows vector quotas by partition, and by service if all part
## How to determine service creation date
-Find out whether your search service was created before July 1, 2023. If it's an older service, consider creating a new search service to benefit from the higher limits. Newer services at the same tier offer at least twice as much vector storage.
+Services created after July 1, 2023 offer at least twice as much vector storage as older ones at the same tier.
1. In Azure portal, open the resource group.
Usage information can be found on the **Overview** page's **Usage** tab. Portal
The following screenshot is for a newer Standard 1 (S1) tier, configured for one partition and one replica. Vector index quota, measured in megabytes, refers to the internal vector indexes created for each vector field. Overall, indexes consume almost 460 megabytes of available storage, but the vector index component takes up just 93 megabytes of the 460 used on this search service. The tile on the Usage tab tracks vector index consumption at the search service level. If you increase or decrease search service capacity, the tile reflects the changes accordingly.

### [**REST**](#tab/rest-vector-quota)
-Use the following data plane REST APIs (version 2023-11-01 or later) for vector usage statistics:
+Use the following data plane REST APIs (version 2023-10-01-preview, 2023-11-01, and later) for vector usage statistics:
+ [GET Index Statistics](/rest/api/searchservice/indexes/get-statistics) returns usage for a given index.
+ [GET Service Statistics](/rest/api/searchservice/get-service-statistics/get-service-statistics) returns quota and usage for the search service all-up.
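
As a hedged sketch, the two calls look like the following; the index name is a placeholder and the api-version is an assumption based on the versions cited above.

```http
# Usage for a single index (document count and storage).
GET https://[YOUR-SERVICE-NAME].search.windows.net/indexes/my-vector-index/stats?api-version=2023-11-01
api-key: [YOUR-ADMIN-KEY]

# Service-wide usage and quotas.
GET https://[YOUR-SERVICE-NAME].search.windows.net/servicestats?api-version=2023-11-01
api-key: [YOUR-ADMIN-KEY]
```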
search Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md
Vector search is available in:
Scenarios for vector search include:
-+ **Vector database**. Azure AI Search stores the data that you query over. Use it as a pure vector store any time you need long-term memory or a knowledge base, or grounding data for [Retrieval Augmented Generation (RAG) architecture](https://aka.ms/what-is-rag), or any app that uses vectors.
++ **Vector database**. Azure AI Search stores the data that you query over. Use it as a [pure vector store](vector-store.md) any time you need long-term memory or a knowledge base, or grounding data for [Retrieval Augmented Generation (RAG) architecture](https://aka.ms/what-is-rag), or any app that uses vectors.
+ **Similarity search**. Encode text using embedding models such as OpenAI embeddings or open source models such as SBERT, and retrieve documents with queries that are also encoded as vectors.
search Vector Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-store.md
Considerations for vector storage include the following points:
In Azure AI Search, there are two patterns for working with search results.
-+ Generative search. Language models formulate a response to the user's query using data from Azure AI Search. This pattern usually includes an orchestration layer to coordinate prompts and maintain context. In this pattern, results are fed into prompt flows and chat models like GPT and Text-Davinci. This approach is based on [**Retrieval augmented generation (RAG)**](retrieval-augmented-generation-overview.md) architecture, where the search index provides the grounding data.
++ Generative search. Language models formulate a response to the user's query using data from Azure AI Search. This pattern usually includes an orchestration layer to coordinate prompts and maintain context. In this pattern, results are fed into prompt flows, received by chat models like GPT and Text-Davinci. This approach is based on [**Retrieval augmented generation (RAG)**](retrieval-augmented-generation-overview.md) architecture, where the search index provides the grounding data.
-+ Classic search. The search engine formulates a response based on content in your index, and you render those results in a client app. In a direct response from the search engine, results are returned in a flattened row set, and you can choose which fields are passed to the client app. It's expected that you would populate the vector store (search index) with nonvector content that's human readable in your response. The search engine matches on vectors, but can return nonvector values from the same search document. [**Vector queries**](vector-search-how-to-query.md) and [**hybrid queries**](hybrid-search-how-to-query.md) cover the types of requests.
++ Classic search. Search engine formulates a response based on content in your index, and you render those results in a client app. In a direct response from the search engine, results are returned in a flattened row set, and you can choose which fields are passed to the client app. It's expected that you would populate the vector store (search index) with nonvector content that's human readable in your response. The search engine matches on vectors, but can return nonvector values from the same search document. [**Vector queries**](vector-search-how-to-query.md) and [**hybrid queries**](hybrid-search-how-to-query.md) cover the types of requests.
-Your index schema should reflect your primary use case.
+Your index schema should reflect your primary use case. The following section highlights the differences in field composition for solutions built for generative AI or classic search.
## Schema of a vector store
-The following examples highlight the differences in field composition for solutions build for generative AI or classic search.
-
An index schema for a vector store requires a name, a key field (string), one or more vector fields, and a vector configuration. Nonvector fields are recommended for hybrid queries, or for returning verbatim human readable content that doesn't have to go through a language model. For instructions about vector configuration, see [Create a vector store](vector-search-how-to-create-index.md).

### Basic vector field configuration
-A vector field, such as `"content_vector"` in the following example, is of type `Collection(Edm.Single)`. It must be searchable and retrievable. It can't be filterable, facetable, or sortable, and it can't have analyzers, normalizers, or synonym map assignments. It must have dimensions set to the number of embeddings generated by the embedding model. For instance, if you're using text-embedding-ada-002, it generates 1,536 embeddings. A vector search profile is specified in a separate vector search configuration and assigned to a vector field using a profile name.
+A vector field, such as `"content_vector"` in the following example, is of type `Collection(Edm.Single)`. It must be searchable and retrievable. It can't be filterable, facetable, or sortable, and it can't have analyzers, normalizers, or synonym map assignments. It must have dimensions set to the number of embeddings generated by the embedding model. For instance, if you're using text-embedding-ada-002, it generates 1,536 embeddings. A vector search profile is specified in a separate [vector search configuration](vector-search-how-to-create-index.md) and assigned to a vector field using a profile name.
```json {
The bias of this schema is that search documents are built around data chunks. I
Data chunking is necessary for staying within the input limits of language models, but it also improves precision in similarity search when queries can be matched against smaller chunks of content pulled from multiple parent documents. Finally, if you're using semantic ranking, the semantic ranker also has token limits, which are more easily met if data chunking is part of your approach.
-In the following example, for each search document, there's one chunk ID, chunk, title, and vector field. The chunkID and parent ID are populated by the wizard, using base 64 encoding of blob metadata (path). Chunk and title are derived from blob content and blob name. Only the vector field is fully generated. It calls an Azure OpenAI resource that you provide.
+In the following example, for each search document, there's one chunk ID, chunk, title, and vector field. The chunkID and parent ID are populated by the wizard, using base 64 encoding of blob metadata (path). Chunk and title are derived from blob content and blob name. Only the vector field is fully generated. It's the vectorized version of the chunk field. Embeddings are generated by calling an Azure OpenAI embedding model that you provide.
```json "name": "example-index-from-import-wizard",
Fields from the chat index that support generative search experience:
] ```
-Here's a screenshot showing [Search explorer](search-explorer.md) search results for the conversations index. The search score is 1.00 because the search was unqualified. Notice the fields that exist to support orchestration and prompt flows. A conversation ID identifies a specific chat. `"type"` indicates whether the content is from the user or the assistant. Dates are used to age out chats from the history.
+Fields from the conversations index that supports orchestration and chat history:
+
+```json
+"fields": [
+ { "name": "id", "type": "Edm.String", "key": true, "searchable": false, "filterable": true, "retrievable": true, "sortable": false, "facetable": false },
+ { "name": "conversation_id", "type": "Edm.String", "searchable": false, "filterable": true, "retrievable": true, "sortable": false, "facetable": true },
+ { "name": "content", "type": "Edm.String", "searchable": true, "filterable": false, "retrievable": true },
+ { "name": "content_vector", "type": "Collection(Edm.Single)", "searchable": true, "retrievable": true, "dimensions": 1536, "vectorSearchProfile": "default-profile" },
+ { "name": "metadata", "type": "Edm.String", "searchable": true, "filterable": false, "retrievable": true },
+ { "name": "type", "type": "Edm.String", "searchable": false, "filterable": true, "retrievable": true, "sortable": false, "facetable": true },
+ { "name": "user_id", "type": "Edm.String", "searchable": false, "filterable": true, "retrievable": true, "sortable": false, "facetable": true },
+ { "name": "sources", "type": "Collection(Edm.String)", "searchable": false, "filterable": true, "retrievable": true, "sortable": false, "facetable": true },
+ { "name": "created_at", "type": "Edm.DateTimeOffset", "searchable": false, "filterable": true, "retrievable": true },
+ { "name": "updated_at", "type": "Edm.DateTimeOffset", "searchable": false, "filterable": true, "retrievable": true }
+]
+```
+
+Here's a screenshot showing search results in [Search Explorer](search-explorer.md) for the conversations index. The search score is 1.00 because the search was unqualified. Notice the fields that exist to support orchestration and prompt flows. A conversation ID identifies a specific chat. `"type"` indicates whether the content is from the user or the assistant. Dates are used to age out chats from the history.
:::image type="content" source="media/vector-search-overview/vector-schema-search-results.png" alt-text="Screenshot of Search Explorer with results from an index designed for RAG apps.":::

## Physical structure and size
-In Azure AI Search, the physical structure of an index is largely an internal implementation. You can access its schema, load and query its content, monitor its size, and manage capacity, but the clusters themselves (indexes, [shards](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards), and other files and folders) are managed internally by Microsoft.
+In Azure AI Search, the physical structure of an index is largely an internal implementation. You can access its schema, load and query its content, monitor its size, and manage capacity, but the clusters themselves (inverted and vector indexes, [shards](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards), and other files and folders) are managed internally by Microsoft.
The size and substance of an index is determined by: + Quantity and composition of your documents
-+ Attributes on individual fields
++ Attributes on individual fields. For example, more storage is required for filterable fields.
+ Index configuration, including vector configuration that specifies how the internal navigation structures are created based on whether you choose HNSW or exhaustive KNN for similarity search.
-Vector store index limits and estimations are covered in [another article](vector-search-index-size.md), but it's highlighted here to emphasize that maximum storage varies by service tier, and also by when the search service was created. Newer same-tier services have significantly more capacity for vector indexes.
+Azure AI Search imposes limits on vector storage, which helps maintain a balanced and stable system for all workloads. To help you stay under the limits, vector usage is tracked and reported separately in the Azure portal, and programmatically through service and index statistics.
+
+The following screenshot shows an S1 service configured with one partition and one replica. This particular service has 24 small indexes, with one vector field on average, each field consisting of 1536 embeddings. The second tile shows the quota and usage for vector indexes. A vector index is an internal data structure created for each vector field. As such, storage for vector indexes is always a fraction of the storage used by the index overall. Other nonvector fields and data structures consume the rest.
++
+Vector index limits and estimations are covered in [another article](vector-search-index-size.md), but two points to emphasize up front is that maximum storage varies by service tier, and also by when the search service was created. Newer same-tier services have significantly more capacity for vector indexes. For these reasons, take the following actions:
+ [Check the deployment date of your search service](vector-search-index-size.md#how-to-determine-service-creation-date). If it was created before July 1, 2023, consider creating a new search service for greater capacity. + [Choose a scalable tier](search-sku-tier.md) if you anticipate fluctuations in vector storage requirements. The Basic tier is fixed at one partition. Consider Standard 1 (S1) and above for more flexibility and faster performance.
-In terms of usage metrics, a vector index is an internal data structure created for each vector field. As such, a vector storage is always a fraction of the overall index size. Other nonvector fields and data structures consume the remainder of the quota for index size and consumed storage at the service level.
-
## Basic operations and interaction

This section introduces vector run time operations, including connecting to and securing a single index.
Notice that query continuity exists for document operations (refreshing or delet
To avoid an [index rebuild](search-howto-reindex.md), some customers who are making small changes choose to "version" a field by creating a new one that coexists alongside a previous version. Over time, this leads to orphaned content in the form of obsolete fields or obsolete custom analyzer definitions, especially in a production index that is expensive to replicate. You can address these issues on planned updates to the index as part of index lifecycle management.
+### Endpoint connection
+
+All vector indexing and query requests target an index. Endpoints are usually one of the following:
+
+| Endpoint | Connection and access control |
+|-|-|
+| `<your-service>.search.windows.net/indexes` | Targets the indexes collection. Used when creating, listing, or deleting an index. Admin rights are required for these operations, available through admin [API keys](search-security-api-keys.md) or a [Search Contributor role](search-security-rbac.md#built-in-roles-used-in-search). |
+| `<your-service>.search.windows.net/indexes/<your-index>/docs` | Targets the documents collection of a single index. Used when querying an index or data refresh. For queries, read rights are sufficient, and available through query API keys or a data reader role. For data refresh, admin rights are required. |
+
+#### How to connect to Azure AI Search
+
+1. [Start with the Azure portal](https://portal.azure.com). Azure subscribers, or the person who created the search service, can manage the search service in the Azure portal. An Azure subscription requires Contributor or above permissions to create or delete services. This permission level is sufficient for fully managing a search service in the Azure portal.
+
+1. Try other clients for programmatic access. We recommend the quickstarts and samples for first steps:
+
+ + [Quickstart: REST](search-get-started-vector.md)
+ + [Vector samples](https://github.com/Azure/azure-search-vector-samples/blob/main/README.md)
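+
Before moving on, here's a hedged sketch of a first query against the docs collection; it's a read operation, so a query key is sufficient. The index name, field names, api-version, and the shortened query vector are assumptions.

```http
# Hedged sketch of a vector query; the three-value vector stands in for a full embedding.
POST https://[YOUR-SERVICE-NAME].search.windows.net/indexes/my-vector-index/docs/search?api-version=2023-11-01
Content-Type: application/json
api-key: [YOUR-QUERY-KEY]

{
  "select": "title, content",
  "vectorQueries": [
    {
      "kind": "vector",
      "vector": [ 0.011, -0.034, 0.127 ],
      "fields": "content_vector",
      "k": 5
    }
  ]
}
```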
+
### Secure access to vector data
-<!-- Azure AI Search supports comprehensive security. Authentication and authorization -->
+Azure AI Search implements data encryption, private connections for no-internet scenarios, and role assignments for secure access through Microsoft Entra ID. The full range of enterprise security features are outlined in [Security in Azure AI Search](search-security-overview.md).
### Manage vector stores
-Azure provides a monitoring platform that includes diagnostic logging and alerting.
+Azure provides a [monitoring platform](monitor-azure-cognitive-search.md) that includes diagnostic logging and alerting. We recommend the following best practices:
-+ Enable logging
-+ Set up alerts
-+ Back up and restore isn't natively supported but there are samples.
-+ Scale
++ [Enable diagnostic logging](/azure/azure-monitor/essentials/create-diagnostic-settings)
++ [Set up alerts](/azure/azure-monitor/alerts/tutorial-metric-alert)
++ [Analyze query and index performance](search-performance-analysis.md)

## See also
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md
The following services are currently supported for Customer Lockbox:
- Azure Database for MySQL
- Azure Database for MySQL Flexible Server
- Azure Database for PostgreSQL
-- Azure Databricks
- Azure Edge Zone Platform Storage
- Azure Energy
- Azure Functions
security Steps Secure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/steps-secure-identity.md
Title: Secure your Microsoft Entra identity infrastructure description: This document outlines a list of important actions administrators should implement to help them secure their organization using Microsoft Entra capabilities- Last updated 08/17/2022- -
-tags: azuread
# Five steps to securing your identity infrastructure
security Tls Certificate Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/tls-certificate-changes.md
Title: Azure TLS Certificate Changes
description: Azure TLS Certificate Changes
-tags: azure-resource-manager
sentinel Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customer-managed-keys.md
Title: Set up customer-managed keys in Microsoft Sentinel| Microsoft Docs
-description: Learn how to set up customer-managed keys (CMK) in Microsoft Sentinel.
+description: Learn how to set up customer-managed key (CMK) in Microsoft Sentinel.
Last updated 06/08/2023
This article provides background information and steps to configure a [customer-
## Prerequisites 1. Configure a Log Analytics dedicated cluster with at least a 100 GB/day commitment tier. When multiple workspaces are linked to the same dedicated cluster, they share the same customer-managed key. Learn about [Log Analytics Dedicated Cluster Pricing](../azure-monitor/logs/logs-dedicated-clusters.md#cluster-pricing-model).
-1. Configure the CMK within Azure Monitor. Don't onboard the workspace to Sentinel yet. Learn about the [CMK provisioning steps](../azure-monitor/logs/customer-managed-keys.md?tabs=portal#customer-managed-key-provisioning-steps).
-1. Contact the [Microsoft Sentinel Product Group](mailto:onboardrecoeng@microsoft.com) - you must receive onboarding confirmation as part of completing the steps in this guide before you use the workspace.
-
+1. Configure CMK on the dedicated cluster and link your workspace to that cluster. Learn about the [CMK provisioning steps in Azure Monitor](../azure-monitor/logs/customer-managed-keys.md?tabs=portal#customer-managed-key-provisioning-steps).
+
## Considerations - Onboarding a CMK workspace to Sentinel is supported only via REST API, and not via the Azure portal. Azure Resource Manager templates (ARM templates) currently aren't supported for CMK onboarding.
For more information, see:
## Enable CMK
-To provision CMK, follow these steps: 
-1. Create an Azure Key Vault and generate or import a key.
-1. Enable CMK on your Log Analytics workspace.
+To provision CMK, follow these steps:
+1. Make sure you have a Log Analytics workspace, and that it's linked to a dedicated cluster on which CMK is enabled. (See [Prerequisites](#prerequisites).)
1. Register to the Azure Cosmos DB Resource Provider.
1. Add an access policy to your Azure Key Vault instance.
1. Onboard the workspace to Microsoft Sentinel via the [Onboarding API](/rest/api/securityinsights/preview/sentinel-onboarding-states/create).
1. Contact the Microsoft Sentinel Product group to confirm onboarding.
-### Step 1: Create an Azure Key Vault and generate or import a key
-
-1. [Create Azure Key Vault resource](/azure-stack/user/azure-stack-key-vault-manage-portal), then generate or import a key to be used for data encryption.
-
- > [!NOTE]
- > Azure Key Vault must be configured as recoverable to protect your key and the access.
-
-1. [Turn on recovery options:](../key-vault/general/key-vault-recovery.md)
-
- - Make sure [Soft Delete](../key-vault/general/soft-delete-overview.md) is turned on.
-
- - Turn on [Purge protection](../key-vault/general/soft-delete-overview.md#purge-protection) to guard against forced deletion of the secret/vault even after soft delete.
-
-### Step 2: Enable CMK on your Log Analytics workspace
+### Step 1: Configure CMK on a Log Analytics workspace on a dedicated cluster
+As mentioned in the [prerequisites](#prerequisites), to onboard a Log Analytics workspace with CMK to Microsoft Sentinel, this workspace must first be linked to a dedicated Log Analytics cluster on which CMK is enabled.
+Microsoft Sentinel will use the same key used by the dedicated cluster.
Follow the instructions in [Azure Monitor customer-managed key configuration](../azure-monitor/logs/customer-managed-keys.md) in order to create a CMK workspace that is used as the Microsoft Sentinel workspace in the following steps.
-### Step 3: Register the Azure Cosmos DB Resource Provider
+### Step 2: Register the Azure Cosmos DB Resource Provider
-Microsoft Sentinel works with Azure Cosmos DB as an additional storage resource. Make sure to register to the Azure Cosmos DB Resource Provider.
+Microsoft Sentinel works with Azure Cosmos DB as an additional storage resource. Make sure to register to the Azure Cosmos DB Resource Provider before onboarding a CMK workspace to Microsoft Sentinel.
Follow the instructions to [Register the Azure Cosmos DB Resource Provider](../cosmos-db/how-to-setup-cmk.md#register-resource-provider) for your Azure subscription.
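
If you prefer to script this step, one hedged option is the Azure Resource Manager REST call below; the subscription ID and bearer token are placeholders, and the portal or CLI steps in the linked article work equally well.

```http
# Hedged sketch: register the Microsoft.DocumentDB resource provider on the subscription.
POST https://management.azure.com/subscriptions/{subscription-id}/providers/Microsoft.DocumentDB/register?api-version=2021-04-01
Authorization: Bearer {token}
Content-Length: 0
```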
-### Step 4: Add an access policy to your Azure Key Vault instance
+### Step 3: Add an access policy to your Azure Key Vault instance
-Add an access policy that allows your Azure Cosmos DB to access the Azure Key Vault instance created in [**STEP 1**](#step-1-create-an-azure-key-vault-and-generate-or-import-a-key).
+Add an access policy that allows Azure Cosmos DB to access the Azure Key Vault instance that is linked to your dedicated Log Analytics cluster (the same key will be used by Microsoft Sentinel).
Follow the instructions here to [add an access policy to your Azure Key Vault instance](../cosmos-db/how-to-setup-cmk.md#add-access-policy) with an Azure Cosmos DB principal. :::image type="content" source="../cosmos-db/media/how-to-setup-customer-managed-keys/add-access-policy-principal.png" lightbox="../cosmos-db/media/how-to-setup-customer-managed-keys/add-access-policy-principal.png" alt-text="Screenshot of the Select principal option on the Add access policy page.":::
-### Step 5: Onboard the workspace to Microsoft Sentinel via the onboarding API
+### Step 4: Onboard the workspace to Microsoft Sentinel via the onboarding API
Onboard the CMK enabled workspace to Microsoft Sentinel via the [onboarding API](/rest/api/securityinsights/preview/sentinel-onboarding-states/create) using the `customerManagedKey` property as `true`. For more context on the onboarding API, see [this document](https://github.com/Azure/Azure-Sentinel/raw/master/docs/Azure%20Sentinel%20management.docx) in the Microsoft Sentinel GitHub repo.
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
} ```
-### Step 6: Contact the Microsoft Sentinel Product group to confirm onboarding
+### Step 5: Contact the Microsoft Sentinel Product group to confirm onboarding
-Lastly, you must confirm the onboarding status of your CMK enabled workspace by contacting the [Microsoft Sentinel Product Group](mailto:onboardrecoeng@microsoft.com).
+Lastly, confirm the onboarding status of your CMK-enabled workspace by contacting the [Microsoft Sentinel Product Group](mailto:onboardrecoeng@microsoft.com).
## Key Encryption Key revocation or deletion
sentinel Sentinel Solutions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-deploy.md
Title: Discover and deploy Microsoft Sentinel out-of-the-box content from Conten
description: Learn how to find and deploy Sentinel packaged solutions containing data connectors, analytics rules, hunting queries, workbooks, and other content. Previously updated : 09/29/2023 Last updated : 02/15/2024 # Discover and manage Microsoft Sentinel out-of-the-box content
-The Microsoft Sentinel Content hub is your centralized location to discover and manage out-of-the-box (built-in) content. There you'll find packaged solutions for end-to-end products by domain or industry. You'll also have access to the vast number of standalone contributions hosted in our GitHub repository and feature blades.
+The Microsoft Sentinel Content hub is your centralized location to discover and manage out-of-the-box (built-in) content. There you find packaged solutions for end-to-end products by domain or industry. You have access to the vast number of standalone contributions hosted in our GitHub repository and feature blades.
-- Discover solutions and standalone content with a consistent set of filtering capabilities based on status, content type, support, provider and category.
+- Discover solutions and standalone content with a consistent set of filtering capabilities based on status, content type, support, provider, and category.
- Install content in your workspace all at once or individually.
If you're a partner who wants to create your own solution, see the [Microsoft Se
## Prerequisites
-In order to install, update and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level. In addition, the **Template Spec Contributor** role is still required for some edge cases. See [Azure RBAC built in roles](../role-based-access-control/built-in-roles.md#template-spec-contributor) for details on this role.
+In order to install, update, and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.
-This is in addition to Sentinel specific roles. For more information about other roles and permissions supported for Microsoft Sentinel, see [Permissions in Microsoft Sentinel](roles.md).
+For more information about other roles and permissions supported for Microsoft Sentinel, see [Permissions in Microsoft Sentinel](roles.md).
## Discover content
-The content hub offers the best way to find new content or manage the solutions you already have installed.
+The content hub offers the best way to find new content or manage the solutions you already installed.
-1. From the Microsoft Sentinel navigation menu, under **Content management**, select **Content hub**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.microsoft.com), under **Content management**, select **Content hub**.
-1. The **Content hub** page displays a searchable grid or list of solutions and standalone content.
+ The **Content hub** page displays a searchable grid or list of solutions and standalone content.
- Filter the list displayed, either by selecting specific values from the filters, or entering any part of a content name or description in the **Search** field.
+1. Filter the list displayed, either by selecting specific values from the filters, or entering any part of a content name or description in the **Search** field.
For more information, see [Categories for Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions.md#categories-for-microsoft-sentinel-out-of-the-box-content-and-solutions).
- > [!TIP]
- > If a solution that you've deployed has updates since you deployed it, the list view will have a blue up arrow in the status column, and will be included in the **Updates** blue up arrow count at the top of the page.
- >
+1. Select the **Card view** to view more information about a solution.
-Each content item shows categories that apply to it, and solutions show the types of content included.
+ Each content item shows categories that apply to it, and solutions show the types of content included. For example, in the following image, the **Cisco Umbrella** solution lists one of its categories as **Security - Cloud Security**, and indicates it includes a data connector, analytics rules, hunting queries, playbooks, and more.
-For example, in the following image, the **Cisco Umbrella** solution lists one of its categories as **Security - Cloud Security**, and indicates it includes a data connector, analytics rules, hunting queries, playbooks, and more.
-
+ :::image type="content" source="./media/sentinel-solutions-deploy/solutions-list.png" alt-text="Screenshot of the Microsoft Sentinel content hub.":::

## Install or update content
## Install or update content
-Standalone content and solutions can be installed individually or all together in bulk. For more information on bulk operations, see [Bulk install and update content](#bulk-install-and-update-content) in the next section. Here's an example showing the install of an individual solution.
+Install standalone content and solutions individually or all together in bulk. For more information on bulk operations, see [Bulk install and update content](#bulk-install-and-update-content) in the next section.
+
+If a solution that you deployed has updates since you last deployed it, the list view shows **Update** in the status column. The solution is also included in the **Updates** count at the top of the page.
+
+Here's an example showing the install of an individual solution.
-1. In the content hub, to view more information about a solution switch to **Card view**.
+1. In the **Content hub**, search for and select the solution.
-1. Then select **View details** to initiate steps for installation.
+1. On the solutions details pane, from the bottom right-hand side, select **View details**.
-1. On the solution details page, select **Create** or **Update** to start the solution wizard. On the **Basics** tab, enter the subscription, resource group, and workspace to deploy the solution. For example:
+1. Select **Create** or **Update**.
+1. On the **Basics** tab, enter the subscription, resource group, and workspace to deploy the solution. For example:
:::image type="content" source="media/sentinel-solutions-deploy/wizard-basics.png" alt-text="Screenshot of a solution installation wizard, showing the Basics tab.":::
-1. Select **Next** to cycle through the remaining tabs (corresponding to the components included in the solution), where you can learn about, and in some cases configure, each of the content components.
+1. Select **Next** to go through the remaining tabs to learn about, and in some cases configure, each of the content components.
- > [!NOTE]
- > The tabs displayed for you correspond with the content offered by the solution. Different solutions may have different types of content, so you may not see all the same tabs in every solution.
- >
- > You may also be prompted to enter credentials to a third party service so that Microsoft Sentinel can authenticate to your systems. For example, with playbooks, you may want to take response actions as prescribed in your system.
- >
+ The tabs correspond with the content offered by the solution. Different solutions might have different types of content, so you might not see the same tabs in every solution.
-1. Finally, in the **Review + create** tab, wait for the `Validation Passed` message, then select **Create** or **Update** to deploy the solution. You can also select the **Download a template for automation** link to deploy the solution as code.
+ You might also be prompted to enter credentials to a third party service so that Microsoft Sentinel can authenticate to your systems. For example, with playbooks, you might want to take response actions as prescribed in your system.
-1. Each content type within the solution may require additional steps to configure. For more information, see [Enable content items in a solution](#enable-content-items-in-a-solution).
+1. In the **Review + create** tab, wait for the `Validation Passed` message.
+1. Select **Create** or **Update** to deploy the solution. You can also select the **Download a template for automation** link to deploy the solution as code.
-## Bulk install and update content
+Each content type within the solution might require more steps to configure. For more information, see [Enable content items in a solution](#enable-content-items-in-a-solution).
-Content hub supports a list view in addition to the default card view. Multiple solutions and standalone content can be selected with this view to install and update them all at once. Standalone content is kept up-to-date automatically. Any active or
-custom content created based on solutions or standalone content installed from content hub remains untouched.
+## Bulk install and update content
-1. To install and/or update items in bulk, change to the list view.
+Content hub supports a list view in addition to the default card view. Select the list view to install multiple solutions and standalone content all at once. Standalone content is kept up-to-date automatically. Any active or custom content created based on solutions or standalone content installed from content hub remains untouched.
-1. The list view is paginated, so choose a filter to ensure the content you want to bulk install are in view. Select their checkboxes and click the **Install/Update** button.
+1. To install or update items in bulk, change to the list view.
+1. Search for or filter to find the content that you want to install or update in bulk.
+1. Select the checkbox for each solution or standalone content that you want to install or update.
+1. Select the **Install/Update** button.
:::image type="content" source="media/sentinel-solutions-deploy/bulk-install-update.png" alt-text="Screenshot of solutions list view with multiple solutions selected and in progress for installation." lightbox="media/sentinel-solutions-deploy/bulk-install-update.png":::
-1. The content hub interface will indicate *in progress* for installs and updates. Azure notifications will also indicate the action taken. If a solution or standalone content that was already installed or updated was selected, no action will be taken on that item and it won't interfere with the update and install of the other items.
+ If a solution or standalone content you selected was already installed or updated, no action is taken on that item. It doesn't interfere with the update and install of the other items.
-1. Check each installed solution's **Manage** view. Content types within the solution may require additional steps to configure. For more information, see [Enable content items in a solution](#enable-content-items-in-a-solution).
+1. Select **Manage** for each solution you installed. Content types within the solution might require more information for you to configure. For more information, see [Enable content items in a solution](#enable-content-items-in-a-solution).
## Enable content items in a solution
Centrally manage content items for installed solutions from the content hub.
1. Select a content item to get started.
-### Management options for each content type
-Below are some tips on how to interact with various content types when managing a solution.
+### Manage each content type
+
+The following sections provide some tips on how to work with the different content types as you manage a solution.
#### Data connector
+
+To connect a data connector, complete the configuration steps.
1. Select **Open connector page**.
1. Complete the data connector configuration steps.

   :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-data-connector-open-connector.png" alt-text="Screenshot of data connector content item for Azure Activity solution where status is disconnected.":::
-1. After you configure the data connector and logs are detected, the status will change to **Connected**.
+ After you configure the data connector and logs are detected, the status changes to **Connected**.
#### Analytics rule
-1. View the template in the analytics template gallery.
-1. If the template hasn't been used yet, select **Open** > **Create rule** and follow the steps to enable the analytics rule.
-1. Once created, the number of active rules created from the template is shown in the **Created content** column.
-1. Click the active rules link, in this example **2 items**, to edit the existing rule.
- :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-analytics-rule.png" alt-text="Screenshot of analytics rule content item in solution for Azure Activity." lightbox="media/sentinel-solutions-deploy/manage-solution-analytics-rule.png":::
+Create a rule from a template or edit an existing rule.
+
+1. View the template in the analytics template gallery.
+1. If the template isn't used yet, select **Open** > **Create rule** and follow the steps to enable the analytics rule.
+
+ After you create a rule, the number of active rules created from the template is shown in the **Created content** column.
+1. Select the active rules link to edit the existing rule. For example, the active rule link in the following image is under **Content created** and shows **2 items**.
+
+ :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-analytics-rule.png" alt-text="Screenshot of analytics rule content item in solution for Azure Activity." lightbox="media/sentinel-solutions-deploy/manage-solution-analytics-rule.png":::
#### Hunting query
-1. To start searching right away, select **Run query** from the details page for quick results.
+
+Run the provided hunting query or customize it.
+
+1. To start searching right away, select **Run query** from the details page for quick results.
:::image type="content" source="media/sentinel-solutions-deploy/manage-solution-hunting-query.png" alt-text="Screenshot of cloned hunting query content item in solution for Azure Activity." lightbox="media/sentinel-solutions-deploy/manage-solution-hunting-query.png":::
-1. To customize your hunting query, select the link, in this case **Common deployed resources**, in the **Content name** column.
-1. This brings you to the hunting gallery where you can create a clone of the read-only hunting query template by accessing the ellipses menu. Hunting queries created in this way will display as items in the content hub **Created content** column.
+1. To customize your hunting query, select the link in the **Content name** column.
+
+ From the hunting gallery, you can create a clone of the read-only hunting query template by going to the ellipses menu. Hunting queries created in this way display as items in the content hub **Created content** column.
#### Workbook
-1. Select **View template** to open the workbook and see the visualizations.
-1. To create an instance of the workbook template select **Save**.
+
+To customize a workbook created from a template, create an instance of a workbook.
+
+1. Select **View template** to open the workbook and see the visualizations.
+1. Select **Save** to create an instance of the workbook template.
1. View your saved customizable workbook by selecting **View saved workbook**.
1. From the content hub, select the **1 item** link in the **Created content** column to manage the workbook.

   :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-workbook.png" alt-text="Screenshot of saved workbook item in solution for Azure Activity." lightbox="media/sentinel-solutions-deploy/manage-solution-workbook.png" :::
-#### Parser
+#### Parser
+ When a solution is installed, any parsers included are added as workspace functions in Log Analytics.
-1. Select **Load the function code** to open Log Analytics and view or run the function code.
+
+1. Select **Load the function code** to open Log Analytics and view or run the function code.
1. Select **Use in editor** to open Log Analytics with the parser name ready to add to your custom query.

   :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-parser.png" alt-text="Screenshot of parser content type in a solution." lightbox="media/sentinel-solutions-deploy/manage-solution-parser.png":::

#### Playbook
-1. Select the **Content name** link of the playbook, in this example **BatchImportToSentinel**.
-1. This playbook template will populate the search field. From the results choose the template and select **Create playbook**.
-1. Once created, the active playbook is shown in the **Created content** column.
-1. Click the active playbook **1 item** link to manage the playbook.
- :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-playbook.png" alt-text="Screenshot of playbook type content type in a solution." lightbox="media/sentinel-solutions-deploy/manage-solution-playbook.png":::
+Create a playbook from a template.
+1. Select the **Content name** link of the playbook.
+1. Choose the template and select **Create playbook**.
+1. After the playbook is created, the active playbook is shown in the **Created content** column.
+1. Select the active playbook **1 item** link to manage the playbook.
+
+ :::image type="content" source="media/sentinel-solutions-deploy/manage-solution-playbook.png" alt-text="Screenshot of playbook type content type in a solution." lightbox="media/sentinel-solutions-deploy/manage-solution-playbook.png":::
## Find the support model for your content
Each solution and standalone content item explains its support model on its deta
:::image type="content" source="media/sentinel-solutions-deploy/find-support-details.png" alt-text="Screenshot of where you can find your support model for your solution." lightbox="media/sentinel-solutions-deploy/find-support-details.png":::
-When contacting support, you may need other details about your solution, such as a publisher, provider, and plan ID values. You can find each of these on the details page, on the **Usage information & support** tab. For example:
+When contacting support, you might need other details about your solution, such as a publisher, provider, and plan ID values. Find this information on the details page in the **Usage information & support** tab.
:::image type="content" source="media/sentinel-solutions-deploy/usage-support.png" alt-text="Screenshot of usage and support details for a solution.":::
In this document, you learned how to find and deploy built-in solutions and stan
- Learn more about [Microsoft Sentinel solutions](sentinel-solutions.md). - See the full Microsoft Sentinel solutions catalog in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?filters=solution-templates&page=1&search=sentinel). - Find domain specific solutions in the [Microsoft Sentinel content hub catalog](sentinel-solutions-catalog.md).-- [Delete installed Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-delete.md)
+- [Delete installed Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-delete.md).
-Many solutions include data connectors that you'll need to configure so that you can start ingesting your data into Microsoft Sentinel. Each data connector will have its own set of requirements, detailed on the data connector page in Microsoft Sentinel.
+Many solutions include data connectors that you need to configure so that you can start ingesting your data into Microsoft Sentinel. Each data connector has its own set of requirements that are detailed on the data connector page in Microsoft Sentinel.
-For more information, see [Connect your data source](data-connectors-reference.md).
+For more information, see [Connect your data source](data-connectors-reference.md).
service-bus-messaging Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/network-security.md
You can use service tags to define network access controls on [network security
> [!NOTE]
-> You can use service tags only for **premium** namespaces. If you are using a **standard** namespace, use the IP address that you see when you run the following command: `nslookup <host name for the namespace>`. For example: `nslookup contosons.servicebus.windows.net`.
+> You can use service tags only for **premium** namespaces. If you are using a **standard** namespace, use the FQDN of the namespace instead, in the form of <contoso.servicebus.windows.net>. Alternatively you can use the IP address that you see when you run the following command: `nslookup <host name for the namespace>`, however this is not recommended or supported, and you will need to keep track of changes to the IP addresses.
## IP firewall

By default, Service Bus namespaces are accessible from internet as long as the request comes with valid authentication and authorization. With IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation.
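For illustration only, here's a hedged sketch of setting an IP firewall rule on a premium namespace through the Azure Resource Manager REST API. The subscription, resource group, namespace name, address range, and api-version are assumptions; the portal and template options covered by this article work equally well.

```http
# Hedged sketch: allow one IPv4 range and deny everything else. All IDs are placeholders.
PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.ServiceBus/namespaces/{namespace-name}/networkRuleSets/default?api-version=2021-11-01
Authorization: Bearer {token}
Content-Type: application/json

{
  "properties": {
    "defaultAction": "Deny",
    "ipRules": [
      { "ipMask": "203.0.113.0/24", "action": "Allow" }
    ]
  }
}
```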
service-fabric Cli Create Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/cli-create-cluster.md
description: How to create a secure Service Fabric Linux cluster in Azure via th
-tags: azure-service-management
- Last updated 01/18/2018
service-fabric Cli Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/cli-deploy-application.md
description: Deploy an application to an Azure Service Fabric cluster using the
-tags: azure-service-management
- Last updated 04/16/2018
service-fabric Cli Remove Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/cli-remove-application.md
description: Remove an application from an Azure Service Fabric cluster using th
-tags: azure-service-management
- Last updated 12/06/2017
service-fabric Service Fabric Powershell Add Application Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-add-application-certificate.md
description: Azure PowerShell Script Sample - Add an application certificate to
-tags: azure-service-management
- Last updated 01/18/2018
service-fabric Service Fabric Powershell Add Nsg Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-add-nsg-rule.md
description: Azure PowerShell Script Sample - Adds a network security group to a
-tags: azure-service-management
- Last updated 11/28/2017
service-fabric Service Fabric Powershell Change Rdp Port Range https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-port-range.md
Title: Azure PowerShell Script Sample - Change the RDP port range | Microsoft Docs description: Azure PowerShell Script Sample - Changes the RDP port range of a deployed cluster.
-tags: azure-service-management
service-fabric Service Fabric Powershell Change Rdp User And Pw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-change-rdp-user-and-pw.md
description: Azure PowerShell Script Sample - Update the RDP username and passwo
-tags: azure-service-management
- Last updated 03/19/2018
service-fabric Service Fabric Powershell Create Secure Cluster Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-create-secure-cluster-cert.md
description: Azure PowerShell Script Sample - Create a Service Fabric cluster se
-tags: azure-service-management
- ms.assetid: 0f9c8bc5-3789-4eb3-8deb-ae6e2200795a
service-fabric Service Fabric Powershell Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-deploy-application.md
description: Azure PowerShell Script Sample - Deploy an application to a Service
-tags: azure-service-management
- Last updated 01/18/2018
service-fabric Service Fabric Powershell Open Port In Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-open-port-in-load-balancer.md
description: Azure PowerShell Script Sample - Open a port in the Azure load bala
-tags: azure-service-management
- Last updated 05/18/2018
service-fabric Service Fabric Powershell Remove Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-remove-application.md
description: Azure PowerShell Script Sample - Remove an application from a Servi
-tags: azure-service-management
- Last updated 01/18/2018
service-fabric Service Fabric Powershell Upgrade Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/scripts/service-fabric-powershell-upgrade-application.md
description: Azure PowerShell Script Sample - Upgrade and monitor an Azure Servi
-tags: azure-service-management
- Last updated 01/18/2018
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md
Previously updated : 04/19/2022 Last updated : 02/16/2024 ms.devlang: powershell # ms.devlang: powershell, azurecli
Microsoft Entra authorizes access rights to secured resources through [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md). Azure Storage defines a set of Azure built-in roles that encompass common sets of permissions used to access blob data.
-When an Azure role is assigned to a Microsoft Entra security principal, Azure grants access to those resources for that security principal. A Microsoft Entra security principal may be a user, a group, an application service principal, or a [managed identity for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+When an Azure role is assigned to a Microsoft Entra security principal, Azure grants access to those resources for that security principal. A Microsoft Entra security principal can be a user, a group, an application service principal, or a [managed identity for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
To learn more about using Microsoft Entra ID to authorize access to blob data, see [Authorize access to blobs using Microsoft Entra ID](authorize-access-azure-active-directory.md).
To access blob data in the Azure portal with Microsoft Entra credentials, a user
To learn how to assign these roles to a user, follow the instructions provided in [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-The [Reader](../../role-based-access-control/built-in-roles.md#reader) role is an Azure Resource Manager role that permits users to view storage account resources, but not modify them. It does not provide read permissions to data in Azure Storage, but only to account management resources. The **Reader** role is necessary so that users can navigate to blob containers in the Azure portal.
+The [Reader](../../role-based-access-control/built-in-roles.md#reader) role is an Azure Resource Manager role that permits users to view storage account resources, but not modify them. It doesn't provide read permissions to data in Azure Storage, but only to account management resources. The **Reader** role is necessary so that users can navigate to blob containers in the Azure portal.
-For example, if you assign the **Storage Blob Data Contributor** role to user Mary at the level of a container named **sample-container**, then Mary is granted read, write, and delete access to all of the blobs in that container. However, if Mary wants to view a blob in the Azure portal, then the **Storage Blob Data Contributor** role by itself will not provide sufficient permissions to navigate through the portal to the blob in order to view it. The additional permissions are required to navigate through the portal and view the other resources that are visible there.
+For example, if you assign the **Storage Blob Data Contributor** role to user Mary at the level of a container named **sample-container**, then Mary is granted read, write, and delete access to all of the blobs in that container. However, if Mary wants to view a blob in the Azure portal, the **Storage Blob Data Contributor** role by itself won't provide sufficient permissions to navigate through the portal to the blob in order to view it. Additional permissions are required to navigate through the portal and view the other resources that are visible there.
-A user must be assigned the **Reader** role to use the Azure portal with Microsoft Entra credentials. However, if a user has been assigned a role with **Microsoft.Storage/storageAccounts/listKeys/action** permissions, then the user can use the portal with the storage account keys, via Shared Key authorization. To use the storage account keys, Shared Key access must be permitted for the storage account. For more information on permitting or disallowing Shared Key access, see [Prevent Shared Key authorization for an Azure Storage account](../common/shared-key-authorization-prevent.md).
+A user must be assigned the **Reader** role to use the Azure portal with Microsoft Entra credentials. However, if a user is assigned a role with **Microsoft.Storage/storageAccounts/listKeys/action** permissions, then the user can use the portal with the storage account keys, via Shared Key authorization. To use the storage account keys, Shared Key access must be permitted for the storage account. For more information on permitting or disallowing Shared Key access, see [Prevent Shared Key authorization for an Azure Storage account](../common/shared-key-authorization-prevent.md).
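As a rough illustration of that setting, the following PowerShell sketch shows one way to check and then disallow Shared Key authorization on a storage account. The resource group and account names are hypothetical placeholders, and the linked article remains the authoritative guidance.

```azurepowershell
# Hypothetical resource names; replace with your own values.
$rgName      = "sample-resource-group"
$accountName = "samplestorageaccount"

# Check whether Shared Key authorization is currently permitted.
# A value of $true (or $null, meaning the default) indicates Shared Key access is allowed.
(Get-AzStorageAccount -ResourceGroupName $rgName -Name $accountName).AllowSharedKeyAccess

# Disallow Shared Key authorization so that only Microsoft Entra authorization is accepted.
Set-AzStorageAccount -ResourceGroupName $rgName -Name $accountName -AllowSharedKeyAccess $false
```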
-You can also assign an Azure Resource Manager role that provides additional permissions beyond than the **Reader** role. Assigning the least possible permissions is recommended as a security best practice. For more information, see [Best practices for Azure RBAC](../../role-based-access-control/best-practices.md).
+You can also assign an Azure Resource Manager role that provides additional permissions beyond the **Reader** role. Assigning the least possible permissions is recommended as a security best practice. For more information, see [Best practices for Azure RBAC](../../role-based-access-control/best-practices.md).
> [!NOTE]
> Prior to assigning yourself a role for data access, you will be able to access data in your storage account via the Azure portal because the Azure portal can also use the account key for data access. For more information, see [Choose how to authorize access to blob data in the Azure portal](../blobs/authorize-data-operations-portal.md).

# [PowerShell](#tab/powershell)
-To assign an Azure role to a security principal with PowerShell, call the [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) command. In order to run the command, you must have a role that includes **Microsoft.Authorization/roleAssignments/write** permissions assigned to you at the corresponding scope or above.
+To assign an Azure role to a security principal with PowerShell, call the [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) command. In order to run the command, you must have a role that includes **Microsoft.Authorization/roleAssignments/write** permissions assigned to you at the corresponding scope or higher.
-The format of the command can differ based on the scope of the assignment, but the `-ObjectId` and `-RoleDefinitionName` are required parameters. Passing a value for the `-Scope` parameter, while not required, is highly recommended to retain the principle of least privilege. By limiting roles and scopes, you limit the resources which are at risk if the security principal is ever compromised.
+The format of the command can differ based on the scope of the assignment, but the `-ObjectId` and `-RoleDefinitionName` are required parameters. Passing a value for the `-Scope` parameter, while not required, is highly recommended to retain the principle of least privilege. By limiting roles and scopes, you limit the resources that are at risk if the security principal is ever compromised.
-The `-ObjectId` parameter is the Microsoft Entra object ID of the user, group or service principal to which the role will be assigned. To retrieve the identifier, you can use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to filter Microsoft Entra users, as shown in the following example.
+The `-ObjectId` parameter is the Microsoft Entra object ID of the user, group, or service principal to which the role is being assigned. To retrieve the identifier, you can use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to filter Microsoft Entra users, as shown in the following example.
```azurepowershell
Get-AzADUser -DisplayName '<Display Name>'
```
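With the object ID in hand, a role assignment scoped to a single container might look like the following sketch. The object ID shown is a placeholder, the role name is only an example, and the scope string uses the container's full resource path; replace all of the bracketed values with your own.

```azurepowershell
# Illustrative values; replace the object ID and the bracketed placeholders with your own.
New-AzRoleAssignment -ObjectId "ab12cd34-ef56-ab12-cd34-ef56ab12cd34" `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>/blobServices/default/containers/<container-name>"
```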
For information about assigning roles with PowerShell at the subscription or res
# [Azure CLI](#tab/azure-cli)
-To assign an Azure role to a security principal with Azure CLI, use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command. The format of the command can differ based on the scope of the assignment. In order to run the command, you must have a role that includes **Microsoft.Authorization/roleAssignments/write** permissions assigned to you at the corresponding scope or above.
+To assign an Azure role to a security principal with Azure CLI, use the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command. The format of the command can differ based on the scope of the assignment. In order to run the command, you must have a role that includes **Microsoft.Authorization/roleAssignments/write** permissions assigned to you at the corresponding scope or higher.
To assign a role scoped to a container, specify a string containing the scope of the container for the `--scope` parameter. The scope for a container is in the form:
az role assignment create \
--scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>/blobServices/default/containers/<container-name>" ```
-For information about assigning roles with PowerShell at the subscription, resource group, or storage account scope, see [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md).
+The following example assigns the **Storage Blob Data Reader** role to a user by specifying the object ID. To learn more about the `--assignee-object-id` and `--assignee-principal-type` parameters, see [az role assignment](/cli/azure/role/assignment). In this example, the role assignment is scoped to the level of the storage account. Make sure to replace the sample values and the placeholder values in brackets (`<>`) with your own values:
+
+<!-- replaycheck-task id="66526dae" -->
+```azurecli-interactive
+az role assignment create \
+ --role "Storage Blob Data Reader" \
+ --assignee-object-id "ab12cd34-ef56-ab12-cd34-ef56ab12cd34" \
+ --assignee-principal-type "User" \
+ --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
+```
+
+For information about assigning roles with Azure CLI at the subscription, resource group, or storage account scope, see [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md).
# [Template](#tab/template)
To learn how to use an Azure Resource Manager template to assign an Azure role,
Keep in mind the following points about Azure role assignments in Azure Storage: -- When you create an Azure Storage account, you are not automatically assigned permissions to access data via Microsoft Entra ID. You must explicitly assign yourself an Azure role for Azure Storage. You can assign it at the level of your subscription, resource group, storage account, or container.
+- When you create an Azure Storage account, you aren't automatically assigned permissions to access data via Microsoft Entra ID. You must explicitly assign yourself an Azure role for Azure Storage. You can assign it at the level of your subscription, resource group, storage account, or container.
- If the storage account is locked with an Azure Resource Manager read-only lock, then the lock prevents the assignment of Azure roles that are scoped to the storage account or a container.-- If you have set the appropriate allow permissions to access data via Microsoft Entra ID and are unable to access the data, for example you are getting an "AuthorizationPermissionMismatch" error. Be sure to allow enough time for the permissions changes you have made in Microsoft Entra ID to replicate, and be sure that you do not have any deny assignments that block your access, see [Understand Azure deny assignments](../../role-based-access-control/deny-assignments.md).
+- If you set the appropriate allow permissions to access data via Microsoft Entra ID but are unable to access the data (for example, you're getting an "AuthorizationPermissionMismatch" error), be sure to allow enough time for the permissions changes you made in Microsoft Entra ID to replicate, and be sure that you don't have any deny assignments that block your access. See [Understand Azure deny assignments](../../role-based-access-control/deny-assignments.md).
> [!NOTE] > You can create custom Azure RBAC roles for granular access to blob data. For more information, see [Azure custom roles](../../role-based-access-control/custom-roles.md).
storage Container Storage Aks Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md
If the resource group was created successfully, you'll see output similar to thi
Before deploying Azure Container Storage, you'll need to decide which back-end storage option you want to use to create your storage pool and persistent volumes. Three options are currently available: -- **Azure Elastic SAN**: Azure Elastic SAN preview is a good fit for general purpose databases, streaming and messaging services, CI/CD environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently, however persistent volumes can only be attached by one consumer at a time.
+- **Azure Elastic SAN**: Azure Elastic SAN is a good fit for general purpose databases, streaming and messaging services, CI/CD environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently; however, persistent volumes can be attached by only one consumer at a time.
- **Azure Disks**: Azure Disks are a good fit for databases such as MySQL, MongoDB, and PostgreSQL. Storage is provisioned per target container storage pool size and maximum volume size.
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Title: Release notes for Azure File Sync
-description: Release notes for Azure File Sync which lets you centralize your organization's file shares in Azure Files.
+description: Release notes for Azure File Sync, which lets you centralize your organization's file shares in Azure Files.
The following Azure File Sync agent versions are supported:
| Milestone | Agent version number | Release date | Status |
|-|-|-|-|
+| V17.1 Release - [KB5023054](https://support.microsoft.com/topic/azure-file-sync-agent-v17-1-release-february-2024-security-only-update-bd1ce41c-27f4-4e3d-a80f-92f74817c55b)| 17.1.0.0 | February 13, 2024 | Supported - Security Update|
+| V16.2 Release - [KB5023052](https://support.microsoft.com/topic/azure-file-sync-agent-v16-2-release-february-2024-security-only-update-8247bf99-8f51-4eb6-b378-b86b6d1d45b8)| 16.2.0.0 | February 13, 2024 | Supported - Security Update|
| V17.0 Release - [KB5023053](https://support.microsoft.com/topic/azure-file-sync-agent-v17-release-december-2023-flighting-2d8cba16-c035-4c54-b35d-1bd8fd795ba9)| 17.0.0.0 | December 6, 2023 | Supported - Flighting | | V16.0 Release - [KB5013877](https://support.microsoft.com/topic/ffdc8fe2-c653-43c8-8b47-0865267fd520)| 16.0.0.0 | January 30, 2023 | Supported | | V15.2 Release - [KB5013875](https://support.microsoft.com/topic/9159eee2-3d16-4523-ade4-1bac78469280)| 15.2.0.0 | November 21, 2022 | Supported - Agent version will expire on March 19, 2024 |
The following Azure File Sync agent versions have expired and are no longer supp
### Azure File Sync agent update policy [!INCLUDE [storage-sync-files-agent-update-policy](../../../includes/storage-sync-files-agent-update-policy.md)]
+## Version 17.1.0.0 (Security Update)
+The following release notes are for Azure File Sync version 17.1.0.0 (released February 13, 2024). This release contains a security update for the Azure File Sync agent. These notes are in addition to the release notes listed for version 17.0.0.0.
+
+### Improvements and issues that are fixed
+- Fixes an issue that might allow unauthorized users to create new files in locations where they don't have permission. This is a security-only update. For more information about this vulnerability, see [CVE-2024-21397](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-21397).
+
+## Version 16.2.0.0 (Security Update)
+The following release notes are for Azure File Sync version 16.2.0.0 (released February 13, 2024). This release contains security updates for the Azure File Sync agent. These notes are in addition to the release notes listed for version 16.0.0.0.
+
+### Improvements and issues that are fixed
+- Fixes an issue that might allow unauthorized users to create new files in locations where they don't have permission. This is a security-only update. For more information about this vulnerability, see [CVE-2024-21397](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-21397).
+ ## Version 17.0.0.0 (Flighting) The following release notes are for Azure File Sync version 17.0.0.0 (released December 6, 2023). This release contains improvements for the Azure File Sync service and agent.
The following release notes are for Azure File Sync version 16.0.0.0 (released J
### Improvements and issues that are fixed - Improved Azure File Sync service availability
- - Azure File Sync is now a zone-redundant service which means an outage in a zone has limited impact while improving the service resiliency to minimize customer impact. To fully leverage this improvement, configure your storage accounts to use zone-redundant storage (ZRS) or Geo-zone redundant storage (GZRS) replication. To learn more about different redundancy options for your storage accounts, see [Azure Files redundancy](../files/files-redundancy.md).
+ - Azure File Sync is now a zone-redundant service, which means that an outage in a zone has limited impact because the service is more resilient, minimizing customer impact. To take full advantage of this improvement, configure your storage accounts to use zone-redundant storage (ZRS) or geo-zone-redundant storage (GZRS) replication. To learn more about different redundancy options for your storage accounts, see [Azure Files redundancy](../files/files-redundancy.md).
- Immediately run server change enumeration to detect file changes that were missed on the server
   - Azure File Sync uses the [Windows USN journal](/windows/win32/fileio/change-journals) feature on Windows Server to immediately detect files that were changed and upload them to the Azure file share. If changed files are missed due to journal wrap or other issues, the files will not sync to the Azure file share until the changes are detected. Azure File Sync has a server change enumeration job that runs every 24 hours on the server endpoint path to detect changes that were missed by the USN journal. If you don't want to wait until the next server change enumeration job runs, you can now use the Invoke-StorageSyncServerChangeDetection PowerShell cmdlet to immediately run server change enumeration on a server endpoint path, as sketched below.
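A minimal sketch of that call, run on the registered Windows Server, might look like the following. The server endpoint path is a hypothetical example, and the module path assumes the agent's default install location.

```azurepowershell
# Run on the registered Windows Server. The DLL path assumes the default agent install location.
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"

# Immediately run change detection on a hypothetical server endpoint path.
Invoke-StorageSyncServerChangeDetection -ServerEndpointPath "D:\Shares\Finance"
```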
storage Analyze Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/analyze-files-metrics.md
Previously updated : 09/06/2023 Last updated : 02/13/2024 # Analyze Azure Files metrics using Azure Monitor
-Understanding how to monitor file share performance is critical to ensuring that your application is running as efficiently as possible. This article shows you how to use [Azure Monitor](../../azure-monitor/overview.md) to analyze Azure Files metrics such as availability, latency, and utilization.
+Understanding how to monitor file share performance is critical to ensuring that your application is running as efficiently as possible. This article shows you how to use [Azure Monitor](/azure/azure-monitor/overview) to analyze Azure Files metrics such as availability, latency, and utilization.
+
+See [Monitor Azure Files](storage-files-monitoring.md) for details on the monitoring data you can collect for Azure Files and how to use it.
## Applies to | File share type | SMB | NFS |
Understanding how to monitor file share performance is critical to ensuring that
## Supported metrics
-For a list of all Azure Monitor support metrics, which includes Azure Files, see [Azure Monitor supported metrics](../../azure-monitor/essentials/metrics-supported.md#microsoftstoragestorageaccountsfileservices).
+Metrics for Azure Files are in these namespaces:
-### [Azure portal](#tab/azure-portal)
+- Microsoft.Storage/storageAccounts
+- Microsoft.Storage/storageAccounts/fileServices
-You can analyze metrics for Azure Storage with metrics from other Azure services by using Metrics Explorer. Open Metrics Explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](../../azure-monitor/essentials/analyze-metrics.md).
+For a list of available metrics for Azure Files, see [Azure Files monitoring data reference](storage-files-monitoring-reference.md#supported-metrics-for-microsoftstoragestorageaccountsfileservices).
-For metrics that support dimensions, you can filter the metric with the desired dimension value. For a complete list of the dimensions that Azure Storage supports, see [Metrics dimensions](storage-files-monitoring-reference.md#metrics-dimensions). Metrics for Azure Files are in these namespaces:
+For a list of all Azure Monitor supported metrics, which includes Azure Files, see [Azure Monitor supported metrics](/azure/azure-monitor/reference/supported-metrics/metrics-index#supported-metrics-per-resource-type).
-- Microsoft.Storage/storageAccounts-- Microsoft.Storage/storageAccounts/fileServices
+## View Azure Files metrics data
+
+You can view Azure Files metrics by using the Azure portal, PowerShell, Azure CLI, or .NET.
+
+### [Azure portal](#tab/azure-portal)
+
+You can analyze metrics for Azure Storage with metrics from other Azure services by using Azure Monitor Metrics Explorer. Open metrics explorer by choosing **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics).
+
+For metrics that support dimensions, you can filter the metric with the desired dimension value. For a complete list of the dimensions that Azure Storage supports, see [Metrics dimensions](storage-files-monitoring-reference.md#metrics-dimensions).
### [PowerShell](#tab/azure-powershell)
Azure Monitor provides the [.NET SDK](https://www.nuget.org/packages/Microsoft.A
In these examples, replace the `<resource-ID>` placeholder with the resource ID of the entire storage account or the Azure Files service. You can find these resource IDs on the **Properties** pages of your storage account in the Azure portal.
-Replace the `<subscription-ID>` variable with the ID of your subscription. For guidance on how to obtain values for `<tenant-ID>`, `<application-ID>`, and `<AccessKey>`, see [Use the portal to create a Microsoft Entra application and service principal that can access resources](../../active-directory/develop/howto-create-service-principal-portal.md).
+Replace the `<subscription-ID>` variable with the ID of your subscription. For guidance on how to obtain values for `<tenant-ID>`, `<application-ID>`, and `<AccessKey>`, see [Use the portal to create a Microsoft Entra application and service principal that can access resources](/azure/active-directory/develop/howto-create-service-principal-portal).
### List the account-level metric definition
Compared against the **Bandwidth by Max MiB/s**, we achieved 123 MiB/s at peak.
:::image type="content" source="media/analyze-files-metrics/bandwidth-by-max-mibs.png" alt-text="Screenshot showing bandwidth by max MIBS." lightbox="media/analyze-files-metrics/bandwidth-by-max-mibs.png" border="false":::
-## Analyze logs
-
-You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to find those logs, see [Azure resource logs](../../azure-monitor/essentials/resource-logs.md).
-
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../../azure-monitor/essentials/resource-logs-schema.md). The schema for Azure Files resource logs is found in [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
-
-To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages).
-
-Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure File service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
-
-The [Activity log](../../azure-monitor/essentials/activity-log.md) is a type of platform log located in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
--
-### Log authenticated requests
-
- The following types of authenticated requests are logged:
--- Successful requests-- Failed requests, including timeout, throttling, network, authorization, and other errors-- Requests that use Kerberos, NTLM or shared access signature (SAS), including failed and successful requests-- Requests to analytics data (classic log data in the **$logs** container and classic metric data in the **$metric** tables)-
-Requests made by the Azure Files service itself, such as log creation or deletion, aren't logged. For a full list of the SMB and REST requests that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
-
-### Sample Kusto queries
-
-If you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see [Log Analytics tutorial](../../azure-monitor/logs/log-analytics-tutorial.md).
-
-Here are some queries that you can enter in the **Log search** bar to help you monitor your Azure Files. These queries work with the [new language](../../azure-monitor/logs/log-query-overview.md).
-
-> [!IMPORTANT]
-> When you select **Logs** from the storage account resource group menu, Log Analytics is opened with the query scope set to the current resource group. This means that log queries will only include data from that resource group. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../../azure-monitor/logs/scope.md) for details.
-
-Use these queries to help you monitor your Azure file shares:
--- View SMB errors over the last week-
-```Kusto
-StorageFileLogs
-| where Protocol == "SMB" and TimeGenerated >= ago(7d) and StatusCode contains "-"
-| sort by StatusCode
-```
-- Create a pie chart of SMB operations over the last week-
-```Kusto
-StorageFileLogs
-| where Protocol == "SMB" and TimeGenerated >= ago(7d)
-| summarize count() by OperationName
-| sort by count_ desc
-| render piechart
-```
--- View REST errors over the last week-
-```Kusto
-StorageFileLogs
-| where Protocol == "HTTPS" and TimeGenerated >= ago(7d) and StatusText !contains "Success"
-| sort by StatusText asc
-```
--- Create a pie chart of REST operations over the last week-
-```Kusto
-StorageFileLogs
-| where Protocol == "HTTPS" and TimeGenerated >= ago(7d)
-| summarize count() by OperationName
-| sort by count_ desc
-| render piechart
-```
-
-To view the list of column names and descriptions for Azure Files, see [StorageFileLogs](/azure/azure-monitor/reference/tables/storagefilelogs).
-
-For more information on how to write queries, see [Log Analytics tutorial](../../azure-monitor/logs/log-analytics-tutorial.md).
-
-## Next steps
+## Related content
- [Monitor Azure Files](storage-files-monitoring.md)-- [Create monitoring alerts for Azure Files](files-monitoring-alerts.md) - [Azure Files monitoring data reference](storage-files-monitoring-reference.md)-- [Monitor Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md)
+- [Create monitoring alerts for Azure Files](files-monitoring-alerts.md)
+- [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource)
- [Understand Azure Files performance](understand-performance.md) - [Troubleshoot ClientOtherErrors](/troubleshoot/azure/azure-storage/files-client-other-errors?toc=/azure/storage/files/toc.json)
storage Files Monitoring Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-monitoring-alerts.md
Previously updated : 09/06/2023 Last updated : 02/13/2024 # Create monitoring alerts for Azure Files
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../../azure-monitor/alerts/alerts-metric-overview.md), [logs](../../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../../azure-monitor/alerts/activity-log-alerts.md).
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts).
-To learn more about how to create an alert, see [Create or edit an alert rule](../../azure-monitor/alerts/alerts-create-new-alert-rule.md).
+This article shows you how to create alerts on throttling, capacity, egress, and high server latency. To learn more about creating alerts, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
+
+For more information about alert types and alerts, see [Monitor Azure Files](storage-files-monitoring.md#alerts).
## Applies to | File share type | SMB | NFS |
The following table lists some example scenarios to monitor and the proper metri
To create an alert that will notify you if a file share is being throttled, follow these steps.
-1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](../../azure-monitor/alerts/alerts-create-new-alert-rule.md).
+1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
2. In the **Condition** tab, select the **Transactions** metric.
To create an alert that will notify you if a file share is being throttled, foll
## How to create an alert if the Azure file share size is 80% of capacity
-1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](../../azure-monitor/alerts/alerts-create-new-alert-rule.md).
+1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
2. In the **Condition** tab of the **Create an alert rule** dialog box, select the **File Capacity** metric.
To create an alert that will notify you if a file share is being throttled, foll
## How to create an alert if the Azure file share egress has exceeded 500 GiB in a day
-1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](../../azure-monitor/alerts/alerts-create-new-alert-rule.md).
+1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
2. In the **Condition** tab of the **Create an alert rule** dialog box, select the **Egress** metric.
To create an alert that will notify you if a file share is being throttled, foll
To create an alert for high server latency (average), follow these steps.
-1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](../../azure-monitor/alerts/alerts-create-new-alert-rule.md).
+1. Open the **Create an alert rule** dialog box. For more information, see [Create or edit an alert rule](/azure/azure-monitor/alerts/alerts-create-new-alert-rule).
2. In the **Condition** tab of the **Create an alert rule** dialog box, select the **Success Server Latency** metric.
To create an alert for high server latency (average), follow these steps.
4. Define the **Alert Logic** by selecting either Static or Dynamic. For Static, select **Average** Aggregation, **Greater than** Operator, and Threshold value. For Dynamic, select **Average** Aggregation, **Greater than** Operator, and Threshold Sensitivity. > [!TIP]
- > If you're using a static threshold, the metric chart can help determine a reasonable threshold value if the file share is currently experiencing high latency. If you're using a dynamic threshold, the metric chart will display the calculated thresholds based on recent data. We recommend using the Dynamic logic with Medium threshold sensitivity and further adjust as needed. To learn more, see [Understanding dynamic thresholds](../../azure-monitor/alerts/alerts-dynamic-thresholds.md#understand-dynamic-thresholds-charts).
+ > If you're using a static threshold, the metric chart can help determine a reasonable threshold value if the file share is currently experiencing high latency. If you're using a dynamic threshold, the metric chart will display the calculated thresholds based on recent data. We recommend using the Dynamic logic with Medium threshold sensitivity and adjusting further as needed. To learn more, see [Understanding dynamic thresholds](/azure/azure-monitor/alerts/alerts-dynamic-thresholds#understand-dynamic-thresholds-charts).
5. Define the lookback period and frequency of evaluation.
To create an alert for high server latency (average), follow these steps.
8. Select **Review + create** to create the alert.
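If you prefer to script this alert instead of using the portal dialog, the following PowerShell sketch outlines roughly how the high server latency alert could be created. The rule name, the 100 ms static threshold, and the resource IDs are assumptions you'd replace with your own values; the portal steps above remain the documented approach.

```azurepowershell
# Sketch only: names, IDs, and the 100 ms threshold are illustrative assumptions.
$fileServiceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>/fileServices/default"

# Static threshold: average SuccessServerLatency greater than 100 ms.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "SuccessServerLatency" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 100

# Evaluate a 1-hour window every 15 minutes. Add -ActionGroupId to be notified when the alert fires.
Add-AzMetricAlertRuleV2 -Name "files-high-server-latency" `
    -ResourceGroupName "<resource-group-name>" `
    -TargetResourceId $fileServiceId `
    -Condition $criteria `
    -WindowSize 01:00:00 `
    -Frequency 00:15:00 `
    -Severity 3
```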
-## Next steps
+## Related content
- [Monitor Azure Files](storage-files-monitoring.md)-- [Analyze Azure Files metrics using Azure Monitor](analyze-files-metrics.md) - [Azure Files monitoring data reference](storage-files-monitoring-reference.md)-- [Monitor Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md)
+- [Analyze Azure Files metrics](analyze-files-metrics.md)
+- [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource)
- [Azure Storage metrics migration](../common/storage-metrics-migration.md)
storage Storage Files Monitoring Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring-reference.md
Title: Azure Files monitoring data reference
-description: Log and metrics reference for monitoring data from Azure Files.
---
+ Title: Monitoring data reference for Azure Files
+description: This article contains important reference material you need when you monitor Azure Files.
Last updated : 02/13/2024+ Previously updated : 07/31/2023+ -+
+<!--
+IMPORTANT
+To make this template easier to use, first:
+1. Search and replace Azure Files with the official name of your service.
+2. Search and replace blob-storage with the service name to use in GitHub filenames.-->
+
+<!-- VERSION 3.0 2024_01_01
+For background about this template, see https://review.learn.microsoft.com/en-us/help/contribute/contribute-monitoring?branch=main -->
+
+<!-- Most services can use the following sections unchanged. All headings are required unless otherwise noted.
+The sections use #included text you don't have to maintain, which changes when Azure Monitor functionality changes. Add info into the designated service-specific places if necessary. Remove #includes or template content that aren't relevant to your service.
+At a minimum your service should have the following two articles:
+1. The primary monitoring article (based on the template monitor-service-template.md)
+ - Title: "Monitor Azure Files"
+ - TOC Title: "Monitor"
+ - Filename: "monitor-blob-storage.md"
+2. A reference article that lists all the metrics and logs for your service (based on this template).
+ - Title: "Azure Files monitoring data reference"
+ - TOC Title: "Monitoring data reference"
+ - Filename: "monitor-blob-storage-reference.md".
+-->
+ # Azure Files monitoring data reference
-See [Monitoring Azure Files](storage-files-monitoring.md) for details on collecting and analyzing monitoring data for Azure Files.
+<!-- Intro -->
+
+See [Monitor Azure Files](storage-files-monitoring.md) for details on the data you can collect for Azure Files and how to use it.
## Applies to | File share type | SMB | NFS |
See [Monitoring Azure Files](storage-files-monitoring.md) for details on collect
| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-## Metrics
-
-The following tables list the platform metrics collected for Azure Files.
-
-### Capacity metrics
-
-Capacity metrics values are refreshed daily (up to 24 hours). The time grain defines the time interval for which metrics values are presented. The supported time grain for all capacity metrics is one hour (PT1H).
-
-Azure Files provides the following capacity metrics in Azure Monitor.
-
-#### Account Level
-
+<!-- ## Metrics. Required section. -->
-#### Azure Files
+### Supported metrics for Microsoft.Storage/storageAccounts
+The following table lists the metrics available for the Microsoft.Storage/storageAccounts resource type.
-This table shows [supported metrics for Azure Files](/azure/azure-monitor/reference/supported-metrics/microsoft-storage-storageaccounts-fileservices-metrics).
+### Supported metrics for Microsoft.Storage/storageAccounts/fileServices
+The following table lists the metrics available for the Microsoft.Storage/storageAccounts/fileServices resource type.
-| Metric | Description |
-| - | -- |
-| FileCapacity | The amount of File storage used by the storage account. <br/><br/> Unit: Bytes <br/> Aggregation Type: Average <br/> Dimensions: FileShare, Tier <br/> Value example: 1024 |
-| FileCount | The number of files in the storage account. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Dimensions: FileShare, Tier <br/> Value example: 1024 |
-| FileShareCapacityQuota | The upper limit on the amount of storage that can be used by Azure Files Service in bytes. <br/><br/> Unit: Bytes <br/> Aggregation Type: Average <br/> Value example: 1024|
-| FileShareCount | The number of file shares in the storage account. <br/><br/> Unit: Count <br/> Aggregation Type: Average <br/> Value example: 1024 |
-| FileShareProvisionedIOPS | The number of provisioned IOPS on a file share. This metric is applicable to premium file storage only. <br/><br/> Unit: CountPerSecond <br/> Aggregation Type: Average |
-| FileShareProvisionedBandwidthMiBps | The number of provisioned bandwidth in MiB/s on a file share. This metric is applicable to premium file storage only. <br/><br/> Unit: CountPerSecond <br/> Aggregation Type: Average |
-| FileShareSnapshotCount | The number of snapshots present on the share in storage account's Azure Files service. <br/><br/> Unit: Count <br/> Aggregation Type: Average |
-| FileShareSnapshotSize | The amount of storage used by the snapshots in storage account's Azure Files service. <br/><br/> Unit: Bytes <br/> Aggregation Type: Average |
-| FileShareMaxUsedIOPS | The maximum number of used IOPS at the lowest time granularity of 1-minute for the premium file share in the premium files storage account. <br/><br/> Unit: CountPerSecond <br/> Aggregation Type: Max |
-| FileShareMaxUsedBandwidthMiBps | The maximum number of used bandwidth in MiB/s at the lowest time granularity of 1-minute for the premium file share in the premium files storage account. <br/><br/> Unit: CountPerSecond <br/> Aggregation Type: Max |
-
-### Transaction metrics
-
-Transaction metrics are emitted on every request to a storage account from Azure Storage to Azure Monitor. In the case of no activity on your storage account, there will be no data on transaction metrics in the period. All transaction metrics are available at both account and Azure Files service level. The time grain defines the time interval that metric values are presented. The supported time grains for all transaction metrics are PT1H and PT1M.
--
-<a id="metrics-dimensions"></a>
-
-## Metrics dimensions
-
-Azure Files supports following dimensions for metrics in Azure Monitor.
+<a name="metrics-dimensions"></a>
+<!-- ## Metric dimensions. Required section. -->
> [!NOTE] > The File Share dimension is not available for standard file shares (only premium file shares). When using standard file shares, the metrics provided are for all files shares in the storage account. To get per-share metrics for standard file shares, create one file share per storage account. [!INCLUDE [Metrics dimensions](../../../includes/azure-storage-account-metrics-dimensions.md)]
-<a id="resource-logs-preview"></a>
+<!-- ## Resource logs. Required section. -->
+<a name="resource-logs-preview"></a>
-## Resource logs
+### Supported resource logs for Microsoft.Storage/storageAccounts/fileServices
-The following table lists the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
+<!-- ## Azure Monitor Logs tables. Required section. -->
-### Fields that describe the operation
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics)
+- [StorageFileLogs](/azure/azure-monitor/reference/tables/storagefilelogs)
+
+The following tables list the properties for Azure Storage resource logs when they're collected in Azure Monitor Logs or Azure Storage. The properties describe the operation, the service, and the type of authorization that was used to perform the operation.
+### Fields that describe the operation
[!INCLUDE [Account level capacity metrics](../../../includes/azure-storage-logs-properties-operation.md)]
The following table lists the properties for Azure Storage resource logs when th
[!INCLUDE [Account level capacity metrics](../../../includes/azure-storage-logs-properties-service.md)]
-## See also
+<!-- ## Activity log. Required section. -->
+- [Microsoft.Storage resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftstorage)
+
+<!-- ## Other schemas. Optional section. Please keep heading in this order. If your service uses other schemas, add the following include and information.
+<!-- List other schemas and their usage here. These can be resource logs, alerts, event hub formats, etc. depending on what you think is important. You can put JSON messages, API responses not listed in the REST API docs, and other similar types of info here. -->
+
+## Related content
-- See [Monitoring Azure Files](storage-files-monitoring-reference.md) for a description of monitoring Azure Storage.-- See [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitor Azure Files](storage-files-monitoring.md) for a description of monitoring Azure Files.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
storage Storage Files Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring.md
Title: Monitor Azure Files
-description: Learn how to monitor the performance and availability of Azure Files. Monitor Azure Files data and learn about configuration.
+description: Start here to learn how to monitor Azure Files.
Last updated : 02/13/2024++ --- Previously updated : 09/06/2023 -+
+<!--
+IMPORTANT
+To make this template easier to use, first:
+1. Search and replace Azure Files with the official name of your service.
+2. Search and replace files with the service name to use in GitHub filenames.-->
+
+<!-- VERSION 3.0 2024_01_07
+For background about this template, see https://review.learn.microsoft.com/en-us/help/contribute/contribute-monitoring?branch=main -->
+
+<!-- Most services can use the following sections unchanged. The sections use #included text you don't have to maintain, which changes when Azure Monitor functionality changes. Add info into the designated service-specific places if necessary. Remove #includes or template content that aren't relevant to your service.
+At a minimum your service should have the following two articles:
+1. The primary monitoring article (based on this template)
+ - Title: "Monitor Azure Files"
+ - TOC Title: "Monitor"
+ - Filename: "monitor-files.md"
+2. A reference article that lists all the metrics and logs for your service (based on the template data-reference-template.md).
+ - Title: "Azure Files monitoring data reference"
+ - TOC Title: "Monitoring data reference"
+ - Filename: "storage-files-monitoring-reference.md".
+-->
+ # Monitor Azure Files
-When you have critical applications and business processes that rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data that's generated by Azure Files and how you can use the features of Azure Monitor to analyze alerts on this data.
+<!-- Intro -->
## Applies to | File share type | SMB | NFS |
When you have critical applications and business processes that rely on Azure re
| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-## Monitor overview
+>[!IMPORTANT]
+>Metrics and logs in Azure Monitor support only Azure Resource Manager storage accounts. Azure Monitor doesn't support classic storage accounts. If you want to use metrics or logs on a classic storage account, you need to migrate to an Azure Resource Manager storage account. For more information, see [Migrate to Azure Resource Manager](/azure/virtual-machines/migration-classic-resource-manager-overview).
+
+<!-- ## Insights. Optional section. If your service has insights, add the following include and information. -->
+<!-- Insights service-specific information. Add brief information about what your Azure Monitor insights provide here. You can refer to another article that gives details or add a screenshot. -->
+Azure Storage insights offer a unified view of storage performance, capacity, and availability. See [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md).
+
+<!-- ## Resource types. Required section. -->
+
+<!-- ## Data storage. Required section. Optionally, add service-specific information about storing your monitoring data after the include. -->
+<!-- Add service-specific information about storing monitoring data here, if applicable. For example, SQL Server stores other monitoring data in its own databases. -->
+
+<!-- METRICS SECTION START ->
+
+<!-- ## Platform metrics. Required section.
+ - If your service doesn't collect platform metrics, use the following include: [!INCLUDE [horz-monitor-no-platform-metrics](~/articles/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-no-platform-metrics.md)]
+ - If your service collects platform metrics, add the following include, statement, and service-specific information as appropriate. -->
+For a list of available metrics for Azure Files, see [Azure Files monitoring data reference](storage-files-monitoring-reference.md#metrics).
+<!-- Platform metrics service-specific information. Add service-specific information about your platform metrics here.-->
+
+<!-- ## Prometheus/container metrics. Optional. If your service uses containers/Prometheus metrics, add the following include and information.
+<!-- Add service-specific information about your container/Prometheus metrics here.-->
+
+<!-- ## System metrics. Optional. If your service uses system-imported metrics, add the following include and information.
+<!-- Add service-specific information about your system-imported metrics here.-->
+
+<!-- ## Custom metrics. Optional. If your service uses custom imported metrics, add the following include and information.
+<!-- Custom imported service-specific information. Add service-specific information about your custom imported metrics here.-->
+
+<!-- ## Non-Azure Monitor metrics. Optional. If your service uses any non-Azure Monitor based metrics, add the following include and information.
+<!-- Non-Monitor metrics service-specific information. Add service-specific information about your non-Azure Monitor metrics here.-->
+
+<!-- METRICS SECTION END ->
+
+<!-- LOGS SECTION START -->
+
+<!-- ## Resource logs. Required section.
+ - If your service doesn't collect resource logs, use the following include [!INCLUDE [horz-monitor-no-resource-logs](~/articles/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-no-resource-logs.md)]
+ - If your service collects resource logs, add the following include, statement, and service-specific information as appropriate. -->
+<a name="collection-and-routing"></a>
+For the available resource log categories, their associated Log Analytics tables, and the logs schemas for Azure Files, see [Azure Files monitoring data reference](storage-files-monitoring-reference.md#resource-logs).
+<!-- Resource logs service-specific information. Add service-specific information about your resource logs here.
+NOTE: Azure Monitor already has general information on how to configure and route resource logs. See https://learn.microsoft.com/azure/azure-monitor/platform/diagnostic-settings. Ideally, don't repeat that information here. You can provide a single screenshot of the diagnostic settings portal experience if you want. -->
+
+To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
+
+### Destination limitations
+
+For general destination limitations, see [Destination limitations](/azure/azure-monitor/essentials/diagnostic-settings#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
+
+- You can't send logs to the same storage account that you're monitoring with this setting.
+ This situation would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
+
+- You can't set a retention policy.
+
+ If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automatically managing the data lifecycle](../blobs/lifecycle-management-overview.md).
-The **Overview** page in the Azure portal for each Azure Files resource includes a brief view of the resource usage, such as requests and hourly billing. This information is useful, but only a small amount of the monitoring data is available. Some of this data is collected automatically and is available for analysis as soon as you create the resource. You can enable additional types of data collection with some configuration.
+ If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](/azure/azure-monitor/logs/data-retention-archive).
-## What is Azure Monitor?
-Azure Files creates monitoring data by using [Azure Monitor](../../azure-monitor/overview.md), which is a full stack monitoring service in Azure. Azure Monitor provides a complete set of features to monitor your Azure resources and resources in other clouds and on-premises.
+<!-- ## Activity log. Required section. Optionally, add service-specific information about your activity log after the include. -->
+<!-- Activity log service-specific information. Add service-specific information about your activity log here. -->
-Start with the article [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md), which describes the following:
+<!-- ## Imported logs. Optional section. If your service uses imported logs, add the following include and information.
+<!-- Add service-specific information about your imported logs here. -->
-- What is Azure Monitor?-- Costs associated with monitoring-- Monitoring data collected in Azure-- Configuring data collection-- Standard tools in Azure for analyzing and alerting on monitoring data
+<!-- ## Other logs. Optional section.
+If your service has other logs that aren't resource logs or in the activity log, add information that states what they are and what they cover here. You can describe how to route them in a later section. -->
-The following sections build on this article by describing the specific data gathered from Azure Files. Examples show how to configure data collection and analyze this data with Azure tools.
+<!-- LOGS SECTION END ->
-## Monitoring data
+<!-- ANALYSIS SECTION START -->
-Azure Files collects the same kinds of monitoring data as other Azure resources, which are described in [Monitoring data from Azure resources](../../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
+<!-- ## Analyze data. Required section. -->
-See [Azure File monitoring data reference](storage-files-monitoring-reference.md) for detailed information on the metrics and logs metrics created by Azure Files.
+<!-- ### External tools. Required section. -->
-Metrics and logs in Azure Monitor support only Azure Resource Manager storage accounts. Azure Monitor doesn't support classic storage accounts. If you want to use metrics or logs on a classic storage account, you need to migrate to an Azure Resource Manager storage account. See [Migrate to Azure Resource Manager](../../virtual-machines/migration-classic-resource-manager-overview.md).
+### Analyze metrics for Azure Files
-## Collection and routing
+Metrics for Azure Files are in these namespaces:
-Platform metrics and the Activity log are collected automatically, but can be routed to other locations by using a diagnostic setting.
+- Microsoft.Storage/storageAccounts
+- Microsoft.Storage/storageAccounts/fileServices
-To collect resource logs, you must create a diagnostic setting. When you create the setting, choose **file** as the type of storage that you want to enable logs for. Then, specify one of the following categories of operations for which you want to collect logs.
+For a list of available metrics for Azure Files, see [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
-| Category | Description |
-|:|:|
-| StorageRead | Read operations on objects. |
-| StorageWrite | Write operations on objects. |
-| StorageDelete | Delete operations on objects. |
+For a list of all Azure Monitor supported metrics, which includes Azure Files, see [Azure Monitor supported metrics](/azure/azure-monitor/essentials/metrics-supported#microsoftstoragestorageaccountsfileservices).
-The **audit** resource log category group allows you to collect the baseline of resource logs that Microsoft deems necessary for auditing your resource. What's collected is dynamic, and Microsoft may change it over time as new resource log categories become available. If you choose the **audit** category group, you can't specify any other resource categories, because the system will decide which logs to collect. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](../../azure-monitor/essentials/diagnostic-settings.md#resource-logs).
+For detailed instructions on how to access and analyze Azure Files metrics such as availability, latency, and utilization, see [Analyze Azure Files metrics using Azure Monitor](analyze-files-metrics.md).
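
If you want to pull one of these metrics from a script rather than the portal, Azure PowerShell can query the same namespaces. The following is a minimal sketch, not the documented procedure: the subscription, resource group, and account names are placeholders, and it assumes the Az.Monitor module is installed.

```powershell
# Resource ID of the file service in a storage account (all segments are placeholders).
$fileServiceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
                 "/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default"

# Retrieve hourly Transactions totals for the last day.
Get-AzMetric -ResourceId $fileServiceId `
             -MetricName "Transactions" `
             -TimeGrain 01:00:00 `
             -AggregationType Total `
             -StartTime (Get-Date).AddDays(-1) `
             -EndTime (Get-Date)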
+
+<a name="analyzing-logs"></a>
+### Analyze logs for Azure Files
+
+You can access resource logs either as a blob in a storage account, as event data, or through Log Analytics queries. For information about how to send resource logs to different destinations, see [Azure resource logs](/azure/azure-monitor/essentials/resource-logs).
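
Resource logs reach those destinations only after a diagnostic setting routes them there. As a rough sketch of that step with Azure PowerShell (assuming Az.Monitor 3.0 or later and placeholder resource IDs), it could look like this:

```powershell
# Placeholder resource IDs; replace with your file service and Log Analytics workspace.
$fileServiceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
                 "/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default"
$workspaceId   = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
                 "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

# Route read, write, and delete operations to the workspace.
$logs = "StorageRead", "StorageWrite", "StorageDelete" |
    ForEach-Object { New-AzDiagnosticSettingLogSettingsObject -Category $_ -Enabled $true }

New-AzDiagnosticSetting -Name "send-file-logs-to-workspace" -ResourceId $fileServiceId `
                        -WorkspaceId $workspaceId -Log $logs
```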
To get the list of SMB and REST operations that are logged, see [Storage logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) and [Azure Files monitoring data reference](storage-files-monitoring-reference.md).
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../../azure-monitor/platform/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, and PowerShell. You can also find links to information about how to create a diagnostic setting by using an Azure Resource Manager template or an Azure Policy definition.
+Log entries are created only if there are requests made against the service endpoint. For example, if a storage account has activity in its file endpoint but not in its table or queue endpoints, only logs that pertain to the Azure Files service are created. Azure Storage logs contain detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Requests are logged on a best-effort basis.
-## Destination limitations
+#### Log authenticated requests
-For general destination limitations, see [Destination limitations](../../azure-monitor/essentials/diagnostic-settings.md#destination-limitations). The following limitations apply only to monitoring Azure Storage accounts.
+ The following types of authenticated requests are logged:
-- You can't send logs to the same storage account that you're monitoring with this setting.
+- Successful requests
+- Failed requests, including timeout, throttling, network, authorization, and other errors
+- Requests that use Kerberos, NTLM or shared access signature (SAS), including failed and successful requests
+- Requests to analytics data (classic log data in the **$logs** container and classic metric data in the **$metric** tables)
- This would lead to recursive logs in which a log entry describes the writing of another log entry. You must create an account or use another existing account to store log information.
+Requests made by the Azure Files service itself, such as log creation or deletion, aren't logged.
-- You can't set a retention policy.
+<!-- ### Sample Kusto queries. Required section. If you have sample Kusto queries for your service, add them after the include. -->
+<!-- Add sample Kusto queries for your service here. -->
+Here are some queries that you can enter in the **Log search** bar to help you monitor your Azure file shares. These queries use the [Kusto Query Language (KQL)](../../azure-monitor/logs/log-query-overview.md).
+
+- View SMB errors over the last week.
+
+ ```Kusto
+ StorageFileLogs
+ | where Protocol == "SMB" and TimeGenerated >= ago(7d) and StatusCode contains "-"
+ | sort by StatusCode
+ ```
+- Create a pie chart of SMB operations over the last week.
+
+ ```Kusto
+ StorageFileLogs
+ | where Protocol == "SMB" and TimeGenerated >= ago(7d)
+ | summarize count() by OperationName
+ | sort by count_ desc
+ | render piechart
+ ```
+
+- View REST errors over the last week.
+
+ ```Kusto
+ StorageFileLogs
+ | where Protocol == "HTTPS" and TimeGenerated >= ago(7d) and StatusText !contains "Success"
+ | sort by StatusText asc
+ ```
+
+- Create a pie chart of REST operations over the last week.
+
+ ```Kusto
+ StorageFileLogs
+ | where Protocol == "HTTPS" and TimeGenerated >= ago(7d)
+ | summarize count() by OperationName
+ | sort by count_ desc
+ | render piechart
+ ```
+
+To view the list of column names and descriptions for Azure Files, see [StorageFileLogs](/azure/azure-monitor/reference/tables/storagefilelogs#columns).
+
+For more information on how to write queries, see [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial).
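
You can also run these queries from a script instead of the portal. This is a sketch using the Az.OperationalInsights module; the workspace GUID is a placeholder, and the query is the SMB errors example from the list above.

```powershell
# Workspace (customer) ID: the GUID shown on the Log Analytics workspace Overview page (placeholder).
$workspaceGuid = "<workspace-guid>"

# Reuse the SMB errors query from the examples above.
$query = @"
StorageFileLogs
| where Protocol == "SMB" and TimeGenerated >= ago(7d) and StatusCode contains "-"
| sort by StatusCode
"@

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceGuid -Query $query
$result.Results | Format-Table TimeGenerated, OperationName, StatusCode
```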
+
+<!-- ANALYSIS SECTION END -->
+
+<!-- ALERTS SECTION START -->
+
+<!-- ## Alerts. Required section. -->
+
+<!-- ### Azure Files alert rules. Required section.
+**MUST HAVE** service-specific alert rules. Include useful alerts on metrics, logs, log conditions, or activity log.
+Fill in the following table with metric and log alerts that would be valuable for your service. Change the format as necessary for readability. You can instead link to an article that discusses your common alerts in detail.
+Ask your PMs if you don't know. This information is the BIGGEST request we get in Azure Monitor, so don't avoid it long term. People don't know what to monitor for best results. Be prescriptive. -->
+
+### Azure Files alert rules
+The following table lists common and recommended alert rules for Azure Files and the proper metric to use for the alert.
+
+> [!TIP]
+> If you create an alert and it's too noisy, adjust the threshold value and alert logic.
+
+| Alert type | Condition | Metric and dimensions |
+|-|-|-|
+|Metric | File share is throttled. | Transactions<br>Dimension name: Response type <br>Dimension name: FileShare (premium file share only) |
+|Metric | File share size is 80% of capacity. | File Capacity<br>Dimension name: FileShare (premium file share only) |
+|Metric | File share egress exceeds 500 GiB in one day. | Egress<br>Dimension name: FileShare (premium file share only) |
+|Metric | High server latency. | Success Server Latency<br>Dimension name: API Name, for example Read and Write API|
+
+For instructions on how to create alerts on throttling, capacity, egress, and high server latency, see [Create monitoring alerts for Azure Files](files-monitoring-alerts.md).
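
As an illustration, the capacity rule in the table could be scripted roughly as follows with Azure PowerShell. This is only a sketch: the threshold assumes a 1-TiB share, the resource and action group IDs are placeholders, and the cmdlets assume the Az.Monitor module.

```powershell
# Placeholder resource IDs.
$fileServiceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
                 "/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default"
$actionGroupId = "<action-group-resource-id>"

# 80% of a 1-TiB share, in bytes; adjust to your provisioned or expected share size.
$threshold = [math]::Round(0.8 * 1TB)

$condition = New-AzMetricAlertRuleV2Criteria -MetricName "FileCapacity" `
                 -TimeAggregation Average -Operator GreaterThan -Threshold $threshold

New-AzMetricAlertRuleV2 -Name "file-share-capacity-80-percent" `
    -ResourceGroupName "<resource-group>" `
    -TargetResourceId $fileServiceId `
    -Condition $condition `
    -WindowSize 01:00:00 -Frequency 00:15:00 `
    -Severity 2 -ActionGroupId $actionGroupId
```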
+
+<!-- ### Advisor recommendations -->
+
+<!-- ALERTS SECTION END -->
+
+## Related content
+<!-- You can change the wording and add more links if useful. -->
+
+Other Azure Files monitoring content:
+- [Azure Files monitoring data reference](storage-files-monitoring-reference.md). A reference of the logs and metrics created by Azure Files.
+- [Understand Azure Files performance](understand-performance.md)
- If you archive logs to a storage account, you can manage the retention policy of a log container by defining a lifecycle management policy. To learn how, see [Optimize costs by automating Azure Blob Storage access tiers](../blobs/lifecycle-management-overview.md).
+Overall Azure Storage monitoring content:
+- [Monitor storage with Azure Monitor Storage insights](../common/storage-insights-overview.md). Get a unified view of storage performance, capacity, and availability.
+- [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md). Move from Storage Analytics metrics to metrics in Azure Monitor.
+- [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/files/toc.json). See common performance issues and guidance about how to troubleshoot them.
+- [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/files/toc.json). See common availability issues and guidance about how to troubleshoot them.
+- [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/files/toc.json). See common issues with connecting clients and how to troubleshoot them.
+- [Troubleshoot ClientOtherErrors](/troubleshoot/azure/azure-storage/files-client-other-errors?toc=/azure/storage/files/toc.json)
- If you send logs to Log Analytics, you can manage the data retention period of Log Analytics at the workspace level or even specify different retention settings by data type. To learn how, see [Change the data retention period](../../azure-monitor/logs/data-retention-archive.md).
+Azure Monitor content:
+- [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource). General details on monitoring Azure resources.
+- [Azure Monitor Metrics overview](/azure/azure-monitor/essentials/data-platform-metrics). The basics of metrics and metric dimensions.
+- [Azure Monitor Logs overview](/azure/azure-monitor/logs/data-platform-logs). The basics of logs and how to collect and analyze them.
+- [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics). A tour of Metrics Explorer.
+- [Overview of Log Analytics in Azure Monitor](/azure/azure-monitor/logs/log-analytics-overview). A tour of Log Analytics.
-## Next steps
+Training modules:
+- [Monitor, diagnose, and troubleshoot your Azure Storage](/training/modules/monitor-diagnose-and-troubleshoot-azure-storage/). Troubleshoot storage account issues, with step-by-step guidance.
-- [Analyze Azure Files metrics using Azure Monitor](analyze-files-metrics.md)
-- [Create monitoring alerts for Azure Files](files-monitoring-alerts.md)
-- [Azure Files monitoring data reference](storage-files-monitoring-reference.md)
-- [Monitor Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md)
-- [Azure Storage metrics migration](../common/storage-metrics-migration.md)
stream-analytics Azure Data Lake Storage Gen1 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/azure-data-lake-storage-gen1-output.md
- Title: Azure Data Lake Storage Gen 1 output from Azure Stream Analytics
-description: This article describes Azure Data Lake Storage Gen 1 as an output option for Azure Stream Analytics.
---- Previously updated : 08/25/2020--
-# Azure Data Lake Storage Gen 1 output from Azure Stream Analytics
-
-Stream Analytics supports [Azure Data Lake Storage Gen 1](../data-lake-store/data-lake-store-overview.md) outputs. Azure Data Lake Storage is an enterprise-wide, hyperscale repository for big data analytic workloads. You can use Data Lake Storage to store data of any size, type, and ingestion speed for operational and exploratory analytics. Stream Analytics needs to be authorized to access Data Lake Storage.
-
-Azure Data Lake Storage output from Stream Analytics is not available in Microsoft Azure operated by 21Vianet and Azure Germany (T-Systems International).
-
-## Output configuration
-
-The following table lists property names and their descriptions to configure your Data Lake Storage Gen 1 output.
-
-| Property name | Description |
-| | |
-| Output alias | A friendly name used in queries to direct the query output to Data Lake Store. |
-| Subscription | The subscription that contains your Azure Data Lake Storage account. |
-| Account name | The name of the Data Lake Store account where you're sending your output. You're presented with a drop-down list of Data Lake Store accounts that are available in your subscription. |
-| Path prefix pattern | The file path that's used to write your files within the specified Data Lake Store account. You can specify one or more instances of the {date} and {time} variables:<br /><ul><li>Example 1: folder1/logs/{date}/{time}</li><li>Example 2: folder1/logs/{date}</li></ul><br />The time stamp of the created folder structure follows UTC and not local time.<br /><br />If the file path pattern doesn't contain a trailing slash (/), the last pattern in the file path is treated as a file name prefix. <br /><br />New files are created in these circumstances:<ul><li>Change in output schema</li><li>External or internal restart of a job</li></ul> |
-| Date format | Optional. If the date token is used in the prefix path, you can select the date format in which your files are organized. Example: YYYY/MM/DD |
-|Time format | Optional. If the time token is used in the prefix path, specify the time format in which your files are organized. Currently the only supported value is HH. |
-| Event serialization format | The serialization format for output data. JSON, CSV, and Avro are supported.|
-| Encoding | If you're using CSV or JSON format, an encoding must be specified. UTF-8 is the only supported encoding format at this time.|
-| Delimiter | Applicable only for CSV serialization. Stream Analytics supports a number of common delimiters for serializing CSV data. Supported values are comma, semicolon, space, tab, and vertical bar.|
-| Format | Applicable only for JSON serialization. **Line separated** specifies that the output is formatted by having each JSON object separated by a new line. If you select **Line separated**, the JSON is read one object at a time. The whole content by itself would not be a valid JSON. **Array** specifies that the output is formatted as an array of JSON objects. This array is closed only when the job stops or Stream Analytics has moved on to the next time window. In general, it's preferable to use line-separated JSON, because it doesn't require any special handling while the output file is still being written to.|
-| Authentication mode | You can authorize access to your Data Lake Storage account using [Managed Identity](stream-analytics-managed-identities-adls.md) (preview) or User token. Once you grant access, you can revoke access by changing the user account password, deleting the Data Lake Storage output for this job, or deleting the Stream Analytics job. |
-
-## Partitioning
-
-For the partition key, use {date} and {time} tokens in the path prefix pattern. Choose a date format, such as YYYY/MM/DD, DD/MM/YYYY, or MM-DD-YYYY. Use HH for the time format. The number of output writers follows the input partitioning for [fully parallelizable queries](stream-analytics-scale-jobs.md).
-
-## Output batch size
-
-For the maximum message size, see [Data Lake Storage limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-lake-storage-limits). To optimize batch size, use up to 4 MB per write operation.
-
-## Next steps
-
-* [Authenticate Stream Analytics to Azure Data Lake Storage Gen1 using managed identities (preview)](stream-analytics-managed-identities-adls.md)
-* [Quickstart: Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md)
stream-analytics No Code Build Power Bi Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-build-power-bi-dashboard.md
Previously updated : 2/17/2023 Last updated : 02/14/2024 # Build real-time dashboard with Power BI dataset produced from Stream Analytics no code editor
This article describes how you can use the no code editor to easily create a Str
## Develop a Stream Analytics job to create Power BI dataset with selected data

1. In the [Azure portal](https://portal.azure.com), locate and select the Azure Event Hubs instance.
-1. Select **Features** > **Process Data** and then select **Start** on the **Build the real-time data dashboard with Power BI** card.
-
- :::image type="content" source="./media/no-code-build-power-bi-dashboard/event-hub-process-data-templates.png" alt-text="Screenshot showing the Filter and ingest to ADLS Gen2 card where you select Start." lightbox="./media/no-code-build-power-bi-dashboard/event-hub-process-data-templates.png" :::
+1. Select **Features** > **Process Data** and then select **Start** on the **Build near real-time data dashboard with Power BI** card.
+ :::image type="content" source="./media/no-code-build-power-bi-dashboard/event-hub-process-data-templates.png" alt-text="Screenshot showing the Process data page of an event hub." lightbox="./media/no-code-build-power-bi-dashboard/event-hub-process-data-templates.png" :::
1. Enter a name for the Stream Analytics job, then select **Create**.

   :::image type="content" source="./media/no-code-build-power-bi-dashboard/create-new-stream-analytics-job.png" alt-text="Screenshot showing where to enter a job name." lightbox="./media/no-code-build-power-bi-dashboard/create-new-stream-analytics-job.png" :::
+1. Specify the **Serialization type** of your data in the Event Hubs window and the **Authentication method** that the job uses to connect to the Event Hubs. Then select **Connect**.
-1. Specify the **Serialization type** of your data in the Event Hubs window and the **Authentication method** that the job will use it to connect to the Event Hubs. Then select **Connect**.
   :::image type="content" source="./media/no-code-build-power-bi-dashboard/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." lightbox="./media/no-code-build-power-bi-dashboard/event-hub-configuration.png" :::
-
1. When the connection is established successfully and you have data streams flowing into your Event Hubs instance, you immediately see two things:
   - Fields that are present in the input data. You can choose **Add field** or select the three dot symbol next to a field to remove, rename, or change its type.
+
   :::image type="content" source="./media/no-code-build-power-bi-dashboard/no-code-schema.png" alt-text="Screenshot showing the Event Hubs field list where you can remove, rename, or change the field type." lightbox="./media/no-code-build-power-bi-dashboard/no-code-schema.png" :::
   - A live sample of incoming data in the **Data preview** table under the diagram view. It automatically refreshes periodically. You can select **Pause streaming preview** to see a static view of the sample input data.
+
   :::image type="content" source="./media/no-code-build-power-bi-dashboard/no-code-sample-input.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/no-code-build-power-bi-dashboard/no-code-sample-input.png" :::
-
-
1. Select the **Manage** tile. In the **Manage fields** configuration panel, choose the fields you want to output. If you want to add all the fields, select **Add all fields**.
- :::image type="content" source="./media/no-code-build-power-bi-dashboard/manage-fields-configuration.png" alt-text="Screenshot that shows the manage field operator configuration." lightbox="./media/no-code-build-power-bi-dashboard/manage-fields-configuration.png" :::
-
+ :::image type="content" source="./media/no-code-build-power-bi-dashboard/manage-fields-configuration.png" alt-text="Screenshot that shows the Manage field operator configuration." lightbox="./media/no-code-build-power-bi-dashboard/manage-fields-configuration.png" :::
1. Select the **Power BI** tile. In the **Power BI** configuration panel, fill in the needed parameters and connect.
   - **Dataset**: the Power BI destination that the Azure Stream Analytics job output data is written to.
   - **Table**: the table name in the dataset that the output data goes to.
This article describes how you can use the no code editor to easily create a Str
- **Output data error handling** – It allows you to specify the behavior you want when a job's output to your destination fails due to data errors. By default, your job retries until the write operation succeeds. You can also choose to drop such output events.

  :::image type="content" source="./media/no-code-build-power-bi-dashboard/no-code-start-job.png" alt-text="Screenshot showing the Start Stream Analytics job options where you can change the output time, set the number of streaming units, and select the Output data error handling options." lightbox="./media/no-code-build-power-bi-dashboard/no-code-start-job.png" :::
-1. After you select **Start**, the job starts running within two minutes, and the metrics will be open in tab section below.
+1. After you select **Start**, the job starts running within two minutes, and the metrics open in the tab section below.
:::image type="content" source="./media/no-code-build-power-bi-dashboard/job-metrics-after-started.png" alt-text="Screenshot that shows the job metrics after it's started." lightbox="./media/no-code-build-power-bi-dashboard/job-metrics-after-started.png" :::
Now, you have the Azure Stream Analytics job running and the data is continuousl
:::image type="content" source="./media/no-code-build-power-bi-dashboard/pbi-dashboard-add-tile-select-dataset.png" alt-text="Screenshot of the pbi dashboard adding tile with selected dataset." lightbox="./media/no-code-build-power-bi-dashboard/pbi-dashboard-add-tile-select-dataset.png" ::: 4. Fill in the tile details, and follow the next step to complete the tile configuration. :::image type="content" source="./media/no-code-build-power-bi-dashboard/pbi-dashboard-add-tile-details.png" alt-text="Screenshot of the pbi dashboard adding tile with configured details." lightbox="./media/no-code-build-power-bi-dashboard/pbi-dashboard-add-tile-details.png" :::
-5. Then, you can adjust its size and get the continuously updated dashboard as below.
+5. Then, you can adjust its size and get the continuously updated dashboard as shown in the following example.
:::image type="content" source="./media/no-code-build-power-bi-dashboard/pbi-dashboard-report.png" alt-text="Screenshot of the pbi dashboard report." lightbox="./media/no-code-build-power-bi-dashboard/pbi-dashboard-report.png" :::
stream-analytics Stream Analytics Define Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-outputs.md
All outputs support batching, but only some support setting the output batch siz
|[Azure Synapse Analytics](azure-synapse-analytics-output.md)|Yes|SQL user auth, </br> Managed Identity|
|[Blob storage and Azure Data Lake Gen 2](blob-storage-azure-data-lake-gen2-output.md)|Yes|Access key, </br> Managed Identity|
|[Azure Cosmos DB](azure-cosmos-db-output.md)|Yes|Access key, </br> Managed Identity|
-|[Azure Data Lake Storage Gen 1](azure-data-lake-storage-gen1-output.md)|Yes|Microsoft Entra user </br> Managed Identity|
+|[Azure Data Lake Storage Gen 2](blob-output-managed-identity.md)|Yes|Microsoft Entra user </br> Managed Identity|
|[Azure Event Hubs](event-hubs-output.md)|Yes, need to set the partition key column in output configuration.|Access key, </br> Managed Identity|
|[Kafka (preview)](kafka-output.md)|Yes, need to set the partition key column in output configuration.|Access key, </br> Managed Identity|
|[Azure Database for PostgreSQL](postgresql-database-output.md)|Yes|Username and password auth|
stream-analytics Stream Analytics Managed Identities Adls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-managed-identities-adls.md
- Title: Authenticate Azure Stream Analytics to Azure Data Lake Storage Gen1
-description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to Azure Data Lake Storage Gen1 output.
---- Previously updated : 03/16/2021--
-# Authenticate Stream Analytics to Azure Data Lake Storage Gen1 using managed identities
-
-Azure Stream Analytics supports managed identity authentication with Azure Data Lake Storage (ADLS) Gen1 output. The identity is a managed application registered in Microsoft Entra ID that represents a given Stream Analytics job, and can be used to authenticate to a targeted resource. Managed identities eliminate the limitations of user-based authentication methods, like needing to reauthenticate due to password changes or user token expirations that occur every 90 days. Additionally, managed identities help with the automation of Stream Analytics job deployments that output to Azure Data Lake Storage Gen1.
-
-This article shows you three ways to enable managed identity for an Azure Stream Analytics job that outputs to an Azure Data Lake Storage Gen1 through the Azure portal, Azure Resource Manager template deployment, and Azure Stream Analytics tools for Visual Studio.
--
-## Azure portal
-
-1. Start by creating a new Stream Analytics job or by opening an existing job in Azure portal. From the menu bar located on the left side of the screen, select **Managed Identity** located under **Configure**.
-
- ![Configure Stream Analytics managed identity](./media/stream-analytics-managed-identities-adls/stream-analytics-managed-identity-preview.png)
-
-2. Select **Use System-assigned Managed Identity** from the window that appears on the right. Click **Save** to a service principal for the identity of the Stream Analytics job in Microsoft Entra ID. The life cycle of the newly created identity will be managed by Azure. When the Stream Analytics job is deleted, the associated identity (that is, the service principal) is automatically deleted by Azure.
-
- When the configuration is saved, the Object ID (OID) of the service principal is listed as the Principal ID as shown below:
-
- ![Stream Analytics service principal ID](./media/stream-analytics-managed-identities-adls/stream-analytics-principal-id.png)
-
- The service principal has the same name as the Stream Analytics job. For example, if the name of your job is **MyASAJob**, the name of the service principal created is also **MyASAJob**.
-
-3. In the output properties window of the ADLS Gen1 output sink, click the Authentication mode drop-down and select **Managed Identity**.
-
-4. Fill out the rest of the properties. To learn more about creating an ADLS output, see [Create a Data lake Store output with stream analytics](../data-lake-store/data-lake-store-stream-analytics.md). When you are finished, click **Save**.
-
- ![Configure Azure Data Lake Storage](./media/stream-analytics-managed-identities-adls/stream-analytics-configure-adls.png)
-
-5. Navigate to the Overview page of your ADLS Gen1 and click on **Data explorer**.
-
- ![Configure Data Lake Storage Overview](./media/stream-analytics-managed-identities-adls/stream-analytics-adls-overview.png)
-
-6. In the Data explorer pane, select **Access** and click **Add** in the Access pane.
-
- ![Configure Data Lake Storage Access](./media/stream-analytics-managed-identities-adls/stream-analytics-adls-access.png)
-
-7. In the text box on the **Select user or group** pane, type the name of the service principal. Remember that the name of the service principal is also the name of the corresponding Stream Analytics job. As you begin typing the principal name, it will appear below the text box. Choose the desired service principal name and click **Select**.
-
- ![Select a service principal name](./media/stream-analytics-managed-identities-adls/stream-analytics-service-principal-name.png)
-
-8. In the **Permissions** pane, check the **Write** and **Execute** permissions and assign it to **This Folder and all children**. Then click **Ok**.
-
- ![Select write and execute permissions](./media/stream-analytics-managed-identities-adls/stream-analytics-select-permissions.png)
-
-9. The service principal is listed under **Assigned Permissions** on the **Access** pane as shown below. You can now go back and start your Stream Analytics job.
-
- ![Stream Analytics access list in portal](./media/stream-analytics-managed-identities-adls/stream-analytics-access-list.png)
-
- To learn more about Data Lake Storage Gen1 file system permissions, see [Access Control in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-access-control.md).
-
-## Stream Analytics tools for Visual Studio
-
-1. In JobConfig.json, set **Use System-assigned Identity** to **True**.
-
- ![Stream Analytics job config managed identities](./media/stream-analytics-managed-identities-adls/adls-mi-jobconfig-vs.png)
-
-2. In the output properties window of the ADLS Gen1 output sink, click the Authentication mode drop-down and select **Managed Identity**.
-
- ![ADLS output managed identities](./media/stream-analytics-managed-identities-adls/adls-mi-output-vs.png)
-
-3. Fill out the rest of the properties, and click **Save**.
-
-4. Click **Submit to Azure** in the query editor.
-
- When you submit the job, the tools do two things:
-
- * Automatically creates a service principal for the identity of the Stream Analytics job in Microsoft Entra ID. The life cycle of the newly created identity will be managed by Azure. When the Stream Analytics job is deleted, the associated identity (that is, the service principal) is automatically deleted by Azure.
-
- * Automatically set **Write** and **Execute** permissions for the ADLS Gen1 prefix path used in the job and assign it to this folder and all children.
-
-5. You can generate the Resource Manager templates with the following property using [Stream Analytics CI.CD NuGet package](https://www.nuget.org/packages/Microsoft.Azure.StreamAnalytics.CICD/) version 1.5.0 or above on a build machine (outside of Visual Studio). Follow the Resource Manager template deployment steps in the next section to get the service principal and grant access to the service principal via PowerShell.
-
-## Resource Manager template deployment
-
-1. You can create a *Microsoft.StreamAnalytics/streamingjobs* resource with a managed identity by including the following property in the resource section of your Resource Manager template:
-
- ```json
- "Identity": {
- "Type": "SystemAssigned",
- },
- ```
-
- This property tells Azure Resource Manager to create and manage the identity for your Azure Stream Analytics job.
-
- **Sample job**
-
- ```json
- {
- "Name": "AsaJobWithIdentity",
- "Type": "Microsoft.StreamAnalytics/streamingjobs",
- "Location": "West US",
- "Identity": {
- "Type": "SystemAssigned",
- },
- "properties": {
- "sku": {
- "name": "standard"
- },
- "outputs": [
- {
- "name": "string",
- "properties":{
- "datasource": {
- "type": "Microsoft.DataLake/Accounts",
- "properties": {
- "accountName": "myDataLakeAccountName",
- "filePathPrefix": "cluster1/logs/{date}/{time}",
- "dateFormat": "YYYY/MM/DD",
- "timeFormat": "HH",
- "authenticationMode": "Msi"
- }
- }
- }
- }
- }
- }
- ```
-
- **Sample job response**
-
- ```json
- {
- "Name": "mySAJob",
- "Type": "Microsoft.StreamAnalytics/streamingjobs",
- "Location": "West US",
- "Identity": {
- "Type": "SystemAssigned",
- "principalId": "GUID",
- "tenantId": "GUID",
- },
- "properties": {
- "sku": {
- "name": "standard"
- },
- }
- }
- ```
-
- Take note of the Principal ID from the job response to grant access to the required ADLS resource.
-
- The **Tenant ID** is the ID of the Microsoft Entra tenant where the service principal is created. The service principal is created in the Azure tenant that is trusted by the subscription.
-
- The **Type** indicates the type of managed identity as explained in types of managed identities. Only the System Assigned type is supported.
-
-2. Provide Access to the service principal using PowerShell. To give access to the service principal via PowerShell, execute the following command:
-
- ```powershell
- Set-AzDataLakeStoreItemAclEntry -AccountName <accountName> -Path <Path> -AceType User -Id <PrinicpalId> -Permissions <Permissions>
- ```
-
- The **PrincipalId** is the Object ID of the service principal and is listed on the portal screen once the service principal is created. If you created the job using a Resource Manager template deployment, the Object ID is listed in the Identity property of the job response.
-
- **Example**
-
- ```powershell
- PS > Set-AzDataLakeStoreItemAclEntry -AccountName "adlsmsidemo" -Path / -AceType
- User -Id 14c6fd67-d9f5-4680-a394-cd7df1f9bacf -Permissions WriteExecute
- ```
-
- To learn more about the above PowerShell command, refer to the [Set-AzDataLakeStoreItemAclEntry](/powershell/module/az.datalakestore/set-azdatalakestoreitemaclentry) documentation.
-
-## Remove Managed Identity
-
-The Managed Identity created for a Stream Analytics job is deleted only when the job is deleted. There is no way to delete the Managed Identity without deleting the job. If you no longer want to use the Managed Identity, you can change the authentication method for the output. The Managed Identity will continue to exist until the job is deleted, and will be used if you decide to used Managed Identity authentication again.
-
-## Limitations
-This feature doesn't support the following:
-
-1. **Multi-tenant access**: The Service principal created for a given Stream Analytics job will reside on the Microsoft Entra tenant on which the job was created, and cannot be used against a resource that resides on a different Microsoft Entra tenant. Therefore, you can only use MSI on ADLS Gen 1 resources that are within the same Microsoft Entra tenant as your Azure Stream Analytics job.
-
-2. **[User Assigned Identity](../active-directory/managed-identities-azure-resources/overview.md)**: is not supported. This means the user is not able to enter their own service principal to be used by their Stream Analytics job. The service principal is generated by Azure Stream Analytics.
-
-## Next steps
-
-* [Create a Data lake Store output with stream analytics](../data-lake-store/data-lake-store-stream-analytics.md)
-* [Test Stream Analytics queries locally with Visual Studio](stream-analytics-vs-tools-local-run.md)
-* [Test live data locally using Azure Stream Analytics tools for Visual Studio](stream-analytics-live-data-local-testing.md)
stream-analytics Stream Analytics Tools For Visual Studio Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-tools-for-visual-studio-cicd.md
The default parameters in the parameters.json file are from the settings in your
```

Learn more about how to [deploy with a Resource Manager template file and Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md). Learn more about how to [use an object as a parameter in a Resource Manager template](/azure/architecture/guide/azure-resource-manager/advanced-templates/objects-as-parameters).
-To use Managed Identity for Azure Data Lake Store Gen1 as output sink, you need to provide Access to the service principal using PowerShell before deploying to Azure. Learn more about how to [deploy ADLS Gen1 with Managed Identity with Resource Manager template](stream-analytics-managed-identities-adls.md#resource-manager-template-deployment).
+To use Managed Identity for Azure Data Lake Storage Gen2 as an output sink, you need to provide access to the service principal using PowerShell before deploying to Azure. Learn more about how to [deploy ADLS Gen2 with Managed Identity with a Resource Manager template](blob-output-managed-identity.md#azure-resource-manager-deployment).
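
As a sketch of that pre-deployment step, granting the job's system-assigned identity data access on the target storage account with Azure PowerShell could look like the following; the principal ID, subscription, and account names are placeholders, and the role shown is one common choice rather than the only option.

```powershell
# Object (principal) ID of the Stream Analytics job's system-assigned managed identity,
# and the resource ID of the target ADLS Gen2 storage account (both placeholders).
$principalId      = "<job-principal-id>"
$storageAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
                    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"

# Grant the identity permission to write blob data in the account.
New-AzRoleAssignment -ObjectId $principalId `
                     -RoleDefinitionName "Storage Blob Data Contributor" `
                     -Scope $storageAccountId
```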
## Command-line tool
synapse-analytics Business Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/business-intelligence.md
Title: Business Intelligence partners
-description: Lists of third-party business intelligence partners with solutions that support Azure Synapse Analytics.
+description: Lists of business intelligence partners with solutions that support Azure Synapse Analytics.
Previously updated : 06/14/2023 Last updated : 02/15/2024
To create your data warehouse solution, you can choose from different kinds of i
| Partner | Description | Website/Product link | | - | -- | -- |
-| :::image type="content" source="./media/business-intelligence/atscale-logo.png" alt-text="The logo of AtScale."::: |**AtScale**<br>AtScale provides a single, secured, and governed workspace for distributed data. AtScale's Cloud OLAP, Autonomous Data Engineering&trade;, and Universal Semantic Layer&trade; powers business intelligence results for faster, more accurate business decisions. |[AtScale](https://www.atscale.com/solutions/atscale-and-microsoft-azure/)<br> |
-| :::image type="content" source="./media/business-intelligence/birst_logo.png" alt-text="The logo of Birst."::: |**Birst**<br>Birst connects the entire organization through a network of interwoven virtualized BI instances on-top of a shared common analytical fabric|[Birst](https://www.infor.com/solutions/advanced-analytics/business-intelligence/birst)<br> |
-| :::image type="content" source="./media/business-intelligence/count-logo.png" alt-text="The logo of Count."::: |**Count**<br> Count is the next generation SQL editor, giving you the fastest way to explore and share your data with your team. At Count's core is a data notebook built for SQL, allowing you to structure your code, iterate quickly and stay in flow. Visualize your results instantly or customize them to build beautifully detailed charts in just a few selects. Instantly share anything from one-off queries to full interactive data stories built off any of your Azure Synapse data sources. |[Count](https://count.co/)<br>|
-| :::image type="content" source="./media/business-intelligence/dremio-logo.png" alt-text="The logo of Dremio."::: |**Dremio**<br> Analysts and data scientists can discover, explore and curate data using Dremio's intuitive UI, while IT maintains governance and security. Dremio makes it easy to join ADLS with Blob Storage, Azure SQL Database, Azure Synapse SQL, HDInsight, and more. With Dremio, Power BI analysts can search for new datasets stored on ADLS, immediately access that data in Power BI with no preparation by IT, create visualizations, and iteratively refine reports in real-time. And analysts can create new reports that combine data between ADLS and other databases. |[Dremio](https://www.dremio.com/azure/)<br>[Dremio Community Edition in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dremiocorporation.dremio_ce)<br> |
-| :::image type="content" source="./media/business-intelligence/dundas_software_logo.png" alt-text="The logo of Dundas."::: |**Dundas BI**<br>Dundas Data Visualization is a leading, global provider of Business Intelligence and Data Visualization software. Dundas dashboards, reporting, and visual data analytics provide seamless integration into business applications, enabling better decisions and faster insights.|[Dundas](https://www.dundas.com/dundas-bi)<br> |
-| :::image type="content" source="./media/business-intelligence/cognos_analytics_logo.png" alt-text="The logo of IBM Cognos."::: |**IBM Cognos Analytics**<br>Cognos Analytics includes self-service capabilities that make it simple, clear, and easy to use, whether you're an experienced business analyst examining a vast supply chain, or a marketer optimizing a campaign. Cognos Analytics uses AI and other capabilities to guide data exploration. It makes it easier for users to get the answers they need|[IBM](https://www.ibm.com/products/cognos-analytics)<br>|
-| :::image type="content" source="./media/business-intelligence/informationbuilders_logo.png" alt-text="The logo of Information Builders."::: |**Information Builders (WebFOCUS)**<br>WebFOCUS business intelligence helps companies use data more strategically across and beyond the enterprise. It allows users and administrators to rapidly create dashboards that combine content from multiple data sources and formats. It also provides robust security and comprehensive governance that enables seamless and secure sharing of any BI and analytics content|[Information Builders](https://www.ibi.com/)<br> |
-| :::image type="content" source="./media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Analytics**<br>Together, Logi Analytics enables your organization to collect, analyze, and immediately act on the largest and most diverse data sets in the world. |[Logi Analytics](https://insightsoftware.com/logi-analytics/)<br>|
-| :::image type="content" source="./media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Report**<br>Logi Report is an embeddable BI solution for the enterprise. The solution offers capabilities such as report creation, dashboards, and data analysis on cloud, big data, and transactional data sources. By visualizing data, you can conduct your own reporting and data discovery for agile, on-the-fly decision making. |[Logi Report](https://insightsoftware.com/logi-analytics/logi-report/)<br> |
-| :::image type="content" source="./media/business-intelligence/looker_logo.png" alt-text="The logo of Looker."::: |**Looker for Business Intelligence**<br>Looker gives everyone in your company the ability to explore and understand the data that drives your business. Looker also gives the data analyst a flexible and reusable modeling layer to control and curate that data. Companies have fundamentally transformed their culture using Looker as the catalyst.|[Looker for BI](https://looker.com/)<br> [Looker Analytics Platform Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aad.lookeranalyticsplatform)<br> |
-| :::image type="content" source="./media/business-intelligence/microstrategy_logo.png" alt-text="The logo of Microstrategy."::: |**MicroStrategy**<br>The MicroStrategy platform offers a complete set of business intelligence and analytics capabilities that enable organizations to get value from their business data. MicroStrategy's powerful analytical engine, comprehensive toolsets, variety of data connectors, and open architecture ensure you have everything you need to extend access to analytics across every team.|[MicroStrategy](https://www.microstrategy.com/enterprise-analytics)<br> [MicroStrategy Cloud in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microstrategy.microstrategy_cloud)<br> |
-| :::image type="content" source="./media/business-intelligence/mode-logo.png" alt-text="The logo of Mode Analytics."::: |**Mode**<br>Mode is a modern analytics and BI solution that helps teams make decisions through unreasonably fast and unexpectedly delightful data analysis. Data teams move faster through a preferred workflow that combines SQL, Python, R, and visual analysis, while stakeholders work alongside them exploring and sharing data on their own. With data more accessible to everyone, we shorten the distance from questions to answers and help businesses make better decisions, faster.|[Mode](https://mode.com/)<br> |
-| :::image type="content" source="./media/business-intelligence/pyramid-logo.png" alt-text="The logo of Pyramid Analytics."::: |**Pyramid Analytics**<br>Pyramid 2020 is the trusted analytics platform that connects your teams, drives confident decisions, and produces winning results. Business users can do high-end, cloud-scale analytics and data science without IT help ΓÇö on any browser or device. Data scientists can take advantage of machine learning algorithms and scripting to understand difficult business problems. Power users can prepare and model their own data to create illuminating analytic content. Non-technical users can benefit from stunning visualizations and guided analytic presentations. It's the next generation of self-service analytics with governance. |[Pyramid Analytics](https://www.pyramidanalytics.com/resources/analyst-reports/)<br> [Pyramid Analytics in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/pyramidanalytics.pyramid2020-25-102) |
-| :::image type="content" source="./media/business-intelligence/qlik_logo.png" alt-text="The logo of Qlik."::: |**Qlik Sense**<br>Drive insight discovery with the data visualization app that anyone can use. With Qlik Sense, everyone in your organization can easily create flexible, interactive visualizations and make meaningful decisions. |[Qlik Sense](https://www.qlik.com/us/products/qlik-sense)<br> [Qlik Sense in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik-sense) |
-| :::image type="content" source="./media/business-intelligence/sas-logo.jpg" alt-text="The logo of SAS."::: |**SAS&reg; Viya&reg;**<br>SAS&reg; Viya&reg; is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone ΓÇô from data scientists to business users ΓÇô to collaborate and realize innovative results faster. Using open source or SAS models, SAS&reg; Viya&reg; can be accessed through APIs or interactive interfaces to transform raw data into actions. |[SAS&reg; Viya&reg;](https://www.sas.com/microsoft)<br> [SAS&reg; Viya&reg; in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br>|
-| :::image type="content" source="./media/business-intelligence/sisense_logo.png" alt-text="The logo of SiSense."::: |**SiSense**<br>SiSense is a full-stack Business Intelligence software that comes with tools that a business needs to analyze and visualize data: a high-performance analytical database, the ability to join multiple sources, simple data extraction (ETL), and web-based data visualization. Start to analyze and visualize large data sets with SiSense BI and Analytics today. |[SiSense](https://www.sisense.com/)<br> |
-| :::image type="content" source="./media/business-intelligence/tableau_sparkle_logo.png" alt-text="The logo of Tableau."::: |**Tableau**<br>Tableau's self-service analytics help anyone see and understand their data, across many kinds of data from flat files to databases. Tableau has a native, optimized connector to Synapse SQL pool that supports both live data and in-memory analytics. |[Tableau](https://www.tableau.com/)<br> [Tableau Server in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tableau.tableau-server)<br>|
-| :::image type="content" source="./media/business-intelligence/targit_logo.png" alt-text="The logo of Targit."::: |**Targit (Decision Suite)**<br>Targit Decision Suite provides a BI platform that delivers real-time dashboards, self-service analytics, user-friendly reporting, stunning mobile capabilities, and simple data-discovery technology. Everything in a single, cohesive solution. Targit gives companies the courage to act. |[Targit](https://www.targit.com/targit-decision-suite/analytics)<br> [Targit in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/targit.targit-decision-suite)<br> |
-| :::image type="content" source="./media/business-intelligence/thoughtspot-logo.png" alt-text="The logo of ThoughtSpot."::: |**ThoughtSpot**<br>Use search to get granular insights from billions of rows, or let AI uncover insights from questions you might not have thought about. ThoughtSpot helps businesspeople find insights hidden in their company data in seconds. Use search to analyze your data and get automated insights when you need them.|[ThoughtSpot](https://www.thoughtspot.com)<br>|
-| :::image type="content" source="./media/business-intelligence/yellowfin_logo.png" alt-text="The logo of Yellowfin."::: |**Yellowfin**<br>Yellowfin is a top rated Cloud BI vendor for _ad hoc_ Reporting and Dashboards by BARC; The BI Survey. Connect to a dedicated SQL pool in Azure Synapse Analytics, then create and share beautiful reports and dashboards with award winning collaborative BI and location intelligence features. |[Yellowfin](https://www.yellowfinbi.com/)<br> [Yellowfin in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/yellowfininternationalptyltd1616363974066.yellowfin-for-azure-byol-v2) |
+| :::image type="content" source="media/business-intelligence/atscale-logo.png" alt-text="The logo of AtScale."::: |**AtScale**<br>AtScale provides a single, secured, and governed workspace for distributed data. AtScale's Cloud OLAP, Autonomous Data Engineering&trade;, and Universal Semantic Layer&trade; powers business intelligence results for faster, more accurate business decisions. |[AtScale](https://www.atscale.com/solutions/atscale-and-microsoft-azure/)<br> |
+| :::image type="content" source="media/business-intelligence/birst_logo.png" alt-text="The logo of Birst."::: |**Birst**<br>Birst connects the entire organization through a network of interwoven virtualized BI instances on-top of a shared common analytical fabric|[Birst](https://www.infor.com/solutions/advanced-analytics/business-intelligence/birst)<br> |
+| :::image type="content" source="media/business-intelligence/count-logo.png" alt-text="The logo of Count."::: |**Count**<br> Count is the next generation SQL editor, giving you the fastest way to explore and share your data with your team. At Count's core is a data notebook built for SQL, allowing you to structure your code, iterate quickly and stay in flow. Visualize your results instantly or customize them to build beautifully detailed charts in just a few selects. Instantly share anything from one-off queries to full interactive data stories built off any of your Azure Synapse data sources. |[Count](https://count.co/)<br>|
+| :::image type="content" source="media/business-intelligence/dremio-logo.png" alt-text="The logo of Dremio."::: |**Dremio**<br> Analysts and data scientists can discover, explore, and curate data using Dremio's intuitive UI, while IT maintains governance and security. Dremio makes it easy to join ADLS with Blob Storage, Azure SQL Database, Azure Synapse SQL, HDInsight, and more. With Dremio, Power BI analysts can search for new datasets stored on ADLS, immediately access that data in Power BI with no preparation by IT, create visualizations, and iteratively refine reports in real-time. And analysts can create new reports that combine data between ADLS and other databases. |[Dremio](https://www.dremio.com/azure/)<br>[Dremio Community Edition in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dremiocorporation.dremio_ce)<br> |
+| :::image type="content" source="media/business-intelligence/dundas_software_logo.png" alt-text="The logo of Dundas."::: |**Dundas BI**<br>Dundas Data Visualization is a leading, global provider of Business Intelligence and Data Visualization software. Dundas dashboards, reporting, and visual data analytics provide seamless integration into business applications, enabling better decisions and faster insights.|[Dundas](https://www.dundas.com/dundas-bi)<br> |
+| :::image type="content" source="media/business-intelligence/cognos_analytics_logo.png" alt-text="The logo of IBM Cognos."::: |**IBM Cognos Analytics**<br>Cognos Analytics includes self-service capabilities that make it simple, clear, and easy to use, whether you're an experienced business analyst examining a vast supply chain, or a marketer optimizing a campaign. Cognos Analytics uses AI and other capabilities to guide data exploration. It makes it easier for users to get the answers they need|[IBM](https://www.ibm.com/products/cognos-analytics)<br>|
+| :::image type="content" source="media/business-intelligence/informationbuilders_logo.png" alt-text="The logo of Information Builders."::: |**Information Builders (WebFOCUS)**<br>WebFOCUS business intelligence helps companies use data more strategically across and beyond the enterprise. It allows users and administrators to rapidly create dashboards that combine content from multiple data sources and formats. It also provides robust security and comprehensive governance that enables seamless and secure sharing of any BI and analytics content|[Information Builders](https://www.ibi.com/)<br> |
+| :::image type="content" source="media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Analytics**<br>Together, Logi Analytics enables your organization to collect, analyze, and immediately act on the largest and most diverse data sets in the world. |[Logi Analytics](https://insightsoftware.com/logi-analytics/)<br>|
+| :::image type="content" source="media/business-intelligence/logianalytics_logo.png" alt-text="The logo of LogiAnalytics."::: |**Logi Report**<br>Logi Report is an embeddable BI solution for the enterprise. The solution offers capabilities such as report creation, dashboards, and data analysis on cloud, big data, and transactional data sources. By visualizing data, you can conduct your own reporting and data discovery for agile, on-the-fly decision making. |[Logi Report](https://insightsoftware.com/logi-analytics/logi-report/)<br> |
+| :::image type="content" source="media/business-intelligence/looker_logo.png" alt-text="The logo of Looker."::: |**Looker for Business Intelligence**<br>Looker gives everyone in your company the ability to explore and understand the data that drives your business. Looker also gives the data analyst a flexible and reusable modeling layer to control and curate that data. Companies have fundamentally transformed their culture using Looker as the catalyst.|[Looker for BI](https://looker.com/)<br> [Looker Analytics Platform Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aad.lookeranalyticsplatform)<br> |
+| :::image type="content" source="media/business-intelligence/microstrategy_logo.png" alt-text="The logo of Microstrategy."::: |**MicroStrategy**<br>The MicroStrategy platform offers a complete set of business intelligence and analytics capabilities that enable organizations to get value from their business data. MicroStrategy's powerful analytical engine, comprehensive toolsets, variety of data connectors, and open architecture ensure you have everything you need to extend access to analytics across every team.|[MicroStrategy](https://www.microstrategy.com/enterprise-analytics)<br> [MicroStrategy Cloud in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microstrategy.microstrategy_cloud)<br> |
+| :::image type="content" source="media/business-intelligence/mode-logo.png" alt-text="The logo of Mode Analytics."::: |**Mode**<br>Mode is a modern analytics and BI solution that helps teams make decisions through unreasonably fast and unexpectedly delightful data analysis. Data teams move faster through a preferred workflow that combines SQL, Python, R, and visual analysis, while stakeholders work alongside them exploring and sharing data on their own. With data more accessible to everyone, we shorten the distance from questions to answers and help businesses make better decisions, faster.|[Mode](https://mode.com/)<br> |
+| :::image type="content" source="media/business-intelligence/pyramid-logo.png" alt-text="The logo of Pyramid Analytics."::: |**Pyramid Analytics**<br>Pyramid 2020 is the trusted analytics platform that connects your teams, drives confident decisions, and produces winning results. Business users can do high-end, cloud-scale analytics and data science without IT help, on any browser or device. Data scientists can take advantage of machine learning algorithms and scripting to understand difficult business problems. Power users can prepare and model their own data to create illuminating analytic content. Nontechnical users can benefit from stunning visualizations and guided analytic presentations. It's the next generation of self-service analytics with governance. |[Pyramid Analytics](https://www.pyramidanalytics.com/resources/analyst-reports/)<br> [Pyramid Analytics in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/pyramidanalytics.pyramid2020-25-102) |
+| :::image type="content" source="media/business-intelligence/qlik_logo.png" alt-text="The logo of Qlik."::: |**Qlik Sense**<br>Drive insight discovery with the data visualization app that anyone can use. With Qlik Sense, everyone in your organization can easily create flexible, interactive visualizations and make meaningful decisions. |[Qlik Sense](https://www.qlik.com/us/products/qlik-sense)<br> [Qlik Sense in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik-sense) |
+| :::image type="content" source="media/business-intelligence/sas-logo.jpg" alt-text="The logo of SAS."::: |**SAS&reg; Viya&reg;**<br>SAS&reg; Viya&reg; is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone, from data scientists to business users, to collaborate and realize innovative results faster. Using open source or SAS models, SAS&reg; Viya&reg; can be accessed through APIs or interactive interfaces to transform raw data into actions. |[SAS&reg; Viya&reg;](https://www.sas.com/microsoft)<br> [SAS&reg; Viya&reg; in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br>|
+| :::image type="content" source="media/business-intelligence/sisense_logo.png" alt-text="The logo of SiSense."::: |**SiSense**<br>SiSense is a full-stack Business Intelligence software that comes with tools that a business needs to analyze and visualize data: a high-performance analytical database, the ability to join multiple sources, simple data extraction (ETL), and web-based data visualization. Start to analyze and visualize large data sets with SiSense BI and Analytics today. |[SiSense](https://www.sisense.com/)<br> |
+| :::image type="content" source="media/business-intelligence/tableau_sparkle_logo.png" alt-text="The logo of Tableau."::: |**Tableau**<br>Tableau's self-service analytics help anyone see and understand their data, across many kinds of data from flat files to databases. Tableau has a native, optimized connector to Synapse SQL pool that supports both live data and in-memory analytics. |[Tableau](https://www.tableau.com/)<br> [Tableau Server in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tableau.tableau-server)<br>|
+| :::image type="content" source="media/business-intelligence/targit_logo.png" alt-text="The logo of Targit."::: |**Targit (Decision Suite)**<br>Targit Decision Suite provides a BI platform that delivers real-time dashboards, self-service analytics, user-friendly reporting, stunning mobile capabilities, and simple data-discovery technology. Everything in a single, cohesive solution. Targit gives companies the courage to act. |[Targit](https://www.targit.com/targit-decision-suite/analytics)<br> [Targit in the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/targit.targit-decision-suite)<br> |
+| :::image type="content" source="media/business-intelligence/thoughtspot-logo.png" alt-text="The logo of ThoughtSpot."::: |**ThoughtSpot**<br>Use search to get granular insights from billions of rows, or let AI uncover insights from questions you might not have thought about. ThoughtSpot helps businesspeople find insights hidden in their company data in seconds. Use search to analyze your data and get automated insights when you need them.|[ThoughtSpot](https://www.thoughtspot.com)<br>|
+| :::image type="content" source="media/business-intelligence/yellowfin_logo.png" alt-text="The logo of Yellowfin."::: |**Yellowfin**<br>Yellowfin is a top rated Cloud BI vendor for _ad hoc_ Reporting and Dashboards by BARC; The BI Survey. Connect to a dedicated SQL pool in Azure Synapse Analytics, then create and share beautiful reports and dashboards with award winning collaborative BI and location intelligence features. |[Yellowfin](https://www.yellowfinbi.com/) |
-## Next steps
+## Related content
- To learn more about some of our other partners, see [Data Integration partners](data-integration.md), [Data Management partners](data-management.md), and [Machine Learning and AI partners](machine-learning-ai.md).
synapse-analytics Column Level Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/column-level-security.md
Last updated 09/19/2023
-tags: azure-synapse
# Column-level security
synapse-analytics Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-arm-template.md
-tags: azure-resource-manager
Last updated 06/09/2020
synapse-analytics Release Notes 10 0 10106 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/release-notes-10-0-10106-0.md
Last updated 3/24/2022
-tags: azure-synapse
# Dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics release notes
synapse-analytics Sql Data Warehouse Overview Manage Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-manage-security.md
Last updated 04/17/2018
-tags: azure-synapse
# Secure a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Striim Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/striim-quickstart.md
Title: Striim quick start
description: Get started quickly with Striim and Azure Synapse Analytics. Previously updated : 10/12/2018 Last updated : 02/15/2024
This quickstart assumes that you already have an instance of Azure Synapse Analytics.
-Search for Striim in the Azure Marketplace, and select the Striim for Data Integration to Azure Synapse Analytics (Staged) option
+1. Search for Striim in the Azure Marketplace, and select the Striim for Data Integration to Azure Synapse Analytics (Staged) option.
-![Install Striim][install]
+ ![Install Striim][install]
-Configure the Striim VM with specified properties, noting down the Striim cluster name, password, and admin password
+1. Configure the Striim Azure Virtual Machine (VM) with specified properties, noting down the Striim cluster name, password, and admin password.
-![Configure Striim][configure]
+ ![Configure Striim][configure]
-Once deployed, click on \<VM Name>-masternode in the Azure portal, click Connect, and copy the Login using VM local account
+1. Once deployed, select `<VM Name>-masternode` in the Azure portal, select **Connect**, and copy the **Login using VM local account** command.
-![Connect Striim to Azure Synapse Analytics][connect]
+ ![Connect Striim to Azure Synapse Analytics][connect]
-Download the [Microsoft JDBC Driver 4.2 for SQL Server](https://www.microsoft.com/download/details.aspx?id=54671) file to your local machine.
+1. Download the [Microsoft JDBC Driver for SQL Server](/sql/connect/jdbc/microsoft-jdbc-driver-for-sql-server-support-matrix). Use the [latest supported version specified by Striim](https://www.striim.com/docs/). Install it on your local machine.
-Open a command-line window, and change directories to where you downloaded the JDBC driver. SCP the driver file to your Striim VM, getting the address and password from the Azure portal.
+1. Open a command-line window, and change directories to where you downloaded the JDBC driver. SCP the driver file to your Striim VM, getting the address and password from the Azure portal.
-![Copy driver file to your VM][copy-jar]
+ ![Copy driver file to your VM][copy-jar]
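A minimal sketch of that copy step, run from a PowerShell prompt with the OpenSSH client available. The account name and public IP are placeholders from your own deployment, and the JAR file name assumes the driver version that's moved into Striim's lib directory later in these steps:

```powershell
# Copy the JDBC driver JAR to the Striim master node (user name and IP are placeholders).
scp .\sqljdbc42.jar striimadmin@<vm-public-ip>:/tmp
```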
-Open another command-line window, or use an ssh utility to ssh into the Striim cluster.
+1. Open another command-line window, or use an ssh utility to ssh into the Striim cluster.
-![SSH into the cluster][ssh]
+ ![SSH into the cluster][ssh]
-Execute the following commands to move the file into Striim's lib directory, and start and stop the server.
+1. Execute the following commands to move the file into Striim's lib directory, and then stop and restart the Striim services.
- 1. sudo su
- 2. cd /tmp
- 3. mv sqljdbc42.jar /opt/striim/lib
- 4. systemctl stop striim-node
- 5. systemctl stop striim-dbms
- 6. systemctl start striim-dbms
- 7. systemctl start striim-node
+ 1. `sudo su`
+ 1. `cd /tmp`
+ 1. `mv sqljdbc42.jar /opt/striim/lib`
+ 1. `systemctl stop striim-node`
+ 1. `systemctl stop striim-dbms`
+ 1. `systemctl start striim-dbms`
+ 1. `systemctl start striim-node`
-![Start the Striim cluster][start-striim]
+ ![Start the Striim cluster][start-striim]
-Now, open your favorite browser and navigate to \<DNS Name>:9080
+1. Now, open your favorite browser and navigate to `<DNS Name>:9080`.
-![Navigate to the login screen][navigate]
+ ![Navigate to the login screen][navigate]
-Log in with the username and the password you set up in the Azure portal, and select your preferred wizard to get started, or go to the Apps page to start using the drag and drop UI
+1. Sign in with the username and the password you set up in the Azure portal, and select your preferred wizard to get started, or go to the Apps page to start using the drag and drop UI.
-![Log in with server credentials][login]
+ ![Log in with server credentials][login]
+## Related content
+- [Blog: Enabling real-time data warehousing with Azure SQL Data Warehouse](https://azure.microsoft.com/blog/enabling-real-time-data-warehousing-with-azure-sql-data-warehouse/)
+- [Blog: Announcing Striim Cloud integration with Azure Synapse Analytics for continuous data integration](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-striim-cloud-integration-with-azure-synapse-analytics/ba-p/3593753)
[install]: ./media/striim-quickstart/install.png
[configure]: ./media/striim-quickstart/configure.png
Log in with the username and the password you set up in the Azure portal, and se
[ssh]:./media/striim-quickstart/ssh.png
[start-striim]:./media/striim-quickstart/start-striim.png
[navigate]:./media/striim-quickstart/navigate.png
-[login]:./media/striim-quickstart/login.png
+[login]:./media/striim-quickstart/login.png
virtual-desktop Drain Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/drain-mode.md
Previously updated : 04/14/2021 Last updated : 02/09/2024
Drain mode isolates a session host when you want to apply patches and do maintenance without disrupting user sessions. When isolated, the session host won't accept new user sessions. Any new connections will be redirected to the next available session host. Existing connections in the session host will keep working until the user signs out or the administrator ends the session. When the session host is in drain mode, admins can also remotely connect to the server without going through the Azure Virtual Desktop service. You can apply this setting to both pooled and personal desktops.
-## Set drain mode using the Azure portal
+## Prerequisites
+
+If you're using either the Azure portal or PowerShell method, you'll need the following things:
+
+- A host pool with at least one session host.
+- An Azure account assigned the [Desktop Virtualization Session Host Operator](rbac.md#desktop-virtualization-session-host-operator) role.
+- If you want to use Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed; a short installation sketch follows this list. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
++
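If you go the local PowerShell route, a minimal sketch of getting the module in place and signing in:

```powershell
# Install the Azure Virtual Desktop module for the current user, then sign in to Azure.
Install-Module -Name Az.DesktopVirtualization -Scope CurrentUser
Connect-AzAccount
```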
+## Enable drain mode
+
+Here's how to enable drain mode using the Azure portal and PowerShell.
+
+### [Portal](#tab/portal)
To turn on drain mode in the Azure portal:
-1. Open the Azure portal and go to the host pool you want to isolate.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Enter **Azure Virtual Desktop** into the search bar.
-2. In the navigation menu, select **Session hosts**.
+1. Under **Services**, select **Azure Virtual Desktop**.
-3. Next, select the hosts you want to turn on drain mode for, then select **Turn drain mode on**.
+1. On the Azure Virtual Desktop page, go to the menu on the left side of the window and select **Host pools**.
-4. To turn off drain mode, select the host pools that have drain mode turned on, then select **Turn drain mode off**.
+1. Select the host pool you want to isolate.
-## Set drain mode using PowerShell
+1. In the navigation menu, select **Session hosts**.
+
+1. Next, select the hosts you want to turn on drain mode for, then select **Turn drain mode on**.
+
+1. To turn off drain mode, select the session hosts that have drain mode turned on, then select **Turn drain mode off**.
+
+### [PowerShell](#tab/powershell)
You can set drain mode in PowerShell with the *AllowNewSessions* parameter, which is part of the [Update-AzWvdSessionhost](/powershell/module/az.desktopvirtualization/update-azwvdsessionhost) command.
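Before changing anything, you can list the current drain-mode state of each session host with `Get-AzWvdSessionHost`. This is a minimal sketch with placeholder resource names; the property names come from the module's session host object and are worth confirming against your installed version:

```powershell
# Show each session host in a host pool and whether it currently accepts new sessions.
# AllowNewSession = False means drain mode is already on for that host.
$params = @{
    ResourceGroupName = "<resourceGroupName>"
    HostPoolName      = "<hostpoolname>"
}

Get-AzWvdSessionHost @params | ForEach-Object {
    [PSCustomObject]@{
        SessionHost     = ($_.Name -split '/')[-1]   # Name is returned as "<hostpoolname>/<hostname>"
        AllowNewSession = $_.AllowNewSession
    }
}
```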
-Run this cmdlet to enable drain mode:
+
+2. Run this cmdlet to enable drain mode:
```powershell
-Update-AzWvdSessionHost -ResourceGroupName <resourceGroupName> -HostPoolName <hostpoolname> -Name <hostname> -AllowNewSession:$False
+$params = @{
+ ResourceGroupName = "<resourceGroupName>"
+ HostPoolName = "<hostpoolname>"
+ Name = "<hostname>"
+ AllowNewSession = $False
+}
+
+Update-AzWvdSessionHost @params
```
-Run this cmdlet to disable drain mode:
+3. Run this cmdlet to disable drain mode:
```powershell
-Update-AzWvdSessionHost -ResourceGroupName <resourceGroupName> -HostPoolName <hostpoolname> -Name <hostname> -AllowNewSession:$True
+$params = @{
+ ResourceGroupName = "<resourceGroupName>"
+ HostPoolName = "<hostpoolname>"
+ Name = "<hostname>"
+ AllowNewSession = $True
+}
+
+Update-AzWvdSessionHost @params
```

>[!IMPORTANT]
>You'll need to run this command for every session host you're applying the setting to.

## Next steps

If you want to learn more about the Azure portal for Azure Virtual Desktop, check out [our tutorials](create-host-pools-azure-marketplace.md). If you're already familiar with the basics, check out some of the other features you can use with the Azure portal, such as [MSIX app attach](app-attach-azure-portal.md) and [Azure Advisor](../advisor/advisor-overview.md).
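Because the cmdlet in the PowerShell steps above must run once per session host, a loop is convenient for larger host pools. A minimal sketch, again with placeholder names and assuming the Az.DesktopVirtualization module:

```powershell
# Turn drain mode on for every session host in a host pool; pass $True to turn it back off.
$rg   = "<resourceGroupName>"
$pool = "<hostpoolname>"

Get-AzWvdSessionHost -ResourceGroupName $rg -HostPoolName $pool | ForEach-Object {
    $hostName = ($_.Name -split '/')[-1]
    Update-AzWvdSessionHost -ResourceGroupName $rg -HostPoolName $pool -Name $hostName -AllowNewSession:$False
}
```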
virtual-desktop Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rbac.md
description: An overview of built-in Azure RBAC roles for Azure Virtual Desktop
Previously updated : 01/23/2024 Last updated : 01/25/2024

# Built-in Azure RBAC roles for Azure Virtual Desktop
The Desktop Virtualization Power On Contributor role is used to allow the Azure
| Action type | Permissions |
|--|--|
-| actions | <ul><li>Microsoft.Compute/virtualMachines/start/action</li><li>Microsoft.Compute/virtualMachines/read</li><li>Microsoft.Compute/virtualMachines/instanceView/read</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/\*</li><li>Microsoft.Resources/deployments/\*</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li></ul> |
+| actions | <ul><li>Microsoft.Compute/virtualMachines/start/action</li><li>Microsoft.Compute/virtualMachines/read</li><li>Microsoft.Compute/virtualMachines/instanceView/read</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/\*</li><li>Microsoft.Resources/deployments/\*</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.AzureStackHCI/virtualMachineInstances/read</li><li>Microsoft.AzureStackHCI/virtualMachineInstances/start/action</li><li>Microsoft.AzureStackHCI/virtualMachineInstances/stop/action</li><li>Microsoft.AzureStackHCI/virtualMachineInstances/restart/action</li><li>Microsoft.HybridCompute/machines/read</li><li>Microsoft.HybridCompute/operations/read</li><li>Microsoft.HybridCompute/locations/operationresults/read</li><li>Microsoft.HybridCompute/locations/operationstatus/read</li></ul> |
| notActions | None |
| dataActions | None |
| notDataActions | None |
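The permissions only take effect once the role is assigned at a suitable scope. As an illustration only (the display name lookup and subscription-level scope are assumptions to adapt to your environment), here's a sketch of assigning the role with Azure PowerShell:

```powershell
# Look up the principal to grant the role to, then assign it at subscription scope.
# Replace the display name and scope with values that match your environment.
$principal = Get-AzADServicePrincipal -DisplayName "Azure Virtual Desktop"

New-AzRoleAssignment -ObjectId $principal.Id `
    -RoleDefinitionName "Desktop Virtualization Power On Contributor" `
    -Scope "/subscriptions/<subscription-id>"
```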
The Desktop Virtualization Power On Off Contributor role is used to allow the Az
| Action type | Permissions |
|--|--|
-| actions | <ul><li>Microsoft.Compute/virtualMachines/start/action</li><li>Microsoft.Compute/virtualMachines/read</li><li>Microsoft.Compute/virtualMachines/instanceView/read</li><li>Microsoft.Compute/virtualMachines/deallocate/action</li><li>Microsoft.Compute/virtualMachines/restart/action</li><li>Microsoft.Compute/virtualMachines/powerOff/action</li><li>Microsoft.Insights/eventtypes/values/read</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/\*</li><li>Microsoft.Resources/deployments/\*</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.DesktopVirtualization/hostpools/read</li><li>Microsoft.DesktopVirtualization/hostpools/write</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/read</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/write</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/delete</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/read</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/sendMessage/action</li></ul> |
+| actions | <ul><li>Microsoft.Compute/virtualMachines/start/action</li><li>Microsoft.Compute/virtualMachines/read</li><li>Microsoft.Compute/virtualMachines/instanceView/read</li><li>Microsoft.Compute/virtualMachines/deallocate/action</li><li>Microsoft.Compute/virtualMachines/restart/action</li><li>Microsoft.Compute/virtualMachines/powerOff/action</li><li>Microsoft.Insights/eventtypes/values/read</li><li>Microsoft.Authorization/\*/read</li><li>Microsoft.Insights/alertRules/\*</li><li>Microsoft.Resources/deployments/\*</li><li>Microsoft.Resources/subscriptions/resourceGroups/read</li><li>Microsoft.DesktopVirtualization/hostpools/read</li><li>Microsoft.DesktopVirtualization/hostpools/write</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/read</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/write</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/delete</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/read</li><li>Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/sendMessage/action</li><li>Microsoft.AzureStackHCI/virtualMachineInstances/read</li><li>Microsoft.AzureStackHCI/virtualMachineInstances/start/action</li><li>Microsoft.AzureStackHCI/virtualMachineInstances/stop/action</li><li>Microsoft.AzureStackHCI/virtualMachineInstances/restart/action</li><li>Microsoft.HybridCompute/machines/read</li><li>Microsoft.HybridCompute/operations/read</li><li>Microsoft.HybridCompute/locations/operationresults/read</li><li>Microsoft.HybridCompute/locations/operationstatus/read</li></ul> |
| notActions | None |
| dataActions | None |
| notDataActions | None |
virtual-desktop Troubleshoot Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-teams.md
Using Teams in a virtualized environment is different from using Teams in a non-
- If you've opened a window overlapping the window you're currently sharing during a meeting, the contents of the shared window that are covered by the overlapping window won't update for meeting users.
- If you're sharing admin windows for programs like Windows Task Manager, meeting participants may see a black area where the presenter toolbar or call monitor is located.
- Switching tenants can result in call-related issues such as screen sharing not rendering correctly. You can mitigate these issues by restarting your Teams client.
+- Teams doesn't support being on a native Teams call and a Teams call in the Azure Virtual Desktop session at the same time while connected to a HID device.
For Teams known issues that aren't related to virtualized environments, see [Support Teams in your organization](/microsoftteams/known-issues).
virtual-machines Basv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/basv2.md
Basv2-series virtual machines offer a balance of compute, memory, and network re
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br> <br>
-| Size | vCPU | RAM | Base CPU Performance of VM (%) | Initial Credits (#) | Credits banked/hour | Max Banked Credits (#) | Max uncached disk throughput: IOPS/M8ps | Max burst uncached disk throughput: IOPS/MBps | Max Data Disks | Max Network Bandwidth (Gbps) | Max NICs |
+| Size | vCPU | RAM | Base CPU Performance of VM (%) | Initial Credits (#) | Credits banked/hour | Max Banked Credits (#) | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max Data Disks | Max Network Bandwidth (Gbps) | Max NICs |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Standard_B2ats_v2 | 2 | 1 | 20% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 |
| Standard_B2als_v2 | 2 | 4 | 30% | 60 | 36 | 864 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 |
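As a rough check on how these columns relate, the credits banked per hour appear to equal vCPU count × base CPU performance × 60 minutes: for Standard_B2ats_v2 that's 2 × 20% × 60 = 24 credits per hour, and the maximum bank of 576 corresponds to about 24 hours of accrual at that rate.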
virtual-machines Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/custom-data.md
# Custom data and cloud-init on Azure Virtual Machines
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets

You might need to inject a script or other metadata into a Microsoft Azure virtual machine (VM) at provisioning time. In other clouds, this concept is often called *user data*. Microsoft Azure has a similar feature called *custom data*.
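As a quick illustration of the idea (a sketch rather than this article's procedure), custom data can be attached while building a VM configuration with Azure PowerShell. The file path, VM name, and size below are placeholders, and how the module handles Base64 encoding of the value is worth confirming for your Az.Compute version:

```powershell
# Build a minimal Linux VM configuration and attach a cloud-init file as custom data.
$cred      = Get-Credential
$cloudInit = Get-Content -Path "./cloud-init.yaml" -Raw    # placeholder path

$vmConfig = New-AzVMConfig -VMName "myVM" -VMSize "Standard_D2s_v5"
$vmConfig = Set-AzVMOperatingSystem -VM $vmConfig -Linux -ComputerName "myVM" `
    -Credential $cred -CustomData $cloudInit
```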
virtual-machines Enable Nvme Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/enable-nvme-interface.md
Last updated 10/30/2023

# Enabling NVMe and SCSI Interface on Virtual Machine
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
NVMe stands for nonvolatile memory express, which is a communication protocol that facilitates faster and more efficient data transfer between servers and storage systems. With NVMe, data can be transferred at the highest throughput and with the fastest response time. Azure now supports the NVMe interface on the Ebsv5 and Ebdsv5 family, offering the highest IOPS and throughput performance for remote disk storage among all the GP v5 VM series.
SCSI (Small Computer System Interface) is a legacy standard for physically conne
## Prerequisites
-A new feature has been added to the VM configuration, called DiskControllerType, which allows customers to select their preferred controller type as NVMe or SCSI. If the customer doesn't specify a DiskControllerType value then the platform will automatically choose the default controller based on the VM size configuration. If the VM size is configured for SCSI as the default and supports NVMe, SCSI will be used unless updated to the NVMe DiskControllerType.
+A new feature has been added to the VM configuration, called DiskControllerType, which allows customers to select their preferred controller type as NVMe or SCSI. If the customer doesn't specify a DiskControllerType value then the platform will automatically choose the default controller based on the VM size configuration. If the VM size is configured for SCSI as the default and supports NVMe, SCSI will be used unless updated to the NVMe DiskControllerType.
To enable the NVMe interface, the following prerequisites must be met:
By meeting the above five conditions, you'll be able to enable NVMe on the suppo
### Windows

- [Azure portal - Plan ID: 2019-datacenter-core-smalldisk](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCore)-
+- [Azure portal - Plan ID: 2019-datacenter-core-smalldisk-g2](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCore2019-datacenter-core-smalldisk-g2)
- [Azure portal - Plan ID: 2019 datacenter-core](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterServerCore)
- [Azure portal - Plan ID: 2019-datacenter-core-g2](https://portal.azure.com/#create/Microsoft.WindowsServer2019DatacenterServerCore2019-datacenter-core-g2)
- [Azure portal - Plan ID: 2019-datacenter-core-with-containers-smalldisk](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCorewithContainers)
By meeting the above five conditions, you'll be able to enable NVMe on the suppo
## Launching a VM with NVMe interface
-NVMe can be enabled during VM creation using various methods such as: Azure portal, CLI, PowerShell, and ARM templates. To create an NVMe VM, you must first enable the NVMe option on a VM and select the NVMe controller disk type for the VM. Note that the NVMe diskcontrollertype can be enabled during creation or updated to NVMe when the VM is stopped and deallocated, provided that the VM size supports NVMe.
+NVMe can be enabled during VM creation using various methods such as: Azure portal, CLI, PowerShell, and ARM templates. To create an NVMe VM, you must first enable the NVMe option on a VM and select the NVMe controller disk type for the VM. Note that the NVMe diskcontrollertype can be enabled during creation or updated to NVMe when the VM is stopped and deallocated, provided that the VM size supports NVMe.
### Azure portal View
NVMe can be enabled during VM creation using various methods such as: Azure port
```json
-{
-    "apiVersion": "2022-08-01",
-    "type": "Microsoft.Compute/virtualMachines",
-    "name": "[variables('vmName')]",
-    "location": "[parameters('location')]",
-    "identity": {
-        "type": "userAssigned",
-        "userAssignedIdentities": {
-            "/subscriptions/ <EnterSubscriptionIdHere> /resourcegroups/ManagedIdentities/providers/Microsoft.ManagedIdentity/userAssignedIdentities/KeyVaultReader": {}
-        }
-    },
-    "dependsOn": [
-        "[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
-    ],
-    "properties": {
-        "hardwareProfile": {
-            "vmSize": "[parameters('vmSize')]"
-        },
-        "osProfile": "[variables('vOsProfile')]",
-        "storageProfile": {
-            "imageReference": "[parameters('osDiskImageReference')]",
-            "osDisk": {
-                "name": "[variables('diskName')]",
-                "caching": "ReadWrite",
-                "createOption": "FromImage"
-            },
-            "copy": [
-                {
-                    "name": "dataDisks",
-                    "count": "[parameters('numDataDisks')]",
-                    "input": {
-                        "caching": "[parameters('dataDiskCachePolicy')]",
-                        "writeAcceleratorEnabled": "[parameters('writeAcceleratorEnabled')]",
-                        "diskSizeGB": "[parameters('dataDiskSize')]",
-                        "lun": "[add(copyIndex('dataDisks'), parameters('lunStartsAt'))]",
-                        "name": "[concat(variables('vmName'), '-datadisk-', copyIndex('dataDisks'))]",
-                        "createOption": "Attach",
-                        "managedDisk": {
-                            "storageAccountType": "[parameters('storageType')]",
-                            "id": "[resourceId('Microsoft.Compute/disks/', concat(variables('vmName'), '-datadisk-', copyIndex('dataDisks')))]"
-                        }
-                    }
-                }
-            ],
-            "diskControllerTypes": "NVME"
-        },
-        "securityProfile": {
-            "encryptionAtHost": "[parameters('encryptionAtHost')]"
-        },
-                          
-        "networkProfile": {
-            "networkInterfaces": [
-                {
-                    "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
-                }
-            ]
-        },
-        "availabilitySet": {
-            "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
-        }
-    },
-    "tags": {
-        "vmName": "[variables('vmName')]",
-
-      "location": "[parameters('location')]",
-
-                "dataDiskSize": "[parameters('dataDiskSize')]",
-
-                "numDataDisks": "[parameters('numDataDisks')]",
-
-                "dataDiskCachePolicy": "[parameters('dataDiskCachePolicy')]",
-
-                "availabilitySetName": "[parameters('availabilitySetName')]",
-
-                "customScriptURL": "[parameters('customScriptURL')]",
-
-                "SkipLinuxAzSecPack": "True",
-
-                "SkipASMAzSecPack": "True",
-
-                "EnableCrashConsistentRestorePoint": "[parameters('enableCrashConsistentRestorePoint')]"
-
-            }
+{
+    "apiVersion": "2022-08-01",
+    "type": "Microsoft.Compute/virtualMachines",
+    "name": "[variables('vmName')]",
+    "location": "[parameters('location')]",
+    "identity": {
+        "type": "userAssigned",
+        "userAssignedIdentities": {
+            "/subscriptions/ <EnterSubscriptionIdHere> /resourcegroups/ManagedIdentities/providers/Microsoft.ManagedIdentity/userAssignedIdentities/KeyVaultReader": {}
+        }
+    },
+    "dependsOn": [
+        "[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
+    ],
+    "properties": {
+        "hardwareProfile": {
+            "vmSize": "[parameters('vmSize')]"
+        },
+        "osProfile": "[variables('vOsProfile')]",
+        "storageProfile": {
+            "imageReference": "[parameters('osDiskImageReference')]",
+            "osDisk": {
+                "name": "[variables('diskName')]",
+                "caching": "ReadWrite",
+                "createOption": "FromImage"
+            },
+            "copy": [
+                {
+                    "name": "dataDisks",
+                    "count": "[parameters('numDataDisks')]",
+                    "input": {
+                        "caching": "[parameters('dataDiskCachePolicy')]",
+                        "writeAcceleratorEnabled": "[parameters('writeAcceleratorEnabled')]",
+                        "diskSizeGB": "[parameters('dataDiskSize')]",
+                        "lun": "[add(copyIndex('dataDisks'), parameters('lunStartsAt'))]",
+                        "name": "[concat(variables('vmName'), '-datadisk-', copyIndex('dataDisks'))]",
+                        "createOption": "Attach",
+                        "managedDisk": {
+                            "storageAccountType": "[parameters('storageType')]",
+                            "id": "[resourceId('Microsoft.Compute/disks/', concat(variables('vmName'), '-datadisk-', copyIndex('dataDisks')))]"
+                        }
+                    }
+                }
+            ],
+            "diskControllerTypes": "NVME"
+        },
+        "securityProfile": {
+            "encryptionAtHost": "[parameters('encryptionAtHost')]"
+        },
+                          
+        "networkProfile": {
+            "networkInterfaces": [
+                {
+                    "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
+                }
+            ]
+        },
+        "availabilitySet": {
+            "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
+        }
+    },
+    "tags": {
+        "vmName": "[variables('vmName')]",
+
+      "location": "[parameters('location')]",
+
+                "dataDiskSize": "[parameters('dataDiskSize')]",
+
+                "numDataDisks": "[parameters('numDataDisks')]",
+
+                "dataDiskCachePolicy": "[parameters('dataDiskCachePolicy')]",
+
+                "availabilitySetName": "[parameters('availabilitySetName')]",
+
+                "customScriptURL": "[parameters('customScriptURL')]",
+
+                "SkipLinuxAzSecPack": "True",
+
+                "SkipASMAzSecPack": "True",
+
+                "EnableCrashConsistentRestorePoint": "[parameters('enableCrashConsistentRestorePoint')]"
+
+            }
        } ```
-
->[!TIP]
+
+>[!TIP]
> Use the same parameter **DiskControllerType** if you are using the PowerShell or CLI tools to launch the NVMe supported VM.

## Next steps
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
+ Title: Azure Linux VM Agent overview
+description: Learn how to install and configure the Azure Linux VM Agent (waagent) to manage your virtual machine's interaction with the Azure fabric controller.
+ Last updated : 03/28/2023
+# Azure Linux VM Agent overview
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
+The Microsoft Azure Linux VM Agent (waagent) manages Linux and FreeBSD provisioning, along with virtual machine (VM) interaction with the Azure fabric controller. In addition to the Linux agent providing provisioning functionality, Azure provides the option of using cloud-init for some Linux operating systems.
+
+The Linux agent provides the following functionality for Linux and FreeBSD Azure Virtual Machines deployments. For more information, see the [Azure Linux VM Agent readme on GitHub](https://github.com/Azure/WALinuxAgent/blob/master/README.md).
+
+### Image provisioning
+
+- Creates a user account
+- Configures SSH authentication types
+- Deploys SSH public keys and key pairs
+- Sets the host name
+- Publishes the host name to the platform DNS
+- Reports the SSH host key fingerprint to the platform
+- Manages the resource disk
+- Formats and mounts the resource disk
+- Configures swap space
+
+### Networking
+
+- Manages routes to improve compatibility with platform DHCP servers
+- Ensures the stability of the network interface name
+
+### Kernel
+
+- Configures virtual NUMA (disabled for kernel 2.6.37)
+- Consumes Hyper-V entropy for */dev/random*
+- Configures SCSI timeouts for the root device, which can be remote
+
+### Diagnostics
+
+- Provides console redirection to the serial port
+
+### System Center Virtual Machine Manager deployments
+
+- Detects and bootstraps the Virtual Machine Manager agent for Linux when it's running in a System Center Virtual Machine Manager 2012 R2 environment
+
+### VM Extension
+
+- Injects components authored by Microsoft and partners into Linux VMs to enable software and configuration automation
+
+You can find a VM Extension reference implementation on [GitHub](https://github.com/Azure/azure-linux-extensions).
+
+## Communication
+
+Information flow from the platform to the agent occurs through two channels:
+
+- A boot-time attached DVD for VM deployments. This DVD includes an Open Virtualization Format (OVF)-compliant configuration file that contains all provisioning information other than the SSH key pairs.
+- A TCP endpoint that exposes a REST API that's used to get deployment and topology configuration.
+
+## Requirements
+
+Testing has confirmed that the following systems work with the Azure Linux VM Agent.
+
+> [!NOTE]
+> This list might differ from the [endorsed Linux distributions on Azure](../linux/endorsed-distros.md).
+
+| Distribution | x64 | ARM64 |
+|:--|:--:|:--:|
+| Alma Linux | 9.x+ | 9.x+ |
+| CentOS | 7.x+, 8.x+ | 7.x+ |
+| Debian | 10+ | 11.x+ |
+| Flatcar Linux | 3374.2.x+ | 3374.2.x+ |
+| Azure Linux | 2.x | 2.x |
+| openSUSE | 12.3+ | *Not supported* |
+| Oracle Linux | 6.4+, 7.x+, 8.x+ | *Not supported* |
+| Red Hat Enterprise Linux | 6.7+, 7.x+, 8.x+, 9.x+ | 8.6+, 9.0+ |
+| Rocky Linux | 9.x+ | 9.x+ |
+| SLES | 12.x+, 15.x+ | 15.x SP4+ |
+| Ubuntu | 18.04+, 20.04+, 22.04+ | 20.04+, 22.04+ |
+
+> [!IMPORTANT]
+> RHEL/Oracle Linux 6.10 is the only RHEL/OL 6 version with Extended Lifecycle Support available. [The extended maintenance ends on June 30, 2024](https://access.redhat.com/support/policy/updates/errata).
+
+Other supported systems:
+
+- The Agent works on more systems than those listed in the documentation. However, we do not test or provide support for distros that are not on the endorsed list. In particular, FreeBSD is not endorsed. The customer can try FreeBSD 8 and if they run into problems they can open an issue in our [GitHub repository](https://github.com/Azure/WALinuxAgent) and we may be able to help.
+
+The Linux agent depends on these system packages to function properly:
+
+- Python 2.6+
+- OpenSSL 1.0+
+- OpenSSH 5.3+
+- File system utilities: sfdisk, fdisk, mkfs, parted
+- Password tools: chpasswd, sudo
+- Text processing tools: sed, grep
+- Network tools: ip-route
+- Kernel support for mounting UDF file systems
+
+Ensure that your VM has access to IP address 168.63.129.16. For more information, see [What is IP address 168.63.129.16?](../../virtual-network/what-is-ip-address-168-63-129-16.md).
+
+## Installation
+
+The supported method of installing and upgrading the Azure Linux VM Agent uses an RPM or a DEB package from your distribution's package repository. All the [endorsed distribution providers](../linux/endorsed-distros.md) integrate the Azure Linux VM Agent package into their images and repositories.
+Some Linux distributions might disable the Azure Linux VM Agent **Auto Update** feature, and some repositories might contain older versions that can have issues with modern extensions, so we recommend installing the latest stable version.
+To make sure the Azure Linux VM Agent updates properly, set `AutoUpdate.Enabled=Y` in the `/etc/waagent.conf` file, or comment out the option to use its default. Setting `AutoUpdate.Enabled=N` prevents the Azure Linux VM Agent from updating properly.
+
+For advanced installation options, such as installing from a source or to custom locations or prefixes, see [Microsoft Azure Linux VM Agent](https://github.com/Azure/WALinuxAgent). Other than these scenarios, we do not support or recommend upgrading or reinstalling the Azure Linux VM Agent from source.
+
+## Command-line options
+
+### Flags
+
+- `verbose`: Increases verbosity of the specified command.
+- `force`: Skips interactive confirmation for some commands.
+
+### Commands
+
+- `help`: Lists the supported commands and flags.
+- `deprovision`: Attempts to clean the system and make it suitable for reprovisioning. The operation deletes:
+ - All SSH host keys, if `Provisioning.RegenerateSshHostKeyPair` is `y` in the configuration file.
+ - `Nameserver` configuration in */etc/resolv.conf*.
+ - The root password from */etc/shadow*, if `Provisioning.DeleteRootPassword` is `y` in the configuration file.
+ - Cached DHCP client leases.
+
+ The client resets the host name to `localhost.localdomain`.
+
+ > [!WARNING]
+ > Deprovisioning doesn't guarantee that the image is cleared of all sensitive information and suitable for redistribution.
+
+- `deprovision+user`: Performs everything in `deprovision` and deletes the last provisioned user account (obtained from */var/lib/waagent*) and associated data. Use this parameter when you deprovision an image that was previously provisioned on Azure so that it can be captured and reused.
+- `version`: Displays the version of waagent.
+- `serialconsole`: Configures GRUB to mark ttyS0, the first serial port, as the boot console. This option ensures that kernel boot logs are sent to the serial port and made available for debugging.
+- `daemon`: Runs waagent as a daemon to manage interaction with the platform. This argument is specified to waagent in the waagent *init* script.
+- `start`: Runs waagent as a background process.
+
+## Configuration
+
+The */etc/waagent.conf* configuration file controls the actions of waagent. Here's an example of a configuration file:
+
+```config
+Provisioning.Enabled=y
+Provisioning.DeleteRootPassword=n
+Provisioning.RegenerateSshHostKeyPair=y
+Provisioning.SshHostKeyPairType=rsa
+Provisioning.MonitorHostName=y
+Provisioning.DecodeCustomData=n
+Provisioning.ExecuteCustomData=n
+Provisioning.AllowResetSysUser=n
+Provisioning.PasswordCryptId=6
+Provisioning.PasswordCryptSaltLength=10
+ResourceDisk.Format=y
+ResourceDisk.Filesystem=ext4
+ResourceDisk.MountPoint=/mnt/resource
+ResourceDisk.MountOptions=None
+ResourceDisk.EnableSwap=n
+ResourceDisk.SwapSizeMB=0
+LBProbeResponder=y
+Logs.Verbose=n
+OS.RootDeviceScsiTimeout=300
+OS.OpensslPath=None
+HttpProxy.Host=None
+HttpProxy.Port=None
+AutoUpdate.Enabled=y
+```
+
+Configuration options are of three types: `Boolean`, `String`, or `Integer`. You can specify the `Boolean` configuration options as `y` or `n`. The special keyword `None` might be used for some string type configuration entries.
+
+### Provisioning.Enabled
+
+```txt
+Type: Boolean
+Default: y
+```
+
+This option allows the user to enable or disable the provisioning functionality in the agent. Valid values are `y` and `n`. If provisioning is disabled, SSH host and user keys in the image are preserved and configuration in the Azure provisioning API is ignored.
+
+> [!NOTE]
+> The `Provisioning.Enabled` parameter defaults to `n` on Ubuntu Cloud Images that use cloud-init for provisioning.
+
+### Provisioning.DeleteRootPassword
+
+```txt
+Type: Boolean
+Default: n
+```
+
+If the value is `y`, the agent erases the root password in the */etc/shadow* file during the provisioning process.
+
+### Provisioning.RegenerateSshHostKeyPair
+
+```txt
+Type: Boolean
+Default: y
+```
+
+If the value is `y`, the agent deletes all SSH host key pairs from */etc/ssh/* during the provisioning process, including ECDSA, DSA, and RSA. The agent generates a single fresh key pair.
+
+Configure the encryption type for the fresh key pair by using the `Provisioning.SshHostKeyPairType` entry. Some distributions re-create SSH key pairs for any missing encryption types when the SSH daemon is restarted--for example, after a reboot.
+
+### Provisioning.SshHostKeyPairType
+
+```txt
+Type: String
+Default: rsa
+```
+
+You can set this option to an encryption algorithm type that the SSH daemon supports on the VM. The typically supported values are `rsa`, `dsa`, and `ecdsa`. The *putty.exe* file on Windows doesn't support `ecdsa`. If you intend to use *putty.exe* on Windows to connect to a Linux deployment, use `rsa` or `dsa`.
+
+### Provisioning.MonitorHostName
+
+```txt
+Type: Boolean
+Default: y
+```
+
+If the value is `y`, waagent monitors the Linux VM for a host name change, as returned by the `hostname` command. Waagent then automatically updates the networking configuration in the image to reflect the change. To push the name change to the DNS servers, networking restarts on the VM. This restart results in brief loss of internet connectivity.
+
+### Provisioning.DecodeCustomData
+
+```txt
+Type: Boolean
+Default: n
+```
+
+If the value is `y`, waagent decodes `CustomData` from Base64.
+
+### Provisioning.ExecuteCustomData
+
+```txt
+Type: Boolean
+Default: n
+```
+
+If the value is `y`, waagent runs `CustomData` after provisioning.
+
+### Provisioning.AllowResetSysUser
+
+```txt
+Type: Boolean
+Default: n
+```
+
+This option allows the password for the system user to be reset. It's disabled by default.
+
+### Provisioning.PasswordCryptId
+
+```txt
+Type: String
+Default: 6
+```
+
+This option specifies the algorithm that `crypt` uses when it's generating a password hash. Valid values are:
+
+- `1`: MD5
+- `2a`: Blowfish
+- `5`: SHA-256
+- `6`: SHA-512
+
+### Provisioning.PasswordCryptSaltLength
+
+```txt
+Type: String
+Default: 10
+```
+
+This option specifies the length of random salt used in generating a password hash.
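+
+As a quick check, the chosen algorithm shows up as the prefix of the stored hash; the user name `azureuser` is just an example:
+
+```bash
+# With Provisioning.PasswordCryptId=6, the hash in /etc/shadow starts with $6$ (SHA-512)
+sudo grep '^azureuser:' /etc/shadow | cut -d: -f2 | cut -c1-3
+```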
+
+### ResourceDisk.Format
+
+```txt
+Type: Boolean
+Default: y
+```
+
+If the value is `y`, waagent formats and mounts the resource disk that the platform provides, unless the file system type that the user requested in `ResourceDisk.Filesystem` is `ntfs`. The agent makes a single Linux partition (ID 83) available on the disk. This partition isn't formatted if it can be successfully mounted.
+
+### ResourceDisk.Filesystem
+
+```txt
+Type: String
+Default: ext4
+```
+
+This option specifies the file system type for the resource disk. Supported values vary by Linux distribution. If the string is `X`, then `mkfs.X` should be present on the Linux image.
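+
+For example, if you set `ResourceDisk.Filesystem=ext4`, you can confirm that the matching `mkfs` tool is present in the image:
+
+```bash
+command -v mkfs.ext4
+```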
+
+### ResourceDisk.MountPoint
+
+```txt
+Type: String
+Default: /mnt/resource
+```
+
+This option specifies the path at which the resource disk is mounted. The resource disk is a *temporary* disk and might be emptied when the VM is deprovisioned.
+
+### ResourceDisk.MountOptions
+
+```txt
+Type: String
+Default: None
+```
+
+This option specifies disk mount options to be passed to the `mount -o` command. The value is a comma-separated list of values, for example, `nodev,nosuid`. For more information, see the `mount(8)` manual page.
+
+### ResourceDisk.EnableSwap
+
+```txt
+Type: Boolean
+Default: n
+```
+
+If you set this option, the agent creates a swap file (*/swapfile*) on the resource disk and adds it to the system swap space.
+
+### ResourceDisk.SwapSizeMB
+
+```txt
+Type: Integer
+Default: 0
+```
+
+This option specifies the size of the swap file in megabytes.
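+
+For example, after setting `ResourceDisk.EnableSwap=y` and a nonzero `ResourceDisk.SwapSizeMB`, you might verify the result like this (paths assume the default `ResourceDisk.MountPoint`):
+
+```bash
+df -h /mnt/resource    # the formatted and mounted resource disk
+swapon --show          # the agent-created swap file should be listed
+ls -lh /mnt/resource/swapfile
+```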
+
+### Logs.Verbose
+
+```txt
+Type: Boolean
+Default: n
+```
+
+If you set this option, log verbosity is boosted. Waagent logs to */var/log/waagent.log* and uses the system `logrotate` functionality to rotate logs.
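+
+Whatever the verbosity, you can inspect the agent log directly, for example:
+
+```bash
+sudo tail -n 50 /var/log/waagent.log
+```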
+
+### OS.EnableRDMA
+
+```txt
+Type: Boolean
+Default: n
+```
+
+If you set this option, the agent attempts to install and then load an RDMA kernel driver that matches the version of the firmware on the underlying hardware.
+
+### OS.RootDeviceScsiTimeout
+
+```txt
+Type: Integer
+Default: 300
+```
+
+This option configures the SCSI timeout in seconds on the OS disk and data drives. If it's not set, the system defaults are used.
+
+### OS.OpensslPath
+
+```txt
+Type: String
+Default: None
+```
+
+You can use this option to specify an alternate path for the *openssl* binary to use for cryptographic operations.
+
+### HttpProxy.Host, HttpProxy.Port
+
+```txt
+Type: String
+Default: None
+```
+
+If you set this option, the agent uses this proxy server to access the internet.
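+
+For example, a sketch of pointing the agent at a proxy; the host and port are hypothetical, and the agent service name varies by distribution:
+
+```bash
+sudo sed -i -e 's/^HttpProxy.Host=.*/HttpProxy.Host=proxy.contoso.local/' \
+            -e 's/^HttpProxy.Port=.*/HttpProxy.Port=3128/' /etc/waagent.conf
+sudo systemctl restart walinuxagent.service   # or waagent.service
+```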
+
+### AutoUpdate.Enabled
+
+```txt
+Type: Boolean
+Default: y
+```
+
+Enable or disable autoupdate for goal state processing. The default value is `y`.
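+
+One way to see which agent version is actually processing goal states is the agent's own version report; the exact output format varies by release:
+
+```bash
+waagent --version
+```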
+
+## Automatic log collection in the Azure Linux Guest Agent
+
+As of version 2.7, the Azure Linux Guest Agent can automatically collect some logs and upload them. This feature currently requires `systemd`. It uses a new `systemd` slice called `azure-walinuxagent-logcollector.slice` to manage resources while it performs the collection.
+
+The purpose is to facilitate offline analysis. The agent produces a *.zip* file of some diagnostic logs before uploading them to the VM's host. Engineering teams and support professionals can retrieve the file to investigate issues for the VM owner. For technical information on the files that the Azure Linux Guest Agent collects, see the *azurelinuxagent/common/logcollector_manifests.py* file in the [agent's GitHub repository](https://github.com/Azure/WALinuxAgent).
+
+You can disable this option by editing */etc/waagent.conf*. Update `Logs.Collect` to `n`.
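+
+A sketch of that edit, which appends the setting if it isn't already present:
+
+```bash
+grep -q '^Logs.Collect=' /etc/waagent.conf \
+  && sudo sed -i 's/^Logs.Collect=.*/Logs.Collect=n/' /etc/waagent.conf \
+  || echo 'Logs.Collect=n' | sudo tee -a /etc/waagent.conf
+```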
+
+## Ubuntu Cloud Images
+
+Ubuntu Cloud Images use [cloud-init](https://launchpad.net/ubuntu/+source/cloud-init) to do many configuration tasks that the Azure Linux VM Agent would otherwise manage. The following differences apply:
+
+- `Provisioning.Enabled` defaults to `n` on Ubuntu Cloud Images that use cloud-init to perform provisioning tasks.
+- The following configuration parameters have no effect on Ubuntu Cloud Images that use cloud-init to manage the resource disk and swap space:
+
+ - `ResourceDisk.Format`
+ - `ResourceDisk.Filesystem`
+ - `ResourceDisk.MountPoint`
+ - `ResourceDisk.EnableSwap`
+ - `ResourceDisk.SwapSizeMB`
+
+To configure the resource disk mount point and swap space on Ubuntu Cloud Images during provisioning, see the following resources:
+
+- [Ubuntu wiki: AzureSwapPartitions](https://go.microsoft.com/fwlink/?LinkID=532955&clcid=0x409)
+- [Deploy applications to a Windows virtual machine in Azure with the Custom Script Extension](../windows/tutorial-automate-vm-deployment.md)
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
Last updated 03/31/2023
# Use the Azure Custom Script Extension Version 2 with Linux virtual machines
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ The Custom Script Extension Version 2 downloads and runs scripts on Azure virtual machines (VMs). Use this extension for post-deployment configuration, software installation, or any other configuration or management task. You can download scripts from Azure Storage or another accessible internet location, or you can provide them to the extension runtime. The Custom Script Extension integrates with Azure Resource Manager templates. You can also run it by using the Azure CLI, Azure PowerShell, or the Azure Virtual Machines REST API.
virtual-machines Diagnostics Linux V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux-v3.md
ms.devlang: azurecli
# Use Linux diagnostic extension 3.0 to monitor metrics and logs
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This document describes version 3.0 and newer of the Linux diagnostic extension (LAD). > [!IMPORTANT]
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
ms.devlang: azurecli
# Use the Linux diagnostic extension 4.0 to monitor metrics and logs
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article describes the latest versions of the Linux diagnostic extension (LAD). > [!IMPORTANT]
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-overview.md
Title: Desired State Configuration for Azure overview
description: Learn how to use the Microsoft Azure extension handler for PowerShell Desired State Configuration (DSC), including prerequisites, architecture, and cmdlets.
-tags: azure-resource-manager
keywords: 'dsc' ms.assetid: bbacbc93-1e7b-4611-a3ec-e3320641f9ba
virtual-machines Dsc Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-template.md
Title: Desired State Configuration extension with Azure Resource Manager templat
description: Learn about the Resource Manager template definition for the Desired State Configuration (DSC) extension in Azure.
-tags: azure-resource-manager
keywords: 'dsc' ms.assetid: b5402e5a-1768-4075-8c19-b7f7402687af
virtual-machines Enable Infiniband https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/enable-infiniband.md
# Enable InfiniBand
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets [RDMA capable](../sizes-hpc.md#rdma-capable-instances) [HB-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs communicate over the low latency and high bandwidth InfiniBand network. The RDMA capability over such an interconnect is critical to boost the scalability and performance of distributed-node HPC and AI workloads. The InfiniBand enabled HB-series and N-series VMs are connected in a non-blocking fat tree with a low-diameter design for optimized and consistent RDMA performance.
virtual-machines Hpc Compute Infiniband Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpc-compute-infiniband-linux.md
# InfiniBand Driver Extension for Linux
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This extension installs InfiniBand OFED drivers on InfiniBand and SR-IOV-enabled ('r' sizes) [HB-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs running Linux. Depending on the VM family, the extension installs the appropriate drivers for the Connect-X NIC. It does not install the InfiniBand ND drivers on the non-SR-IOV enabled [HB-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs. Instructions on manual installation of the OFED drivers are available in [Enable InfiniBand on HPC VMs](enable-infiniband.md#manual-installation).
virtual-machines Hpccompute Gpu Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-linux.md
# NVIDIA GPU Driver Extension for Linux
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This extension installs NVIDIA GPU drivers on Linux N-series virtual machines (VMs). Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers by using this extension, you're accepting and agreeing to the terms of the [NVIDIA End-User License Agreement](https://www.nvidia.com/en-us/data-center/products/nvidia-ai-enterprise/eula/). During the installation process, the VM might reboot to complete the driver setup. Instructions on manual installation of the drivers and the current supported versions are available. An extension is also available to install NVIDIA GPU drivers on [Windows N-series VMs](hpccompute-gpu-windows.md).
virtual-machines Key Vault Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-linux.md
Title: Azure Key Vault VM Extension for Linux
description: Deploy an agent performing automatic refresh of Key Vault certificates on virtual machines using a virtual machine extension.
-tags: keyvault
virtual-machines Key Vault Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-windows.md
Title: Azure Key Vault VM extension for Windows
description: Learn how to deploy an agent for automatic refresh of Azure Key Vault secrets on virtual machines with a virtual machine extension.
-tags: keyvault
The following JSON shows the schema for the Key Vault VM extension. Before you c
"autoUpgradeMinorVersion": true, "settings": { "secretsManagementSettings": {
- "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: 3600>,
+ "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: "3600">,
"linkOnRenewal": <Windows only. Ensures s-channel binding when the certificate renews without necessitating redeployment. Example: true>, "requireInitialSync": <Initial synchronization of certificates. Example: true>, "observedCertificates": <An array of KeyVault URIs that represent monitored certificates, including certificate store location and ACL permission to certificate private key. Example:
The JSON schema includes the following properties.
| `apiVersion` | 2022-08-01 | date | | `publisher` | Microsoft.Azure.KeyVault | string | | `type` | KeyVaultForWindows | string |
-| `typeHandlerVersion` | 3.0 | int |
-| `pollingIntervalInS` | 3600 | string |
+| `typeHandlerVersion` | "3.0" | string |
+| `pollingIntervalInS` | "3600" | string |
| `linkOnRenewal` (optional) | true | boolean | | `requireInitialSync` (optional) | false | boolean | | `observedCertificates` | [{...}, {...}] | string array |
The JSON schema includes the following properties.
| `apiVersion` | 2022-08-01 | date | | `publisher` | Microsoft.Azure.KeyVault | string | | `type` | KeyVaultForWindows | string |
-| `typeHandlerVersion` | 1.0 | int |
-| `pollingIntervalInS` | 3600 | string |
+| `typeHandlerVersion` | "1.0" | string |
+| `pollingIntervalInS` | "3600" | string |
| `certificateStoreName` | MY | string | | `linkOnRenewal` | true | boolean | | `certificateStoreLocation` | LocalMachine or CurrentUser (case sensitive) | string |
The following JSON snippets provide example settings for an ARM template deploym
"autoUpgradeMinorVersion": true, "settings": { "secretsManagementSettings": {
- "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: 3600>,
+ "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: "3600">,
"linkOnRenewal": <Windows only. Ensures s-channel binding when the certificate renews without necessitating redeployment. Example: true>, "observedCertificates": <An array of KeyVault URIs that represent monitored certificates, including certificate store location and ACL permission to certificate private key. Example: [
The following JSON snippets provide example settings for an ARM template deploym
"autoUpgradeMinorVersion": true, "settings": { "secretsManagementSettings": {
- "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: 3600>,
+ "pollingIntervalInS": <A string that specifies the polling interval in seconds. Example: "3600">,
"linkOnRenewal": <Windows only. Ensures s-channel binding when the certificate renews without necessitating redeployment. Example: true>, "certificateStoreName": <The certificate store name. Example: "MY">, "certificateStoreLocation": <The certificate store location, which currently works locally only. Example: "LocalMachine">,
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
# Network Watcher Agent virtual machine extension for Linux
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) is a network performance monitoring, diagnostic, and analytics service that allows monitoring for Azure networks. The Network Watcher Agent virtual machine extension is a requirement for some of the Network Watcher features on Azure virtual machines (VMs), such as capturing network traffic on demand, and other advanced functionality. This article details the supported platforms and deployment options for the Network Watcher Agent VM extension for Linux. Installation of the agent doesn't disrupt, or require a reboot of, the virtual machine. You can install the extension on virtual machines that you deploy. If the virtual machine is deployed by an Azure service, check the documentation for the service to determine whether or not it permits installing extensions in the virtual machine.
virtual-machines Stackify Retrace Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/stackify-retrace-linux.md
ms.devlang: azurecli
# Stackify Retrace Linux Agent Extension
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ ## Overview Stackify provides products that track details about your application to help find and fix problems quickly. For developer teams, Retrace is a fully integrated, multi-environment, app performance super-power. It combines several tools every development team needs.
virtual-machines Tenable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/tenable.md
Last updated 07/18/2023
# Tenable One-Click Nessus Agent
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Tenable now supports a One-Click deployment of Nessus Agents via Microsoft's Azure portal. This solution provides an easy way to install the latest version of Nessus Agent on Azure virtual machines (VM) (whether Linux or Windows) by either clicking on an icon within the Azure portal or by writing a few lines of PowerShell script. ## Prerequisites
virtual-machines Update Linux Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/update-linux-agent.md
Last updated 02/03/2023
# How to update the Azure Linux Agent on a VM
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ To update your [Azure Linux Agent](https://github.com/Azure/WALinuxAgent) on a Linux VM in Azure, you must already have: - A running Linux VM in Azure.
virtual-machines Vmaccess Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmaccess-linux.md
# VMAccess Extension for Linux
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ The VMAccess Extension is used to manage administrative users, configure SSH, and check or repair disks on Azure Linux virtual machines. The extension integrates with Azure Resource Manager templates. It can also be invoked using Azure CLI, Azure PowerShell, the Azure portal, and the Azure Virtual Machines REST API. This article describes how to run the VMAccess Extension from the Azure CLI and through an Azure Resource Manager template. This article also provides troubleshooting steps for Linux systems.
virtual-machines Fsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/fsv2-series.md
# Fsv2-series
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets The Fsv2-series run on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake), the Intel® Xeon® Platinum 8272CL (Cascade Lake) processors, or the Intel® Xeon® Platinum 8168 (Skylake) processors. It features a sustained all core Turbo clock speed of 3.4 GHz and a maximum single-core turbo frequency of 3.7 GHz. Intel® AVX-512 instructions are new on Intel Scalable Processors. These instructions provide up to a 2X performance boost to vector processing workloads on both single and double precision floating point operations. In other words, they're really fast for any computational workload.
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generalize.md
# Remove machine specific information by deprovisioning or generalizing a VM before creating an image
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Generalizing or deprovisioning a VM is not necessary for creating an image in an [Azure Compute Gallery](shared-image-galleries.md#generalized-and-specialized-images) unless you specifically want to create an image that has no machine specific information, like user accounts. Generalizing is still required when creating a managed image outside of a gallery. Generalizing removes machine specific information so the image can be used to create multiple VMs. Once the VM has been generalized or deprovisioned, you need to let the platform know so that the boot sequence can be set correctly.
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-hc-known-issues.md
# Known issues with HB-series and N-series VMs
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets This article attempts to list recent common issues and their solutions when using the [HB-series](sizes-hpc.md) and [N-series](sizes-gpu.md) HPC and GPU VMs.
virtual-machines Hb Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-series-overview.md
# HB-series virtual machines overview
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets Maximizing high performance compute (HPC) application performance on AMD EPYC requires a thoughtful approach to memory locality and process placement. Below we outline the AMD EPYC architecture and our implementation of it on Azure for HPC applications. We will use the term "pNUMA" to refer to a physical NUMA domain, and "vNUMA" to refer to a virtualized NUMA domain.
virtual-machines Hbv2 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series-overview.md
Title: HBv2-series VM overview - Azure Virtual Machines | Microsoft Docs description: Learn about the HBv2-series VM size in Azure.
-tags: azure-resource-manager
# HBv2 series virtual machine overview
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets Maximizing high performance compute (HPC) application performance on AMD EPYC requires a thoughtful approach to memory locality and process placement. Below we outline the AMD EPYC architecture and our implementation of it on Azure for HPC applications. We use the term **pNUMA** to refer to a physical NUMA domain, and **vNUMA** to refer to a virtualized NUMA domain.
virtual-machines Hbv3 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-series-overview.md
Title: HBv3-series VM overview, architecture, topology - Azure Virtual Machines | Microsoft Docs description: Learn about the HBv3-series VM size in Azure.
-tags: azure-resource-manager
# HBv3-series virtual machine overview
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets An [HBv3-series](hbv3-series.md) server features 2 * 64-core EPYC 7V73X CPUs for a total of 128 physical "Zen3" cores with AMD 3D V-Cache. Simultaneous Multithreading (SMT) is disabled on HBv3. These 128 cores are divided into 16 sections (8 per socket), each section containing 8 processor cores with uniform access to a 96 MB L3 cache. Azure HBv3 servers also run the following AMD BIOS settings:
virtual-machines Hbv4 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv4-series-overview.md
Title: HBv4-series VM overview, architecture, topology - Azure Virtual Machines | Microsoft Docs description: Learn about the HBv4-series VM size in Azure.
-tags: azure-resource-manager
# HBv4-series virtual machine overview
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets An [HBv4-series](hbv4-series.md) server features 2 * 96-core EPYC 9V33X CPUs for a total of 192 physical "Zen4" cores with AMD 3D-V Cache. Simultaneous Multithreading (SMT) is disabled on HBv4. These 192 cores are divided into 24 sections (12 per socket), each section containing 8 processor cores with uniform access to a 96 MB L3 cache. Azure HBv4 servers also run the following AMD BIOS settings:
virtual-machines Hc Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hc-series-overview.md
# HC-series virtual machine overview
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets Maximizing HPC application performance on Intel Xeon Scalable Processors requires a thoughtful approach to process placement on this new architecture. Here, we outline our implementation of it on Azure HC-series VMs for HPC applications. We will use the term "pNUMA" to refer to a physical NUMA domain, and "vNUMA" to refer to a virtualized NUMA domain. Similarly, we will use the term "pCore" to refer to physical CPU cores, and "vCore" to refer to virtualized CPU cores.
virtual-machines Hx Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hx-series-overview.md
Title: HX-series VM overview, architecture, topology - Azure Virtual Machines | Microsoft Docs description: Learn about the HX-series VM size in Azure.
-tags: azure-resource-manager
# HX-series virtual machine overview
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets An [HX-series](hx-series.md) server features 2 * 96-core EPYC 9V33X CPUs for a total of 192 physical "Zen4" cores with AMD 3D-V Cache. Simultaneous Multithreading (SMT) is disabled on HX. These 192 cores are divided into 24 sections (12 per socket), each section containing 8 processor cores with uniform access to a 96 MB L3 cache. Azure HX servers also run the following AMD BIOS settings:
virtual-machines Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md
# Create an image definition and an image version
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ An [Azure Compute Gallery](shared-image-galleries.md) (formerly known as Shared Image Gallery) simplifies custom image sharing across your organization. Custom images are like marketplace images, but you create them yourself. Images can be created from a VM, VHD, snapshot, managed image, or another image version. The Azure Compute Gallery lets you share your custom VM images with others in your organization, within or across regions, within a Microsoft Entra tenant, or publicly using a [community gallery](azure-compute-gallery.md#community). Choose which images you want to share, which regions you want to make them available in, and who you want to share them with. You can create multiple galleries so that you can logically group images. Many new features like ARM64, Accelerated Networking and TrustedVM are only supported through Azure Compute Gallery and not available for managed images.
virtual-machines Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-dns.md
+
+ Title: DNS Name resolution options for Linux VMs
+description: Name Resolution scenarios for Linux virtual machines in Azure IaaS, including provided DNS services, hybrid external DNS and Bring Your Own DNS server.
+ Last updated : 04/11/2023
+# DNS Name Resolution options for Linux virtual machines in Azure
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+
+Azure provides DNS name resolution by default for all virtual machines that are in a single virtual network. You can implement your own DNS name resolution solution by configuring your own DNS services on your virtual machines that Azure hosts. The following scenarios should help you choose the one that works for your situation.
+
+* [Name resolution that Azure provides](#name-resolution-that-azure-provides)
+* [Name resolution using your own DNS server](#name-resolution-using-your-own-dns-server)
+
+The type of name resolution that you use depends on how your virtual machines and role instances need to communicate with each other.
+
+The following table illustrates scenarios and corresponding name resolution solutions:
+
+| **Scenario** | **Solution** | **Suffix** |
+| | | |
+| Name resolution between role instances or virtual machines in the same virtual network |Name resolution that Azure provides |hostname or fully-qualified domain name (FQDN) |
+| Name resolution between role instances or virtual machines in different virtual networks |Customer-managed DNS servers that forward queries between virtual networks for resolution by Azure (DNS proxy). See [Name resolution using your own DNS server](#name-resolution-using-your-own-dns-server). |FQDN only |
+| Resolution of on-premises computers and service names from role instances or virtual machines in Azure |Customer-managed DNS servers (for example, on-premises domain controller, local read-only domain controller, or a DNS secondary synced by using zone transfers). See [Name resolution using your own DNS server](#name-resolution-using-your-own-dns-server). |FQDN only |
+| Resolution of Azure hostnames from on-premises computers |Forward queries to a customer-managed DNS proxy server in the corresponding virtual network. The proxy server forwards queries to Azure for resolution. See [Name resolution using your own DNS server](#name-resolution-using-your-own-dns-server). |FQDN only |
+| Reverse DNS for internal IPs |[Name resolution using your own DNS server](#name-resolution-using-your-own-dns-server) |n/a |
+
+## Name resolution that Azure provides
+
+Along with resolution of public DNS names, Azure provides internal name resolution for virtual machines and role instances that are in the same virtual network. In virtual networks that are based on Azure Resource Manager, the DNS suffix is consistent across the virtual network; the FQDN is not needed. DNS names can be assigned to both network interface cards (NICs) and virtual machines. Although the name resolution that Azure provides does not require any configuration, it is not the appropriate choice for all deployment scenarios, as shown in the preceding table.
+
+### Features and considerations
+
+**Features:**
+
+* No configuration is required to use name resolution that Azure provides.
+* The name resolution service that Azure provides is highly available. You don't need to create and manage clusters of your own DNS servers.
+* The name resolution service that Azure provides can be used along with your own DNS servers to resolve both on-premises and Azure hostnames.
+* Name resolution is provided between virtual machines in virtual networks without need for the FQDN.
+* You can use hostnames that best describe your deployments rather than working with auto-generated names.
+
+**Considerations:**
+
+* The DNS suffix that Azure creates cannot be modified.
+* You cannot manually register your own records.
+* WINS and NetBIOS are not supported.
+* Hostnames must be DNS-compatible.
+ Names must use only 0-9, a-z, and '-', and they cannot start or end with a '-'. See RFC 3696 Section 2.
+* DNS query traffic is throttled for each virtual machine. Throttling shouldn't impact most applications. If request throttling is observed, ensure that client-side caching is enabled. For more information, see [Getting the most from name resolution that Azure provides](#getting-the-most-from-name-resolution-that-azure-provides).
+
+### Getting the most from name resolution that Azure provides
+
+**Client-side caching:**
+
+Some DNS queries are not sent across the network. Client-side caching helps reduce latency and improve resilience to network inconsistencies by resolving recurring DNS queries from a local cache. DNS records contain a Time-To-Live (TTL), which enables the cache to store the record for as long as possible without impacting record freshness. As a result, client-side caching is suitable for most situations.
+
+Some Linux distributions do not include caching by default. We recommend that you add a cache to each Linux virtual machine after you check that there isn't a local cache already.
+
+Several different DNS caching packages, such as dnsmasq, are available. Here are the steps to install dnsmasq on the most common distributions:
+
+# [Ubuntu](#tab/ubuntu)
+
+1. Install the dnsmasq package:
+
+```bash
+sudo apt-get install dnsmasq
+```
+
+2. Enable the dnsmasq service:
+
+```bash
+sudo systemctl enable dnsmasq.service
+```
+
+3. Start the dnsmasq service:
+
+```bash
+sudo systemctl start dnsmasq.service
+```
+
+# [SUSE](#tab/sles)
+
+1. Install the dnsmasq package:
+
+```bash
+sudo zypper install dnsmasq
+```
+
+2. Enable the dnsmasq service:
+
+```bash
+sudo systemctl enable dnsmasq.service
+```
+
+3. Start the dnsmasq service:
+
+```bash
+sudo systemctl start dnsmasq.service
+```
+
+4. Edit `/etc/sysconfig/network/config` file using a text editor, and change `NETCONFIG_DNS_FORWARDER=""` to `dnsmasq`.
+5. Update `/etc/resolv.conf` to set the cache as the local DNS resolver.
+
+```bash
+sudo netconfig update
+```
+
+# [CentOS/RHEL](#tab/rhel)
+
+1. Install the dnsmasq package:
+
+```bash
+sudo yum install dnsmasq -y
+```
+
+2. Enable the dnsmasq service:
+
+```bash
+sudo systemctl enable dnsmasq.service
+```
+
+3. Start the dnsmasq service:
+
+```bash
+sudo systemctl start dnsmasq.service
+```
+
+4. Add `prepend domain-name-servers 127.0.0.1;` to `/etc/dhcp/dhclient.conf`.
+
+```bash
+sudo echo "prepend domain-name-servers 127.0.0.1;" >> /etc/dhcp/dhclient.conf
+```
+
+5. Restart the network service to set the cache as the local DNS resolver:
+
+```bash
+sudo systemctl restart NetworkManager
+```
+
+> [!NOTE]
+> The `dnsmasq` package is only one of the many DNS caches that are available for Linux. Before you use it, check its suitability for your needs and that no other cache is installed.
+---
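+
+Whichever distribution you use, a quick way to confirm the local cache is answering is to query it directly; the host name is just an example. Run the lookup twice: the second query time should drop sharply because the answer is served from the cache.
+
+```bash
+dig @127.0.0.1 www.bing.com | grep -E 'Query time|SERVER'
+```
+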
+**Client-side retries**
+
+DNS is primarily a UDP protocol. Because the UDP protocol doesn't guarantee message delivery, the DNS protocol itself handles retry logic. Each DNS client (operating system) can exhibit different retry logic depending on the creator's preference:
+
+* Windows operating systems retry after one second and then again after another two, four, and another four seconds.
+* The default Linux setup retries after five seconds. You should change this to retry five times at one-second intervals.
+
+To check the current settings on a Linux virtual machine, run `cat /etc/resolv.conf` and look at the `options` line, for example:
+
+```bash
+sudo cat /etc/resolv.conf
+```
+
+```config-conf
+options timeout:1 attempts:5
+```
+
+The `/etc/resolv.conf` file is auto-generated and should not be edited. The specific steps that add the `options` line vary by distribution:
+
+**Ubuntu** (uses resolvconf)
+
+1. Add the options line to `/etc/resolvconf/resolv.conf.d/head` file.
+2. Run `sudo resolvconf -u` to update.
+
+**SUSE** (uses netconf)
+
+1. Add `timeout:1 attempts:5` to the `NETCONFIG_DNS_RESOLVER_OPTIONS=""` parameter in `/etc/sysconfig/network/config`.
+2. Run `sudo netconfig update` to update.
+
+**CentOS by Rogue Wave Software (formerly OpenLogic)** (uses NetworkManager)
+
+1. Add `RES_OPTIONS="timeout:1 attempts:5"` to `/etc/sysconfig/network`.
+2. Run `systemctl restart NetworkManager` to update.
+
+## Name resolution using your own DNS server
+
+Your name resolution needs may go beyond the features that Azure provides. For example, you might require DNS resolution between virtual networks. To cover this scenario, you can use your own DNS servers.
+
+DNS servers within a virtual network can forward DNS queries to recursive resolvers of Azure to resolve hostnames that are in the same virtual network. For example, a DNS server that runs in Azure can respond to DNS queries for its own DNS zone files and forward all other queries to Azure. This functionality enables virtual machines to see both your entries in your zone files and hostnames that Azure provides (via the forwarder). Access to the recursive resolvers of Azure is provided via the virtual IP 168.63.129.16.
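+
+As a sketch, if dnsmasq happens to be the DNS server you run inside the virtual network and your build reads drop-in files from */etc/dnsmasq.d*, forwarding otherwise-unmatched queries to Azure's recursive resolver looks like this (the file name is arbitrary):
+
+```bash
+echo 'server=168.63.129.16' | sudo tee /etc/dnsmasq.d/azure-forwarder.conf
+sudo systemctl restart dnsmasq.service
+```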
+
+DNS forwarding also enables DNS resolution between virtual networks and enables your on-premises machines to resolve hostnames that Azure provides. To resolve a virtual machine's hostname, the DNS server virtual machine must reside in the same virtual network and be configured to forward hostname queries to Azure. Because the DNS suffix is different in each virtual network, you can use conditional forwarding rules to send DNS queries to the correct virtual network for resolution. The following image shows two virtual networks and an on-premises network doing DNS resolution between virtual networks by using this method:
+
+![DNS resolution between virtual networks](./media/azure-dns/inter-vnet-dns.png)
+
+When you use name resolution that Azure provides, the internal DNS suffix is provided to each virtual machine by using DHCP. When you use your own name resolution solution, this suffix is not supplied to virtual machines because the suffix interferes with other DNS architectures. To refer to machines by FQDN or to configure the suffix on your virtual machines, you can use PowerShell or the API to determine the suffix:
+
+* For virtual networks that are managed by Azure Resource Manager, the suffix is available via the [network interface card](/rest/api/virtualnetwork/networkinterfaces) resource. You can also run the `azure network public-ip show <resource group> <pip name>` command to display the details of your public IP, which includes the FQDN of the NIC.
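+
+With the current Azure CLI, a roughly equivalent sketch (the resource names are hypothetical, and `dnsSettings.fqdn` is only populated when a DNS name label is configured on the public IP):
+
+```bash
+az network public-ip show \
+  --resource-group myResourceGroup \
+  --name myPublicIP \
+  --query dnsSettings.fqdn \
+  --output tsv
+```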
+
+If forwarding queries to Azure doesn't suit your needs, you need to provide your own DNS solution. Your DNS solution needs to:
+
+* Provide appropriate hostname resolution, for example via [DDNS](../../virtual-network/virtual-networks-name-resolution-ddns.md). If you use DDNS, you might need to disable DNS record scavenging. DHCP leases of Azure are very long and scavenging may remove DNS records prematurely.
+* Provide appropriate recursive resolution to allow resolution of external domain names.
+* Be accessible (TCP and UDP on port 53) from the clients it serves and be able to access the Internet.
+* Be secured against access from the Internet to mitigate threats posed by external agents.
+
+> [!NOTE]
+> For best performance, when you use Azure virtual machines as DNS servers, disable IPv6 and assign an [Instance-Level Public IP](/previous-versions/azure/virtual-network/virtual-networks-instance-level-public-ip) to each DNS server virtual machine.
+>
+>
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-ps-findimage.md
- Title: Find and use marketplace purchase plan information using the CLI
-description: Learn how to use the Azure CLI to find image URNs and purchase plan parameters, like the publisher, offer, SKU, and version, for Marketplace VM images.
- Previously updated : 02/09/2023
-# Find Azure Marketplace image information using the Azure CLI
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
-
-This topic describes how to use the Azure CLI to find VM images in the Azure Marketplace. Use this information to specify a Marketplace image when you create a VM programmatically with the CLI, Resource Manager templates, or other tools.
-
-You can also browse available images and offers using the [Azure Marketplace](https://azuremarketplace.microsoft.com/) or [Azure PowerShell](../windows/cli-ps-findimage.md).
-
-## Terminology
-
-A Marketplace image in Azure has the following attributes:
-
-* **Publisher**: The organization that created the image. Examples: Canonical, RedHat, SUSE.
-* **Offer**: The name of a group of related images created by a publisher. Examples: 0001-com-ubuntu-server-jammy, RHEL, sles-15-sp3.
-* **SKU**: An instance of an offer, such as a major release of a distribution. Examples: 22_04-lts-gen2, 8-lvm-gen2, gen2.
-* **Version**: The version number of an image SKU.
-
-These values can be passed individually or as an image *URN*, combining the values separated by the colon (:). For example: *Publisher*:*Offer*:*Sku*:*Version*. You can replace the version number in the URN with `latest` to use the latest version of the image.
-
-If the image publisher provides extra license and purchase terms, then you must accept those terms before you can use the image. For more information, see [Check the purchase plan information](#check-the-purchase-plan-information).
---
-## List popular images
-
-You can run [az vm image list --all](/cli/azure/vm/image) to see all of the images available to you, but it can take several minutes to produce the entire list. A faster option is to use `az vm image list`, without the `--all` option, to see a list of popular VM images in the Azure Marketplace. For example, run the following command to display a cached list of popular images in table format:
-
-```azurecli
-az vm image list --output table
-```
-
-The output includes the image URN. If you omit the `--all` option, you can see the *UrnAlias* for each image, if available. *UrnAlias* is a shortened version created for popular images like *Ubuntu2204*.
-The Linux image alias names and their details output by this command are:
-
-```output
-Architecture Offer Publisher Sku Urn UrnAlias Version
- - - --
-x64 CentOS OpenLogic 8_5-gen2 OpenLogic:CentOS:8_5-gen2:latest CentOS85Gen2 latest
-x64 Debian11 Debian 11-backports-gen2 Debian:debian-11:11-backports-gen2:latest Debian-11 latest
-x64 flatcar-container-linux-free kinvolk stable-gen2 kinvolk:flatcar-container-linux-free:stable-gen2:latest FlatcarLinuxFreeGen2 latest
-x64 opensuse-leap-15-4 SUSE gen2 SUSE:opensuse-leap-15-4:gen2:latest OpenSuseLeap154Gen2 latest
-x64 RHEL RedHat 8-lvm-gen2 RedHat:RHEL:8-lvm-gen2:latest RHELRaw8LVMGen2 latest
-x64 sles-15-sp3 SUSE gen2 SUSE:sles-15-sp3:gen2:latest SLES latest
-x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts-gen2 Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest Ubuntu2204 latest
-```
-
-The Windows image alias names and their details output by this command are:
-
-```output
-Architecture Offer Publisher Sku Urn Alias Version
- - - --
-x64 WindowsServer MicrosoftWindowsServer 2022-Datacenter MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest Win2022Datacenter latest
-x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition-core MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest Win2022AzureEditionCore latest
-x64 WindowsServer MicrosoftWindowsServer 2019-Datacenter MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest Win2019Datacenter latest
-x64 WindowsServer MicrosoftWindowsServer 2016-Datacenter MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest Win2016Datacenter latest
-x64 WindowsServer MicrosoftWindowsServer 2012-R2-Datacenter MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest Win2012R2Datacenter latest
-x64 WindowsServer MicrosoftWindowsServer 2012-Datacenter MicrosoftWindowsServer:WindowsServer:2012-Datacenter:latest Win2012Datacenter latest
-```
--
-## Find specific images
-
-You can filter the list of images by `--publisher` or another parameter to limit the results.
-
-For example, the following command displays all Debian offers:
-
-```azurecli-interactive
-az vm image list --offer Debian --all --output table
-```
-
-You can limit your results to a single architecture by adding the `--architecture` parameter. For example, to display all Arm64 images available from Canonical:
-
-```azurecli-interactive
-az vm image list --architecture Arm64 --publisher Canonical --all --output table
-```
-
-## Look at all available images
-
-Another way to find an image in a location is to run the [az vm image list-publishers](/cli/azure/vm/image), [az vm image list-offers](/cli/azure/vm/image), and [az vm image list-skus](/cli/azure/vm/image) commands in sequence. With these commands, you determine these values:
-
-1. List the image publishers for a location. In this example, we're looking at the *West US* region.
-
- ```azurecli-interactive
- az vm image list-publishers --location westus --output table
- ```
-
-1. For a given publisher, list their offers. In this example, we add *RedHat* as the publisher.
-
- ```azurecli-interactive
- az vm image list-offers --location westus --publisher RedHat --output table
- ```
-
-1. For a given offer, list their SKUs. In this example, we add *RHEL* as the offer.
- ```azurecli-interactive
- az vm image list-skus --location westus --publisher RedHat --offer RHEL --output table
- ```
-
-> [!NOTE]
-> Canonical has changed the **Offer** names they use for the most recent versions. Before Ubuntu 20.04, the **Offer** name is UbuntuServer. For Ubuntu 20.04 the **Offer** name is `0001-com-ubuntu-server-focal` and for Ubuntu 22.04 it's `0001-com-ubuntu-server-jammy`.
--
-1. For a given publisher, offer, and SKU, show all of the versions of the image. In this example, we add *9_1* as the SKU.
-
- ```azurecli-interactive
- az vm image list \
- --location westus \
- --publisher RedHat \
- --offer RHEL \
- --sku 9_1 \
- --all --output table
- ```
-
-Pass the value from the URN column to the `--image` parameter when you create a VM with the [az vm create](/cli/azure/vm) command. You can also replace the version number in the URN with `latest` to use the latest version of the image.
-
-If you deploy a VM with a Resource Manager template, you set the image parameters individually in the `imageReference` properties. See the [template reference](/azure/templates/microsoft.compute/virtualmachines).
--
-## Check the purchase plan information
-
-Some VM images in the Azure Marketplace have extra license and purchase terms that you must accept before you can deploy them programmatically.
-
-To deploy a VM from such an image, you'll need to accept the image's terms the first time you use it, once per subscription. You'll also need to specify *purchase plan* parameters to deploy a VM from that image.
-
-To view an image's purchase plan information, run the [az vm image show](/cli/azure/image) command with the URN of the image. If the `plan` property in the output isn't `null`, the image has terms you need to accept before programmatic deployment.
-
-For example, the Canonical Ubuntu Server 18.04 LTS image doesn't have extra terms, because the `plan` information is `null`:
-
-```azurecli-interactive
-az vm image show --location westus --urn Canonical:UbuntuServer:18.04-LTS:latest
-```
-
-Output:
-
-```output
-{
- "dataDiskImages": [],
- "id": "/Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/Canonical/ArtifactTypes/VMImage/Offers/UbuntuServer/Skus/18.04-LTS/Versions/18.04.201901220",
- "location": "westus",
- "name": "18.04.201901220",
- "osDiskImage": {
- "operatingSystem": "Linux"
- },
- "plan": null,
- "tags": null
-}
-```
-
-Running a similar command for the RabbitMQ Certified by Bitnami image shows the following `plan` properties: `name`, `product`, and `publisher`. (Some images also have a `promotion code` property.)
-
-```azurecli-interactive
-az vm image show --location westus --urn bitnami:rabbitmq:rabbitmq:latest
-```
-Output:
-
-```output
-{
- "dataDiskImages": [],
- "id": "/Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/bitnami/ArtifactTypes/VMImage/Offers/rabbitmq/Skus/rabbitmq/Versions/3.7.1901151016",
- "location": "westus",
- "name": "3.7.1901151016",
- "osDiskImage": {
- "operatingSystem": "Linux"
- },
- "plan": {
- "name": "rabbitmq",
- "product": "rabbitmq",
- "publisher": "bitnami"
- },
- "tags": null
-}
-```
-
-To deploy this image, you need to accept the terms and provide the purchase plan parameters when you deploy a VM using that image.
-
-## Accept the terms
-
-To view and accept the license terms, use the [az vm image terms](/cli/azure/vm/image/terms) command. When you accept the terms, you enable programmatic deployment in your subscription. You only need to accept terms once per subscription for the image. For example:
-
-```azurecli-interactive
-az vm image terms show --urn bitnami:rabbitmq:rabbitmq:latest
-```
-
-The output includes a `licenseTextLink` to the license terms, and indicates that the value of `accepted` is `true`:
-
-```output
-{
- "accepted": true,
- "additionalProperties": {},
- "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.MarketplaceOrdering/offertypes/bitnami/offers/rabbitmq/plans/rabbitmq",
- "licenseTextLink": "https://storelegalterms.blob.core.windows.net/legalterms/3E5ED_legalterms_BITNAMI%253a24RABBITMQ%253a24RABBITMQ%253a24IGRT7HHPIFOBV3IQYJHEN2O2FGUVXXZ3WUYIMEIVF3KCUNJ7GTVXNNM23I567GBMNDWRFOY4WXJPN5PUYXNKB2QLAKCHP4IE5GO3B2I.txt",
- "name": "rabbitmq",
- "plan": "rabbitmq",
- "privacyPolicyLink": "https://bitnami.com/privacy",
- "product": "rabbitmq",
- "publisher": "bitnami",
- "retrieveDatetime": "2019-01-25T20:37:49.937096Z",
- "signature": "XXXXXXLAZIK7ZL2YRV5JYQXONPV76NQJW3FKMKDZYCRGXZYVDGX6BVY45JO3BXVMNA2COBOEYG2NO76ONORU7ITTRHGZDYNJNXXXXXX",
- "type": "Microsoft.MarketplaceOrdering/offertypes"
-}
-```
-
-To accept the terms, type:
-
-```azurecli-interactive
-az vm image terms accept --urn bitnami:rabbitmq:rabbitmq:latest
-```
-
-## Deploy a new VM using the image parameters
-
-With information about the image, you can deploy it using the `az vm create` command.
-
-To deploy an image that doesn't have plan information, like the latest Ubuntu Server 18.04 image from Canonical, pass the URN for `--image`:
-
-```azurecli-interactive
-az group create --name myURNVM --location westus
-az vm create \
- --resource-group myURNVM \
- --name myVM \
- --admin-username azureuser \
- --generate-ssh-keys \
- --image Canonical:UbuntuServer:18.04-LTS:latest
-```
--
-For an image with purchase plan parameters, like the RabbitMQ Certified by Bitnami image, you pass the URN for `--image` and also provide the purchase plan parameters:
-
-```azurecli-interactive
-az group create --name myPurchasePlanRG --location westus
-
-az vm create \
- --resource-group myPurchasePlanRG \
- --name myVM \
- --admin-username azureuser \
- --generate-ssh-keys \
- --image bitnami:rabbitmq:rabbitmq:latest \
- --plan-name rabbitmq \
- --plan-product rabbitmq \
- --plan-publisher bitnami
-```
-
-If you get a message about accepting the terms of the image, review the [Accept the terms](#accept-the-terms) section. Make sure the output of `az vm image terms accept` returns the value `"accepted": true,` showing that you've accepted the terms of the image.
--
-## Using an existing VHD with purchase plan information
-
-If you have an existing VHD from a VM that was created using a paid Azure Marketplace image, you might need to give the purchase plan information when creating a new VM from that VHD.
-
-If you still have the original VM, or another VM created using the same marketplace image, you can get the plan name, publisher, and product information from it using [az vm get-instance-view](/cli/azure/vm#az-vm-get-instance-view). This example gets a VM named *myVM* in the *myResourceGroup* resource group and then displays the purchase plan information.
-
-```azurecli-interactive
-az vm get-instance-view -g myResourceGroup -n myVM --query plan
-```
-
-If you didn't get the plan information before the original VM was deleted, you can file a [support request](https://portal.azure.com/#create/Microsoft.Support). They'll need the VM name, subscription ID and the time stamp of the delete operation.
-
-Once you have the plan information, you can create the new VM using the `--attach-os-disk` parameter to specify the VHD.
-
-```azurecli-interactive
-az vm create \
- --resource-group myResourceGroup \
- --name myNewVM \
- --nics myNic \
- --size Standard_DS1_v2 --os-type Linux \
- --attach-os-disk myVHD \
- --plan-name planName \
- --plan-publisher planPublisher \
- --plan-product planProduct
-```
--
-## Next steps
-To create a virtual machine quickly by using the image information, see [Create and Manage Linux VMs with the Azure CLI](tutorial-manage-vm.md).
+
+ Title: Find and use marketplace purchase plan information using the CLI
+description: Learn how to use the Azure CLI to find image URNs and purchase plan parameters, like the publisher, offer, SKU, and version, for Marketplace VM images.
+++ Last updated : 02/09/2023+++++
+# Find Azure Marketplace image information using the Azure CLI
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+
+This article describes how to use the Azure CLI to find VM images in the Azure Marketplace. Use this information to specify a Marketplace image when you create a VM programmatically with the CLI, Resource Manager templates, or other tools.
+
+You can also browse available images and offers using the [Azure Marketplace](https://azuremarketplace.microsoft.com/) or [Azure PowerShell](../windows/cli-ps-findimage.md).
+
+## Terminology
+
+A Marketplace image in Azure has the following attributes:
+
+* **Publisher**: The organization that created the image. Examples: Canonical, RedHat, SUSE.
+* **Offer**: The name of a group of related images created by a publisher. Examples: 0001-com-ubuntu-server-jammy, RHEL, sles-15-sp3.
+* **SKU**: An instance of an offer, such as a major release of a distribution. Examples: 22_04-lts-gen2, 8-lvm-gen2, gen2.
+* **Version**: The version number of an image SKU.
+
+These values can be passed individually or as an image *URN*, combining the values separated by the colon (:). For example: *Publisher*:*Offer*:*Sku*:*Version*. You can replace the version number in the URN with `latest` to use the latest version of the image.
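+
+For example, the following sketch refers to an Ubuntu 22.04 LTS Gen2 image by its URN; the region is illustrative:
+
+```azurecli-interactive
+az vm image show --location westus --urn Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest
+```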
+
+If the image publisher provides extra license and purchase terms, then you must accept those terms before you can use the image. For more information, see [Check the purchase plan information](#check-the-purchase-plan-information).
+++
+## List popular images
+
+You can run the [az vm image list --all](/cli/azure/vm/image) command to see all of the images available to you, but it can take several minutes to produce the entire list. A faster option is to use `az vm image list` without the `--all` option to see a list of popular VM images in the Azure Marketplace. For example, run the following command to display a cached list of popular images in table format:
+
+```azurecli
+az vm image list --output table
+```
+
+The output includes the image URN. If you omit the `--all` option, the output also shows the *UrnAlias* for each image, if one is available. *UrnAlias* is a shortened name created for popular images, such as *Ubuntu2204*.
+The Linux image alias names and their details output by this command are:
+
+```output
+Architecture Offer Publisher Sku Urn UrnAlias Version
+-- - - - --
+x64 CentOS OpenLogic 8_5-gen2 OpenLogic:CentOS:8_5-gen2:latest CentOS85Gen2 latest
+x64 Debian11 Debian 11-backports-gen2 Debian:debian-11:11-backports-gen2:latest Debian-11 latest
+x64 flatcar-container-linux-free kinvolk stable-gen2 kinvolk:flatcar-container-linux-free:stable-gen2:latest FlatcarLinuxFreeGen2 latest
+x64 opensuse-leap-15-4 SUSE gen2 SUSE:opensuse-leap-15-4:gen2:latest OpenSuseLeap154Gen2 latest
+x64 RHEL RedHat 8-lvm-gen2 RedHat:RHEL:8-lvm-gen2:latest RHELRaw8LVMGen2 latest
+x64 sles-15-sp3 SUSE gen2 SUSE:sles-15-sp3:gen2:latest SLES latest
+x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts-gen2 Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest Ubuntu2204 latest
+```
+
+The Windows image alias names and their details outputted by this command are:
+
+```output
+Architecture Offer Publisher Sku Urn Alias Version
+-- - - - --
+x64 WindowsServer MicrosoftWindowsServer 2022-Datacenter MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest Win2022Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition-core MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest Win2022AzureEditionCore latest
+x64 WindowsServer MicrosoftWindowsServer 2019-Datacenter MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest Win2019Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2016-Datacenter MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest Win2016Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2012-R2-Datacenter MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest Win2012R2Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2012-Datacenter MicrosoftWindowsServer:WindowsServer:2012-Datacenter:latest Win2012Datacenter latest
+```
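+
+If an alias exists, you can pass it to `--image` in place of the full URN. A minimal sketch, assuming a resource group named *myResourceGroup* already exists:
+
+```azurecli-interactive
+az vm create \
+ --resource-group myResourceGroup \
+ --name myAliasVM \
+ --image Ubuntu2204 \
+ --admin-username azureuser \
+ --generate-ssh-keys
+```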
++
+## Find specific images
+
+You can filter the list of images by `--publisher` or another parameter to limit the results.
+
+For example, the following command displays all Debian offers:
+
+```azurecli-interactive
+az vm image list --offer Debian --all --output table
+```
+
+You can limit your results to a single architecture by adding the `--architecture` parameter. For example, to display all Arm64 images available from Canonical:
+
+```azurecli-interactive
+az vm image list --architecture Arm64 --publisher Canonical --all --output table
+```
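+
+If you only need the URNs from such a query, you can add a JMESPath filter; a sketch (the publisher and offer values are illustrative):
+
+```azurecli-interactive
+az vm image list \
+ --publisher Canonical \
+ --offer 0001-com-ubuntu-server-jammy \
+ --all \
+ --query "[].urn" \
+ --output tsv
+```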
+
+## Look at all available images
+
+Another way to find an image in a location is to run the [az vm image list-publishers](/cli/azure/vm/image), [az vm image list-offers](/cli/azure/vm/image), and [az vm image list-skus](/cli/azure/vm/image) commands in sequence. With these commands, you determine these values:
+
+1. List the image publishers for a location. In this example, we're looking at the *West US* region.
+
+ ```azurecli-interactive
+ az vm image list-publishers --location westus --output table
+ ```
+
+1. For a given publisher, list their offers. In this example, we add *RedHat* as the publisher.
+
+ ```azurecli-interactive
+ az vm image list-offers --location westus --publisher RedHat --output table
+ ```
+
+1. For a given offer, list their SKUs. In this example, we add *RHEL* as the offer.
+ ```azurecli-interactive
+ az vm image list-skus --location westus --publisher RedHat --offer RHEL --output table
+ ```
+
+> [!NOTE]
+> Canonical has changed the **Offer** names they use for the most recent versions. Before Ubuntu 20.04, the **Offer** name is UbuntuServer. For Ubuntu 20.04 the **Offer** name is `0001-com-ubuntu-server-focal` and for Ubuntu 22.04 it's `0001-com-ubuntu-server-jammy`.
++
+1. For a given publisher, offer, and SKU, show all of the versions of the image. In this example, we add *9_1* as the SKU.
+
+ ```azurecli-interactive
+ az vm image list \
+ --location westus \
+ --publisher RedHat \
+ --offer RHEL \
+ --sku 9_1 \
+ --all --output table
+ ```
+
+Pass the value from the URN column to the `--image` parameter when you create a VM with the [az vm create](/cli/azure/vm) command. You can also replace the version number in the URN with `latest` to use the latest version of the image.
+
+If you deploy a VM with a Resource Manager template, you set the image parameters individually in the `imageReference` properties. See the [template reference](/azure/templates/microsoft.compute/virtualmachines).
++
+## Check the purchase plan information
+
+Some VM images in the Azure Marketplace have extra license and purchase terms that you must accept before you can deploy them programmatically.
+
+To deploy a VM from such an image, you'll need to accept the image's terms the first time you use it, once per subscription. You'll also need to specify *purchase plan* parameters to deploy a VM from that image.
+
+To view an image's purchase plan information, run the [az vm image show](/cli/azure/image) command with the URN of the image. If the `plan` property in the output isn't `null`, the image has terms you need to accept before programmatic deployment.
+
+For example, the Canonical Ubuntu Server 18.04 LTS image doesn't have extra terms, because the `plan` information is `null`:
+
+```azurecli-interactive
+az vm image show --location westus --urn Canonical:UbuntuServer:18.04-LTS:latest
+```
+
+Output:
+
+```output
+{
+ "dataDiskImages": [],
+ "id": "/Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/Canonical/ArtifactTypes/VMImage/Offers/UbuntuServer/Skus/18.04-LTS/Versions/18.04.201901220",
+ "location": "westus",
+ "name": "18.04.201901220",
+ "osDiskImage": {
+ "operatingSystem": "Linux"
+ },
+ "plan": null,
+ "tags": null
+}
+```
+
+Running a similar command for the RabbitMQ Certified by Bitnami image shows the following `plan` properties: `name`, `product`, and `publisher`. (Some images also have a `promotion code` property.)
+
+```azurecli-interactive
+az vm image show --location westus --urn bitnami:rabbitmq:rabbitmq:latest
+```
+Output:
+
+```output
+{
+ "dataDiskImages": [],
+ "id": "/Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/bitnami/ArtifactTypes/VMImage/Offers/rabbitmq/Skus/rabbitmq/Versions/3.7.1901151016",
+ "location": "westus",
+ "name": "3.7.1901151016",
+ "osDiskImage": {
+ "operatingSystem": "Linux"
+ },
+ "plan": {
+ "name": "rabbitmq",
+ "product": "rabbitmq",
+ "publisher": "bitnami"
+ },
+ "tags": null
+}
+```
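+
+To check just the `plan` property without the rest of the output, a `--query` filter can be used; a sketch:
+
+```azurecli-interactive
+az vm image show --location westus --urn bitnami:rabbitmq:rabbitmq:latest --query plan
+```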
+
+To deploy this image, you need to accept the terms and provide the purchase plan parameters when you deploy a VM using that image.
+
+## Accept the terms
+
+To view and accept the license terms, use the [az vm image terms](/cli/azure/vm/image/terms) command. When you accept the terms, you enable programmatic deployment in your subscription. You only need to accept terms once per subscription for the image. For example:
+
+```azurecli-interactive
+az vm image terms show --urn bitnami:rabbitmq:rabbitmq:latest
+```
+
+The output includes a `licenseTextLink` to the license terms, and indicates that the value of `accepted` is `true`:
+
+```output
+{
+ "accepted": true,
+ "additionalProperties": {},
+ "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.MarketplaceOrdering/offertypes/bitnami/offers/rabbitmq/plans/rabbitmq",
+ "licenseTextLink": "https://storelegalterms.blob.core.windows.net/legalterms/3E5ED_legalterms_BITNAMI%253a24RABBITMQ%253a24RABBITMQ%253a24IGRT7HHPIFOBV3IQYJHEN2O2FGUVXXZ3WUYIMEIVF3KCUNJ7GTVXNNM23I567GBMNDWRFOY4WXJPN5PUYXNKB2QLAKCHP4IE5GO3B2I.txt",
+ "name": "rabbitmq",
+ "plan": "rabbitmq",
+ "privacyPolicyLink": "https://bitnami.com/privacy",
+ "product": "rabbitmq",
+ "publisher": "bitnami",
+ "retrieveDatetime": "2019-01-25T20:37:49.937096Z",
+ "signature": "XXXXXXLAZIK7ZL2YRV5JYQXONPV76NQJW3FKMKDZYCRGXZYVDGX6BVY45JO3BXVMNA2COBOEYG2NO76ONORU7ITTRHGZDYNJNXXXXXX",
+ "type": "Microsoft.MarketplaceOrdering/offertypes"
+}
+```
+
+To accept the terms, type:
+
+```azurecli-interactive
+az vm image terms accept --urn bitnami:rabbitmq:rabbitmq:latest
+```
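+
+To confirm that the acceptance succeeded, you can check the `accepted` field again; a sketch:
+
+```azurecli-interactive
+az vm image terms show --urn bitnami:rabbitmq:rabbitmq:latest --query accepted --output tsv
+```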
+
+## Deploy a new VM using the image parameters
+
+With information about the image, you can deploy it using the `az vm create` command.
+
+To deploy an image that doesn't have plan information, like the latest Ubuntu Server 18.04 image from Canonical, pass the URN for `--image`:
+
+```azurecli-interactive
+az group create --name myURNVM --location westus
+az vm create \
+ --resource-group myURNVM \
+ --name myVM \
+ --admin-username azureuser \
+ --generate-ssh-keys \
+ --image Canonical:UbuntuServer:18.04-LTS:latest
+```
++
+For an image with purchase plan parameters, like the RabbitMQ Certified by Bitnami image, you pass the URN for `--image` and also provide the purchase plan parameters:
+
+```azurecli-interactive
+az group create --name myPurchasePlanRG --location westus
+
+az vm create \
+ --resource-group myPurchasePlanRG \
+ --name myVM \
+ --admin-username azureuser \
+ --generate-ssh-keys \
+ --image bitnami:rabbitmq:rabbitmq:latest \
+ --plan-name rabbitmq \
+ --plan-product rabbitmq \
+ --plan-publisher bitnami
+```
+
+If you get a message about accepting the terms of the image, review the [Accept the terms](#accept-the-terms) section. Make sure the output of `az vm image terms accept` returns the value `"accepted": true,` showing that you've accepted the terms of the image.
++
+## Using an existing VHD with purchase plan information
+
+If you have an existing VHD from a VM that was created using a paid Azure Marketplace image, you might need to give the purchase plan information when creating a new VM from that VHD.
+
+If you still have the original VM, or another VM created using the same marketplace image, you can get the plan name, publisher, and product information from it using [az vm get-instance-view](/cli/azure/vm#az-vm-get-instance-view). This example gets a VM named *myVM* in the *myResourceGroup* resource group and then displays the purchase plan information.
+
+```azurecli-interactive
+az vm get-instance-view -g myResourceGroup -n myVM --query plan
+```
+
+If you didn't get the plan information before the original VM was deleted, you can file a [support request](https://portal.azure.com/#create/Microsoft.Support). They'll need the VM name, subscription ID and the time stamp of the delete operation.
+
+Once you have the plan information, you can create the new VM using the `--attach-os-disk` parameter to specify the VHD.
+
+```azurecli-interactive
+az vm create \
+ --resource-group myResourceGroup \
+ --name myNewVM \
+ --nics myNic \
+ --size Standard_DS1_v2 --os-type Linux \
+ --attach-os-disk myVHD \
+ --plan-name planName \
+ --plan-publisher planPublisher \
+ --plan-product planProduct
+```
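+
+As a convenience, the plan values returned by `az vm get-instance-view` can be captured into shell variables and passed through; a minimal sketch, assuming the original VM still exists (all names are illustrative):
+
+```azurecli-interactive
+planName=$(az vm get-instance-view -g myResourceGroup -n myVM --query plan.name -o tsv)
+planPublisher=$(az vm get-instance-view -g myResourceGroup -n myVM --query plan.publisher -o tsv)
+planProduct=$(az vm get-instance-view -g myResourceGroup -n myVM --query plan.product -o tsv)
+
+az vm create \
+ --resource-group myResourceGroup \
+ --name myNewVM \
+ --nics myNic \
+ --size Standard_DS1_v2 --os-type Linux \
+ --attach-os-disk myVHD \
+ --plan-name "$planName" \
+ --plan-publisher "$planPublisher" \
+ --plan-product "$planProduct"
+```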
++
+## Next steps
+To create a virtual machine quickly by using the image information, see [Create and Manage Linux VMs with the Azure CLI](tutorial-manage-vm.md).
virtual-machines Cloud Init Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloud-init-troubleshooting.md
# Troubleshooting VM provisioning with cloud-init
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets If you have been creating generalized custom images and using cloud-init to do provisioning, but have found that the VM didn't provision correctly, you'll need to troubleshoot your custom images.
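+
+A common first step is to inspect the provisioning logs on the affected VM, for example over SSH or the serial console. A minimal sketch of the usual log locations (paths can vary by distribution):
+
+```bash
+# cloud-init's own logs
+sudo cat /var/log/cloud-init.log
+sudo cat /var/log/cloud-init-output.log
+
+# Azure Linux agent log
+sudo cat /var/log/waagent.log
+```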
virtual-machines Cloudinit Configure Swapfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-configure-swapfile.md
# Use cloud-init to configure a swap partition on a Linux VM
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets This article shows you how to use [cloud-init](https://cloudinit.readthedocs.io) to configure the swap partition on various Linux distributions. The swap partition was traditionally configured by the Linux Agent (WALA) based on which distributions required one. This document outlines the process for building the swap partition on demand during provisioning time using cloud-init. For more information about how cloud-init works natively in Azure and the supported Linux distros, see [cloud-init overview](using-cloud-init.md)
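+
+As an illustration of that pattern, the sketch below writes a cloud-config that partitions the ephemeral resource disk with a swap partition and passes it to `az vm create` through `--custom-data`; the layout values and resource names are illustrative, not the article's verbatim sample:
+
+```bash
+cat > cloud_init_swap.txt <<'EOF'
+#cloud-config
+disk_setup:
+  ephemeral0:
+    table_type: gpt
+    layout: [[100, 82]]
+    overwrite: true
+fs_setup:
+  - device: ephemeral0.1
+    filesystem: swap
+mounts:
+  - ["ephemeral0.1", "none", "swap", "sw", "0", "0"]
+EOF
+
+az vm create \
+ --resource-group mySwapRG \
+ --name mySwapVM \
+ --image Ubuntu2204 \
+ --admin-username azureuser \
+ --generate-ssh-keys \
+ --custom-data cloud_init_swap.txt
+```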
virtual-machines Cloudinit Update Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-update-vm.md
# Use cloud-init to update and install packages in a Linux VM in Azure
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets This article shows you how to use [cloud-init](https://cloudinit.readthedocs.io) to update packages on a Linux virtual machine (VM) or virtual machine scale sets at provisioning time in Azure. These cloud-init scripts run on first boot once the resources have been provisioned by Azure. For more information about how cloud-init works natively in Azure and the supported Linux distros, see [cloud-init overview](using-cloud-init.md)
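+
+A minimal sketch of that pattern, assuming the Azure CLI is available; the resource group, VM name, and file name are illustrative:
+
+```bash
+cat > cloud_init_upgrade.txt <<'EOF'
+#cloud-config
+package_upgrade: true
+EOF
+
+az vm create \
+ --resource-group myCloudInitRG \
+ --name myCloudInitVM \
+ --image Ubuntu2204 \
+ --admin-username azureuser \
+ --generate-ssh-keys \
+ --custom-data cloud_init_upgrade.txt
+```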
virtual-machines Create Upload Centos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-centos.md
- Title: Create and upload a CentOS-based Linux VHD
-description: Learn to create and upload an Azure virtual hard disk (VHD) that contains a CentOS-based Linux operating system.
----- Previously updated : 12/14/2022---
-# Prepare a CentOS-based virtual machine for Azure
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-Learn to create and upload an Azure virtual hard disk (VHD) that contains a CentOS-based Linux operating system.
-
-* [Prepare a CentOS 6.x virtual machine for Azure](#centos-6x)
-* [Prepare a CentOS 7.0+ virtual machine for Azure](#centos-70)
--
-## Prerequisites
-
-This article assumes that you've already installed a CentOS (or similar derivative) Linux operating system to a virtual hard disk. Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions, see [Install the Hyper-V Role and Configure a Virtual Machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
-
-**CentOS installation notes**
-
-* For more tips on preparing Linux for Azure, see [General Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes).
-* The VHDX format isn't supported in Azure, only **fixed VHD**. You can convert the disk to VHD format using Hyper-V Manager or the convert-vhd cmdlet. If you're using VirtualBox, this means selecting **Fixed size** as opposed to the default dynamically allocated when creating the disk.
-* The vfat kernel module must be enabled in the kernel
-* When installing the Linux system, we **recommend** that you use standard partitions rather than LVM (often the default for many installations). This avoids LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another identical VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks.
-* **Kernel support for mounting UDF file systems is necessary.** At first boot on Azure the provisioning configuration is passed to the Linux VM by using UDF-formatted media that is attached to the guest. The Azure Linux agent or cloud-init must mount the UDF file system to read its configuration and provision the VM.
-* Linux kernel versions below 2.6.37 don't support NUMA on Hyper-V with larger VM sizes. This issue primarily impacts older distributions using the upstream Centos 2.6.32 kernel and was fixed in Centos 6.6 (kernel-2.6.32-504). Systems running custom kernels older than 2.6.37 or RHEL-based kernels older than 2.6.32-504 must set the boot parameter `numa=off` on the kernel command-line in grub.conf. For more information, see Red Hat [KB 436883](https://access.redhat.com/solutions/436883).
-* Don't configure a swap partition on the OS disk.
-* All VHDs on Azure must have a virtual size aligned to 1 MB. When converting from a raw disk to VHD, you must ensure that the raw disk size is a multiple of 1 MB before conversion. See [Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more information.
-
-> [!NOTE]
-> **_Cloud-init >= 21.2 removes the udf requirement_**. However, without the udf module enabled, the cdrom won't mount during provisioning, preventing custom data from being applied. A workaround for this is to apply custom data using user data. However, unlike custom data, user data isn't encrypted. https://cloudinit.readthedocs.io/en/latest/topics/format.html
--
-## CentOS 6.x
-
-> [!IMPORTANT]
->Please note that CentOS 6 has reached its End Of Life (EOL) and is no longer supported by the CentOS community. This means that no further updates or security patches will be released for this version, leaving it vulnerable to potential security risks. We strongly recommend upgrading to a more recent version of CentOS to ensure the safety and stability of your system. Please consult with your IT department or system administrator for further assistance.
-
-1. In Hyper-V Manager, select the virtual machine.
-
-2. Click **Connect** to open a console window for the virtual machine.
-
-3. In CentOS 6, NetworkManager can interfere with the Azure Linux agent. Uninstall this package by running the following command:
-
- ```bash
- sudo rpm -e --nodeps NetworkManager
- ```
-
-4. Create or edit the file `/etc/sysconfig/network` and add the following text:
-
- ```config
- NETWORKING=yes
- HOSTNAME=localhost.localdomain
- ```
-
-5. Create or edit the file `/etc/sysconfig/network-scripts/ifcfg-eth0` and add the following text:
-
- ```config
- DEVICE=eth0
- ONBOOT=yes
- BOOTPROTO=dhcp
- TYPE=Ethernet
- USERCTL=no
- PEERDNS=yes
- IPV6INIT=no
- ```
-
-6. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
-
- ```bash
- sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
- sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
- ```
-
-7. Ensure the network service starts at boot time by running the following command:
-
- ```bash
- sudo chkconfig network on
- ```
-
-8. If you would like to use the OpenLogic mirrors that are hosted within the Azure datacenters, then replace the `/etc/yum.repos.d/CentOS-Base.repo` file with the following repositories. This will also add the **[openlogic]** repository that includes extra packages such as the Azure Linux agent:
-
- ```config
- [openlogic]
- name=CentOS-$releasever - openlogic packages for $basearch
- baseurl=http://olcentgbl.trafficmanager.net/openlogic/$releasever/openlogic/$basearch/
- enabled=1
- gpgcheck=0
-
- [base]
- name=CentOS-$releasever - Base
- #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
- baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/os/$basearch/
- gpgcheck=1
- gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
-
- #released updates
- [updates]
- name=CentOS-$releasever - Updates
- #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
- baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/updates/$basearch/
- gpgcheck=1
- gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
-
- #additional packages that may be useful
- [extras]
- name=CentOS-$releasever - Extras
- #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
- baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/extras/$basearch/
- gpgcheck=1
- gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
-
- #additional packages that extend functionality of existing packages
- [centosplus]
- name=CentOS-$releasever - Plus
- #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra
- baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/centosplus/$basearch/
- gpgcheck=1
- enabled=0
- gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
-
- #contrib - packages by Centos Users
- [contrib]
- name=CentOS-$releasever - Contrib
- #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib&infra=$infra
- baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/contrib/$basearch/
- gpgcheck=1
- enabled=0
- gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
- ```
-
- > [!Note]
- > The rest of this guide will assume you're using at least the `[openlogic]` repo, which will be used to install the Azure Linux agent below.
-
-9. Add the following line to /etc/yum.conf:
-
- ```config
- http_caching=packages
- ```
-
-10. Run the following command to clear the current yum metadata and update the system with the latest packages:
-
- ```bash
- sudo yum clean all
- ```
-
- Unless you're creating an image for an older version of CentOS, we recommend updating all the packages to the latest versions:
-
- ```bash
- sudo yum -y update
- ```
-
- A reboot may be required after running this command.
-
-11. (Optional) Install the drivers for the Linux Integration Services (LIS).
-
- > [!IMPORTANT]
- > The step is **required** for CentOS 6.3 and earlier, and optional for later releases.
-
- ```bash
- sudo rpm -e hypervkvpd ## (may return error if not installed, that's OK)
- sudo yum install microsoft-hyper-v
- ```
-
- Alternatively, you can follow the manual installation instructions on the [LIS download page](https://www.microsoft.com/download/details.aspx?id=55106) to install the RPM onto your VM.
-
-12. Install the Azure Linux Agent and dependencies. Start and enable waagent service:
-
- ```bash
- sudo yum install python-pyasn1 WALinuxAgent
- sudo service waagent start
- sudo chkconfig waagent on
- ```
--
- The WALinuxAgent package removes the NetworkManager and NetworkManager-gnome packages if they were not already removed as described in step 3.
-
-13. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open `/boot/grub/menu.lst` in a text editor and ensure that the default kernel includes the following parameters:
-
- ```config
- console=ttyS0 earlyprintk=ttyS0 rootdelay=300
- ```
-
- This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues.
-
- In addition to the above, we recommend that you *remove* the following parameters:
-
- ```config
- rhgb quiet crashkernel=auto
- ```
-
- Graphical and `quiet boot` aren't useful in a cloud environment where we want all the logs to be sent to the serial port. The `crashkernel` option may be left configured if desired, but note that this parameter will reduce the amount of available memory in the VM by 128 MB or more, which may be problematic on the smaller VM sizes.
-
- > [!Important]
- > CentOS 6.5 and earlier must also set the kernel parameter `numa=off`. See Red Hat [KB 436883](https://access.redhat.com/solutions/436883).
-
-14. Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
-
-15. Don't create swap space on the OS disk.
-
- The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a *temporary* disk and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in `/etc/waagent.conf` appropriately:
-
- ```config
- ResourceDisk.Format=y
- ResourceDisk.Filesystem=ext4
- ResourceDisk.MountPoint=/mnt/resource
- ResourceDisk.EnableSwap=y
- ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
- ```
-
-16. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
-
- ```bash
- sudo waagent -force -deprovision+user
- export HISTSIZE=0
- ```
-> [!NOTE]
-> If you are migrating a specific virtual machine and do not wish to create a generalized image, skip the deprovision step.
--
-17. Click **Action -> Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be [uploaded to Azure](./upload-vhd.md#option-1-upload-a-vhd).
--
-## CentOS 7.0+
-
-**Changes in CentOS 7 (and similar derivatives)**
-
-Preparing a CentOS 7 virtual machine for Azure is similar to CentOS 6, however there are several significant differences worth noting:
-
-* The NetworkManager package no longer conflicts with the Azure Linux agent. This package is installed by default, and we recommend that it's not removed.
-* GRUB2 is now used as the default bootloader, so the procedure for editing kernel parameters has changed (see below).
-* XFS is now the default file system. The ext4 file system can still be used if desired.
-* Since CentOS 8 Stream and newer no longer include `network.service` by default, you need to install it manually:
-
- ```bash
- sudo yum install network-scripts
- sudo systemctl enable network.service
- ```
-
-**Configuration Steps**
-
-1. In Hyper-V Manager, select the virtual machine.
-
-2. Click **Connect** to open a console window for the virtual machine.
-
-3. Create or edit the file `/etc/sysconfig/network` and add the following text:
-
- ```config
- NETWORKING=yes
- HOSTNAME=localhost.localdomain
- ```
-
-4. Create or edit the file `/etc/sysconfig/network-scripts/ifcfg-eth0` and add the following text:
-
- ```config
- DEVICE=eth0
- ONBOOT=yes
- BOOTPROTO=dhcp
- TYPE=Ethernet
- USERCTL=no
- PEERDNS=yes
- IPV6INIT=no
- NM_CONTROLLED=no
- ```
-
-5. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
-
- ```bash
- sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
- ```
-
-6. If you would like to use the OpenLogic mirrors that are hosted within the Azure datacenters, then replace the `/etc/yum.repos.d/CentOS-Base.repo` file with the following repositories. This will also add the **[openlogic]** repository that includes packages for the Azure Linux agent:
-
- ```confg
- [openlogic]
- name=CentOS-$releasever - openlogic packages for $basearch
- baseurl=http://olcentgbl.trafficmanager.net/openlogic/$releasever/openlogic/$basearch/
- enabled=1
- gpgcheck=0
-
- [base]
- name=CentOS-$releasever - Base
- #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
- baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/os/$basearch/
- gpgcheck=1
- gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
-
- #released updates
- [updates]
- name=CentOS-$releasever - Updates
- #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
- baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/updates/$basearch/
- gpgcheck=1
- gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
-
- #additional packages that may be useful
- [extras]
- name=CentOS-$releasever - Extras
- #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
- baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/extras/$basearch/
- gpgcheck=1
- gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
-
- #additional packages that extend functionality of existing packages
- [centosplus]
- name=CentOS-$releasever - Plus
- #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra
- baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/centosplus/$basearch/
- gpgcheck=1
- enabled=0
- gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
- ```
-
- > [!Note]
- > The rest of this guide will assume you're using at least the `[openlogic]` repo, which will be used to install the Azure Linux agent below.
-
-7. Run the following command to clear the current yum metadata and install any updates:
-
- ```bash
- sudo yum clean all
- ```
-
- Unless you're creating an image for an older version of CentOS, we recommend updating all the packages to the latest versions:
--
- ```bash
- sudo yum -y update
- ```
-
- A reboot may be required after running this command.
-
-8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open `/etc/default/grub` in a text editor and edit the `GRUB_CMDLINE_LINUX` parameter, for example:
-
- ```config
- GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
- ```
-
- This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues. It also turns off the new CentOS 7 naming conventions for NICs. In addition to the above, we recommend that you *remove* the following parameters:
-
- ```config
- rhgb quiet crashkernel=auto
- ```
-
- Graphical and quiet boot isn't useful in a cloud environment where we want all the logs to be sent to the serial port. The `crashkernel` option may be left configured if desired, but note that this parameter will reduce the amount of available memory in the VM by 128 MB or more, which may be problematic on the smaller VM sizes.
-
-9. Once you're done editing `/etc/default/grub` per above, run the following command to rebuild the grub configuration:
-
- ```bash
- sudo grub2-mkconfig -o /boot/grub2/grub.cfg
- ```
-
-> [!NOTE]
-> If uploading a UEFI-enabled VM, the command to update grub is `grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg`. Also, the vfat kernel module must be enabled in the kernel; otherwise, provisioning will fail.
->
-> Make sure the **udf** module is enabled. Removing or disabling it will cause a provisioning/boot failure. **(_Cloud-init >= 21.2 removes the udf requirement. Read the top of this document for more detail._)**
--
-10. If building the image from **VMware, VirtualBox or KVM:** Ensure the Hyper-V drivers are included in the initramfs:
-
- Edit `/etc/dracut.conf`, add content:
-
- ```config
- add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
- ```
-
- Rebuild the initramfs:
-
- ```bash
- sudo dracut -f -v
- ```
-
-11. Install the Azure Linux Agent and dependencies for Azure VM Extensions:
-
- ```bash
- sudo yum install python-pyasn1 WALinuxAgent
- sudo systemctl enable waagent
- ```
-
-12. Install cloud-init to handle the provisioning
-
- ```bash
- sudo yum install -y cloud-init cloud-utils-growpart gdisk hyperv-daemons
- ```
- 1. Configure waagent for cloud-init
- ```bash
- sudo sed -i 's/Provisioning.Agent=auto/Provisioning.Agent=auto/g' /etc/waagent.conf
- sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
- sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
- ```
- ```bash
- sudo echo "Adding mounts and disk_setup to init stage"
- sudo sed -i '/ - mounts/d' /etc/cloud/cloud.cfg
- sudo sed -i '/ - disk_setup/d' /etc/cloud/cloud.cfg
- sudo sed -i '/cloud_init_modules/a\\ - mounts' /etc/cloud/cloud.cfg
- sudo sed -i '/cloud_init_modules/a\\ - disk_setup' /etc/cloud/cloud.cfg
- ```
- ```bash
- sudo echo "Allow only Azure datasource, disable fetching network setting via IMDS"
- sudo cat > /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg <<EOF
- datasource_list: [ Azure ]
- datasource:
- Azure:
- apply_network_config: False
- EOF
-
- if [[ -f /mnt/swapfile ]]; then
- echo Removing swapfile - RHEL uses a swapfile by default
- swapoff /mnt/swapfile
- rm /mnt/swapfile -f
- fi
-
- echo "Add console log file"
- cat >> /etc/cloud/cloud.cfg.d/05_logging.cfg <<EOF
-
- # This tells cloud-init to redirect its stdout and stderr to
- # 'tee -a /var/log/cloud-init-output.log' so the user can see output
- # there without needing to look on the console.
- output: {all: '| tee -a /var/log/cloud-init-output.log'}
- EOF
- ```
--
-13. Swap configuration
-
- Don't create swap space on the operating system disk.
-
- Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this is now handled by cloud-init. You **must not** use the Linux Agent to format the resource disk or create the swap file. Modify the following parameters in `/etc/waagent.conf` appropriately:
-
- ```bash
- sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
- sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
- ```
-
- If you want to mount and format the resource disk and create swap space, you can either:
- * Pass this in as a cloud-init config every time you create a VM
- * Use a cloud-init directive baked into the image that will do this every time the VM is created:
-
- ```bash
- sudo echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
- sudo cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
- #cloud-config
- # Generated by Azure cloud image build
- disk_setup:
- ephemeral0:
- table_type: mbr
- layout: [66, [33, 82]]
- overwrite: True
- fs_setup:
- - device: ephemeral0.1
- filesystem: ext4
- - device: ephemeral0.2
- filesystem: swap
- mounts:
- - ["ephemeral0.1", "/mnt"]
- - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
- EOF
- ```
-
-14. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
-
- > [!NOTE]
- > If you are migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step.
-
- ```bash
- sudo rm -f /var/log/waagent.log
- sudo cloud-init clean
- sudo waagent -force -deprovision+user
- sudo rm -f ~/.bash_history
- export HISTSIZE=0
- ```
-
-15. Click **Action -> Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be [uploaded to Azure](./upload-vhd.md#option-1-upload-a-vhd).
-
-## Next steps
-
-You're now ready to use your CentOS Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
+
+ Title: Create and upload a CentOS-based Linux VHD
+description: Learn to create and upload an Azure virtual hard disk (VHD) that contains a CentOS-based Linux operating system.
+++++ Last updated : 12/14/2022+++
+# Prepare a CentOS-based virtual machine for Azure
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+
+Learn to create and upload an Azure virtual hard disk (VHD) that contains a CentOS-based Linux operating system.
+
+* [Prepare a CentOS 6.x virtual machine for Azure](#centos-6x)
+* [Prepare a CentOS 7.0+ virtual machine for Azure](#centos-70)
++
+## Prerequisites
+
+This article assumes that you've already installed a CentOS (or similar derivative) Linux operating system to a virtual hard disk. Multiple tools exist to create .vhd files, for example a virtualization solution such as Hyper-V. For instructions, see [Install the Hyper-V Role and Configure a Virtual Machine](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh846766(v=ws.11)).
+
+**CentOS installation notes**
+
+* For more tips on preparing Linux for Azure, see [General Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes).
+* The VHDX format isn't supported in Azure, only **fixed VHD**. You can convert the disk to VHD format using Hyper-V Manager or the convert-vhd cmdlet. If you're using VirtualBox, this means selecting **Fixed size** as opposed to the default dynamically allocated when creating the disk.
+* The vfat kernel module must be enabled in the kernel
+* When installing the Linux system, we **recommend** that you use standard partitions rather than LVM (often the default for many installations). This avoids LVM name conflicts with cloned VMs, particularly if an OS disk ever needs to be attached to another identical VM for troubleshooting. [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) may be used on data disks.
+* **Kernel support for mounting UDF file systems is necessary.** At first boot on Azure the provisioning configuration is passed to the Linux VM by using UDF-formatted media that is attached to the guest. The Azure Linux agent or cloud-init must mount the UDF file system to read its configuration and provision the VM.
+* Linux kernel versions below 2.6.37 don't support NUMA on Hyper-V with larger VM sizes. This issue primarily impacts older distributions using the upstream Centos 2.6.32 kernel and was fixed in Centos 6.6 (kernel-2.6.32-504). Systems running custom kernels older than 2.6.37 or RHEL-based kernels older than 2.6.32-504 must set the boot parameter `numa=off` on the kernel command-line in grub.conf. For more information, see Red Hat [KB 436883](https://access.redhat.com/solutions/436883).
+* Don't configure a swap partition on the OS disk.
+* All VHDs on Azure must have a virtual size aligned to 1 MB. When converting from a raw disk to VHD, you must ensure that the raw disk size is a multiple of 1 MB before conversion. See [Linux Installation Notes](create-upload-generic.md#general-linux-installation-notes) for more information.
+
+> [!NOTE]
+> **_Cloud-init >= 21.2 removes the udf requirement_**. However, without the udf module enabled, the cdrom won't mount during provisioning, preventing custom data from being applied. A workaround for this is to apply custom data using user data. However, unlike custom data, user data isn't encrypted. https://cloudinit.readthedocs.io/en/latest/topics/format.html
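+
+As noted in the installation notes above, the VHD must be fixed-size and its virtual size must be aligned to 1 MB. A minimal sketch using `qemu-img`, assuming a raw source disk named *MyLinuxVM.raw* (Hyper-V Manager or the `Convert-VHD` cmdlet are alternatives):
+
+```bash
+rawdisk="MyLinuxVM.raw"
+vhddisk="MyLinuxVM.vhd"
+
+MB=$((1024*1024))
+size=$(qemu-img info -f raw --output json "$rawdisk" \
+    | python3 -c 'import json,sys; print(json.load(sys.stdin)["virtual-size"])')
+
+# Round the size up to the next 1 MB boundary, then convert to a fixed-size VHD.
+rounded_size=$(( (size + MB - 1) / MB * MB ))
+qemu-img resize -f raw "$rawdisk" "$rounded_size"
+qemu-img convert -f raw -o subformat=fixed,force_size -O vpc "$rawdisk" "$vhddisk"
+```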
++
+## CentOS 6.x
+
+> [!IMPORTANT]
+>Please note that CentOS 6 has reached its End Of Life (EOL) and is no longer supported by the CentOS community. This means that no further updates or security patches will be released for this version, leaving it vulnerable to potential security risks. We strongly recommend upgrading to a more recent version of CentOS to ensure the safety and stability of your system. Please consult with your IT department or system administrator for further assistance.
+
+1. In Hyper-V Manager, select the virtual machine.
+
+2. Click **Connect** to open a console window for the virtual machine.
+
+3. In CentOS 6, NetworkManager can interfere with the Azure Linux agent. Uninstall this package by running the following command:
+
+ ```bash
+ sudo rpm -e --nodeps NetworkManager
+ ```
+
+4. Create or edit the file `/etc/sysconfig/network` and add the following text:
+
+ ```config
+ NETWORKING=yes
+ HOSTNAME=localhost.localdomain
+ ```
+
+5. Create or edit the file `/etc/sysconfig/network-scripts/ifcfg-eth0` and add the following text:
+
+ ```config
+ DEVICE=eth0
+ ONBOOT=yes
+ BOOTPROTO=dhcp
+ TYPE=Ethernet
+ USERCTL=no
+ PEERDNS=yes
+ IPV6INIT=no
+ ```
+
+6. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
+
+ ```bash
+ sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
+ sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
+ ```
+
+7. Ensure the network service starts at boot time by running the following command:
+
+ ```bash
+ sudo chkconfig network on
+ ```
+
+8. If you would like to use the OpenLogic mirrors that are hosted within the Azure datacenters, then replace the `/etc/yum.repos.d/CentOS-Base.repo` file with the following repositories. This will also add the **[openlogic]** repository that includes extra packages such as the Azure Linux agent:
+
+ ```config
+ [openlogic]
+ name=CentOS-$releasever - openlogic packages for $basearch
+ baseurl=http://olcentgbl.trafficmanager.net/openlogic/$releasever/openlogic/$basearch/
+ enabled=1
+ gpgcheck=0
+
+ [base]
+ name=CentOS-$releasever - Base
+ #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
+ baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/os/$basearch/
+ gpgcheck=1
+ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
+
+ #released updates
+ [updates]
+ name=CentOS-$releasever - Updates
+ #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
+ baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/updates/$basearch/
+ gpgcheck=1
+ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
+
+ #additional packages that may be useful
+ [extras]
+ name=CentOS-$releasever - Extras
+ #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
+ baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/extras/$basearch/
+ gpgcheck=1
+ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
+
+ #additional packages that extend functionality of existing packages
+ [centosplus]
+ name=CentOS-$releasever - Plus
+ #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra
+ baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/centosplus/$basearch/
+ gpgcheck=1
+ enabled=0
+ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
+
+ #contrib - packages by Centos Users
+ [contrib]
+ name=CentOS-$releasever - Contrib
+ #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib&infra=$infra
+ baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/contrib/$basearch/
+ gpgcheck=1
+ enabled=0
+ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
+ ```
+
+ > [!Note]
+ > The rest of this guide will assume you're using at least the `[openlogic]` repo, which will be used to install the Azure Linux agent below.
+
+9. Add the following line to /etc/yum.conf:
+
+ ```config
+ http_caching=packages
+ ```
+
+10. Run the following command to clear the current yum metadata and update the system with the latest packages:
+
+ ```bash
+ sudo yum clean all
+ ```
+
+ Unless you're creating an image for an older version of CentOS, we recommend updating all the packages to the latest versions:
+
+ ```bash
+ sudo yum -y update
+ ```
+
+ A reboot may be required after running this command.
+
+11. (Optional) Install the drivers for the Linux Integration Services (LIS).
+
+ > [!IMPORTANT]
+ > The step is **required** for CentOS 6.3 and earlier, and optional for later releases.
+
+ ```bash
+ sudo rpm -e hypervkvpd ## (may return error if not installed, that's OK)
+ sudo yum install microsoft-hyper-v
+ ```
+
+ Alternatively, you can follow the manual installation instructions on the [LIS download page](https://www.microsoft.com/download/details.aspx?id=55106) to install the RPM onto your VM.
+
+12. Install the Azure Linux Agent and dependencies. Start and enable waagent service:
+
+ ```bash
+ sudo yum install python-pyasn1 WALinuxAgent
+ sudo service waagent start
+ sudo chkconfig waagent on
+ ```
++
+ The WALinuxAgent package removes the NetworkManager and NetworkManager-gnome packages if they were not already removed as described in step 3.
+
+13. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open `/boot/grub/menu.lst` in a text editor and ensure that the default kernel includes the following parameters:
+
+ ```config
+ console=ttyS0 earlyprintk=ttyS0 rootdelay=300
+ ```
+
+ This will also ensure all console messages are sent to the first serial port, which can assist Azure support with debugging issues.
+
+ In addition to the above, we recommend that you *remove* the following parameters:
+
+ ```config
+ rhgb quiet crashkernel=auto
+ ```
+
+ Graphical and `quiet boot` aren't useful in a cloud environment where we want all the logs to be sent to the serial port. The `crashkernel` option may be left configured if desired, but note that this parameter will reduce the amount of available memory in the VM by 128 MB or more, which may be problematic on the smaller VM sizes.
+
+ > [!Important]
+ > CentOS 6.5 and earlier must also set the kernel parameter `numa=off`. See Red Hat [KB 436883](https://access.redhat.com/solutions/436883).
+
+14. Ensure that the SSH server is installed and configured to start at boot time. This is usually the default.
+
+15. Don't create swap space on the OS disk.
+
+ The Azure Linux Agent can automatically configure swap space using the local resource disk that is attached to the VM after provisioning on Azure. The local resource disk is a *temporary* disk and might be emptied when the VM is deprovisioned. After installing the Azure Linux Agent (see previous step), modify the following parameters in `/etc/waagent.conf` appropriately:
+
+ ```config
+ ResourceDisk.Format=y
+ ResourceDisk.Filesystem=ext4
+ ResourceDisk.MountPoint=/mnt/resource
+ ResourceDisk.EnableSwap=y
+ ResourceDisk.SwapSizeMB=2048 ## NOTE: set this to whatever you need it to be.
+ ```
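+
+    After the VM is later provisioned in Azure, one way to confirm that the agent created the swap file on the resource disk is to list the active swap devices. This is only a post-deployment sanity check, not part of the image preparation itself:
+
+    ```bash
+    swapon -s
+    ```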
+
+16. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
+
+ ```bash
+ sudo waagent -force -deprovision+user
+ sudo export HISTSIZE=0
+ ```
+> [!NOTE]
+> If you are migrating a specific virtual machine and do not wish to create a generalized image, skip the deprovision step.
++
+17. Click **Action -> Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be [uploaded to Azure](./upload-vhd.md#option-1-upload-a-vhd).
++
+## CentOS 7.0+
+
+**Changes in CentOS 7 (and similar derivatives)**
+
+Preparing a CentOS 7 virtual machine for Azure is similar to preparing one for CentOS 6. However, there are several significant differences worth noting:
+
+* The NetworkManager package no longer conflicts with the Azure Linux agent. This package is installed by default, and we recommend that you don't remove it.
+* GRUB2 is now used as the default bootloader, so the procedure for editing kernel parameters has changed (see below).
+* XFS is now the default file system. The ext4 file system can still be used if desired.
+* Because CentOS 8 Stream and newer no longer include `network.service` by default, you need to install it manually:
+
+ ```bash
+ sudo yum install network-scripts
+ sudo systemctl enable network.service
+ ```
+
+**Configuration Steps**
+
+1. In Hyper-V Manager, select the virtual machine.
+
+2. Click **Connect** to open a console window for the virtual machine.
+
+3. Create or edit the file `/etc/sysconfig/network` and add the following text:
+
+ ```config
+ NETWORKING=yes
+ HOSTNAME=localhost.localdomain
+ ```
+
+4. Create or edit the file `/etc/sysconfig/network-scripts/ifcfg-eth0` and add the following text:
+
+ ```config
+ DEVICE=eth0
+ ONBOOT=yes
+ BOOTPROTO=dhcp
+ TYPE=Ethernet
+ USERCTL=no
+ PEERDNS=yes
+ IPV6INIT=no
+ NM_CONTROLLED=no
+ ```
+
+5. Modify udev rules to avoid generating static rules for the Ethernet interface(s). These rules can cause problems when cloning a virtual machine in Microsoft Azure or Hyper-V:
+
+ ```bash
+    sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
+ ```
+
+6. If you would like to use the OpenLogic mirrors that are hosted within the Azure datacenters, then replace the `/etc/yum.repos.d/CentOS-Base.repo` file with the following repositories. This will also add the **[openlogic]** repository that includes packages for the Azure Linux agent:
+
+    ```config
+ [openlogic]
+ name=CentOS-$releasever - openlogic packages for $basearch
+ baseurl=http://olcentgbl.trafficmanager.net/openlogic/$releasever/openlogic/$basearch/
+ enabled=1
+ gpgcheck=0
+
+ [base]
+ name=CentOS-$releasever - Base
+ #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
+ baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/os/$basearch/
+ gpgcheck=1
+ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
+
+ #released updates
+ [updates]
+ name=CentOS-$releasever - Updates
+ #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
+ baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/updates/$basearch/
+ gpgcheck=1
+ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
+
+ #additional packages that may be useful
+ [extras]
+ name=CentOS-$releasever - Extras
+ #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
+ baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/extras/$basearch/
+ gpgcheck=1
+ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
+
+ #additional packages that extend functionality of existing packages
+ [centosplus]
+ name=CentOS-$releasever - Plus
+ #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra
+ baseurl=http://olcentgbl.trafficmanager.net/centos/$releasever/centosplus/$basearch/
+ gpgcheck=1
+ enabled=0
+ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
+ ```
+
+ > [!Note]
+ > The rest of this guide will assume you're using at least the `[openlogic]` repo, which will be used to install the Azure Linux agent below.
+
+7. Run the following command to clear the current yum metadata and install any updates:
+
+ ```bash
+ sudo yum clean all
+ ```
+
+    Unless you're creating an image for an older version of CentOS, we recommend updating all packages to the latest versions:
++
+ ```bash
+ sudo yum -y update
+ ```
+
+ A reboot may be required after running this command.
+
+8. Modify the kernel boot line in your grub configuration to include additional kernel parameters for Azure. To do this, open `/etc/default/grub` in a text editor and edit the `GRUB_CMDLINE_LINUX` parameter, for example:
+
+ ```config
+ GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
+ ```
+
+    This ensures that all console messages are sent to the first serial port, which can assist Azure support with debugging issues, and it turns off the new CentOS 7 naming convention for NICs. In addition, we recommend *removing* the following parameters:
+
+ ```config
+ rhgb quiet crashkernel=auto
+ ```
+
+    Graphical and quiet boot aren't useful in a cloud environment, where all logs should be sent to the serial port. You can leave the `crashkernel` option configured if needed, but this parameter reduces the amount of available memory in the VM by 128 MB or more, which might be problematic on smaller VM sizes.
+
+9. Once you're done editing `/etc/default/grub` per above, run the following command to rebuild the grub configuration:
+
+ ```bash
+ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
+ ```
+
+> [!NOTE]
+> If you're uploading a UEFI-enabled VM, the command to update grub is `grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg`. The vfat kernel module must also be enabled in the kernel, otherwise provisioning will fail.
+>
+> Make sure the `udf` module is enabled. Removing or disabling it causes a provisioning or boot failure. (Cloud-init version 21.2 or later removes the UDF requirement; see the note at the beginning of this article for more detail.)
++
+10. If building the image from **VMware, VirtualBox or KVM:** Ensure the Hyper-V drivers are included in the initramfs:
+
+ Edit `/etc/dracut.conf`, add content:
+
+ ```config
+ add_drivers+=" hv_vmbus hv_netvsc hv_storvsc "
+ ```
+
+ Rebuild the initramfs:
+
+ ```bash
+ sudo dracut -f -v
+ ```
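+
+    To double-check that the Hyper-V modules made it into the rebuilt image, you can list the initramfs contents. This assumes the `lsinitrd` tool from the dracut package is available:
+
+    ```bash
+    lsinitrd | grep -E 'hv_vmbus|hv_netvsc|hv_storvsc'
+    ```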
+
+11. Install the Azure Linux Agent and dependencies for Azure VM Extensions:
+
+ ```bash
+ sudo yum install python-pyasn1 WALinuxAgent
+ sudo systemctl enable waagent
+ ```
+
+12. Install cloud-init to handle the provisioning:
+
+ ```bash
+ sudo yum install -y cloud-init cloud-utils-growpart gdisk hyperv-daemons
+ ```
+ 1. Configure waagent for cloud-init
+ ```bash
+    sudo sed -i 's/Provisioning.Agent=auto/Provisioning.Agent=cloud-init/g' /etc/waagent.conf
+ sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
+ sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
+ ```
+ ```bash
+ sudo echo "Adding mounts and disk_setup to init stage"
+ sudo sed -i '/ - mounts/d' /etc/cloud/cloud.cfg
+ sudo sed -i '/ - disk_setup/d' /etc/cloud/cloud.cfg
+ sudo sed -i '/cloud_init_modules/a\\ - mounts' /etc/cloud/cloud.cfg
+ sudo sed -i '/cloud_init_modules/a\\ - disk_setup' /etc/cloud/cloud.cfg
+ ```
+ ```bash
+ sudo echo "Allow only Azure datasource, disable fetching network setting via IMDS"
+ sudo cat > /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg <<EOF
+ datasource_list: [ Azure ]
+ datasource:
+ Azure:
+ apply_network_config: False
+ EOF
+
+ if [[ -f /mnt/swapfile ]]; then
+ echo Removing swapfile - RHEL uses a swapfile by default
+ swapoff /mnt/swapfile
+ rm /mnt/swapfile -f
+ fi
+
+ echo "Add console log file"
+ cat >> /etc/cloud/cloud.cfg.d/05_logging.cfg <<EOF
+
+ # This tells cloud-init to redirect its stdout and stderr to
+ # 'tee -a /var/log/cloud-init-output.log' so the user can see output
+ # there without needing to look on the console.
+ output: {all: '| tee -a /var/log/cloud-init-output.log'}
+ EOF
+ ```
++
+13. Swap configuration
+
+ Don't create swap space on the operating system disk.
+
+    Previously, the Azure Linux Agent was used to automatically configure swap space by using the local resource disk that is attached to the virtual machine after the virtual machine is provisioned on Azure. However, this is now handled by cloud-init, so you **must not** use the Linux Agent to format the resource disk or create the swap file. Modify the following parameters in `/etc/waagent.conf` appropriately:
+
+ ```bash
+ sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
+ sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
+ ```
+
+    If you want cloud-init to mount and format the resource disk and create swap space, you can either:
+ * Pass this in as a cloud-init config every time you create a VM
+ * Use a cloud-init directive baked into the image that will do this every time the VM is created:
+
+ ```bash
+ sudo echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
+ sudo cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
+ #cloud-config
+ # Generated by Azure cloud image build
+ disk_setup:
+ ephemeral0:
+ table_type: mbr
+ layout: [66, [33, 82]]
+ overwrite: True
+ fs_setup:
+ - device: ephemeral0.1
+ filesystem: ext4
+ - device: ephemeral0.2
+ filesystem: swap
+ mounts:
+ - ["ephemeral0.1", "/mnt"]
+ - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
+ EOF
+ ```
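+
+    If you bake the directive into the image, you can optionally sanity-check the cloud-config syntax before generalizing. This assumes a cloud-init version that provides the `schema` subcommand (older releases expose it as `cloud-init devel schema`):
+
+    ```bash
+    sudo cloud-init schema --config-file /etc/cloud/cloud.cfg.d/00-azure-swap.cfg
+    ```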
+
+14. Run the following commands to deprovision the virtual machine and prepare it for provisioning on Azure:
+
+ > [!NOTE]
+ > If you are migrating a specific virtual machine and don't wish to create a generalized image, skip the deprovision step.
+
+ ```bash
+ sudo rm -f /var/log/waagent.log
+ sudo cloud-init clean
+ sudo waagent -force -deprovision+user
+ sudo rm -f ~/.bash_history
+ sudo export HISTSIZE=0
+ ```
+
+15. Click **Action -> Shut Down** in Hyper-V Manager. Your Linux VHD is now ready to be [uploaded to Azure](./upload-vhd.md#option-1-upload-a-vhd).
+
+## Next steps
+
+You're now ready to use your CentOS Linux virtual hard disk to create new virtual machines in Azure. If this is the first time that you're uploading the .vhd file to Azure, see [Create a Linux VM from a custom disk](upload-vhd.md#option-1-upload-a-vhd).
virtual-machines Create Upload Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-upload-generic.md
- Title: Prepare Linux for imaging
-description: Learn how to prepare a Linux system to be used for an image in Azure.
- Previously updated : 12/14/2022
-# Prepare Linux for imaging in Azure
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-The Azure platform service-level agreement (SLA) applies to virtual machines (VMs) running the Linux operating system only when you're using one of the endorsed distributions. For endorsed distributions, Azure Marketplace provides preconfigured Linux images. For more information, see:
-
-* [Endorsed Linux distributions on Azure](endorsed-distros.md)
-* [Support for Linux and open-source technology in Azure](https://support.microsoft.com/kb/2941892)
-
-All other distributions running on Azure, including community-supported and non-endorsed distributions, have some prerequisites.
-
-This article focuses on general guidance for running your Linux distribution on Azure. This article can't be comprehensive, because every distribution is different. Even if you meet all the criteria that this article describes, you might need to significantly tweak your Linux system for it to run properly.
-
-## General Linux installation notes
-
-* Azure doesn't support the Hyper-V virtual hard disk (VHDX) format. Azure supports only *fixed VHD*. You can convert the disk to VHD format by using Hyper-V Manager or the [Convert-VHD](/powershell/module/hyper-v/convert-vhd) cmdlet. If you're using VirtualBox, select **Fixed size** rather than the default (**Dynamically allocated**) when you're creating the disk.
-
-* Azure supports Gen1 (BIOS boot) and Gen2 (UEFI boot) virtual machines.
-
-* The virtual file allocation table (VFAT) kernel module must be enabled in the kernel.
-
-* The maximum size allowed for the VHD is 1,023 GB.
-
-* When you're installing the Linux system, we recommend that you use standard partitions rather than Logical Volume Manager (LVM). LVM is the default for many installations.
-
- Using standard partitions will avoid LVM name conflicts with cloned VMs, particularly if an OS disk is ever attached to another identical VM for troubleshooting. You can use [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) on data disks.
-
-* Kernel support for mounting user-defined function (UDF) file systems is necessary. At first boot on Azure, the provisioning configuration is passed to the Linux VM via UDF-formatted media that are attached to the guest. The Azure Linux agent must mount the UDF file system to read its configuration and provision the VM.
-
-* Linux kernel versions earlier than 2.6.37 don't support Non-Uniform Memory Access (NUMA) on Hyper-V with larger VM sizes. This issue primarily affects older distributions that use the upstream Red Hat 2.6.32 kernel. It was fixed in Red Hat Enterprise Linux (RHEL) 6.6 (kernel-2.6.32-504).
-
- Systems running custom kernels older than 2.6.37, or RHEL-based kernels older than 2.6.32-504, must set the boot parameter `numa=off` on the kernel command line in *grub.conf*. For more information, see [Red Hat KB 436883](https://access.redhat.com/solutions/436883).
-
-* Don't configure a swap partition on the OS disk. You can configure the Linux agent to create a swap file on the temporary resource disk, as described later in this article.
-
-* All VHDs on Azure must have a virtual size aligned to 1 MB (1024 x 1024 bytes). When you're converting from a raw disk to VHD, ensure that the raw disk size is a multiple of 1 MB before conversion, as described later in this article.
-
-* Use the most up-to-date distribution version, packages, and software.
-
-* Remove users and system accounts, public keys, sensitive data, unnecessary software, and applications.
-
-> [!NOTE]
-> Cloud-init version 21.2 or later removes the UDF requirement. But without the `udf` module enabled, the CD-ROM won't mount during provisioning, which prevents the custom data from being applied. A workaround is to apply user data. However, unlike custom data, user data isn't encrypted. For more information, see [User data formats](https://cloudinit.readthedocs.io/en/latest/topics/format.html) in the cloud-init documentation.
-
-### Install kernel modules without Hyper-V
-
-Azure runs on the Hyper-V hypervisor, so Linux requires certain kernel modules to run in Azure. If you have a VM that was created outside Hyper-V, the Linux installers might not include the drivers for Hyper-V in the initial RAM disk (initrd or initramfs), unless the VM detects that it's running in a Hyper-V environment.
-
-When you're using a different virtualization system (such as VirtualBox or KVM) to prepare your Linux image, you might need to rebuild initrd so that at least the `hv_vmbus` and `hv_storvsc` kernel modules are available on the initial RAM disk. This known issue is for systems based on the upstream Red Hat distribution, and possibly others.
-
-The mechanism for rebuilding the initrd or initramfs image can vary, depending on the distribution. Consult your distribution's documentation or support for the proper procedure. Here's one example for rebuilding initrd by using the `mkinitrd` utility:
-
-1. Back up the existing initrd image:
-
- ```bash
- cd /boot
- sudo cp initrd-`uname -r`.img initrd-`uname -r`.img.bak
- ```
-
-2. Rebuild initrd by using the `hv_vmbus` and `hv_storvsc` kernel modules:
-
- ```bash
- sudo mkinitrd --preload=hv_storvsc --preload=hv_vmbus -v -f initrd-`uname -r`.img `uname -r`
- ```
-
-### Resize VHDs
-
-VHD images on Azure must have a virtual size aligned to 1 MB. Typically, VHDs created through Hyper-V are aligned correctly. If the VHD isn't aligned correctly, you might get an error message similar to the following example when you try to create an image from your VHD:
-
-```config
-The VHD http://<mystorageaccount>.blob.core.windows.net/vhds/MyLinuxVM.vhd has an unsupported virtual size of 21475270656 bytes. The size must be a whole number (in MBs).
-```
-
-In this case, resize the VM by using either the Hyper-V Manager console or the [Resize-VHD](/powershell/module/hyper-v/resize-vhd) PowerShell cmdlet. If you aren't running in a Windows environment, we recommend using `qemu-img` to convert (if needed) and resize the VHD.
-
-> [!NOTE]
-> There's a [known bug in qemu-img](https://bugs.launchpad.net/qemu/+bug/1490611) for QEMU version 2.2.1 and some later versions that results in an improperly formatted VHD. The issue was fixed in QEMU 2.6. We recommend using version 2.2.0 or earlier, or using version 2.6 or later.
-
-1. Resizing the VHD directly by using tools such as `qemu-img` or `vbox-manage` might result in an unbootable VHD. We recommend first converting the VHD to a raw disk image by using the following code.
-
- If the VM image was created as a raw disk image, you can skip this step. Creating the VM image as a raw disk image is the default in some hypervisors, such as KVM.
-
- ```bash
- sudo qemu-img convert -f vpc -O raw MyLinuxVM.vhd MyLinuxVM.raw
- ```
-
-2. Calculate the required size of the disk image so that the virtual size is aligned to 1 MB. The following Bash shell script uses `qemu-img info` to determine the virtual size of the disk image, and then calculates the size to the next 1 MB:
-
- ```bash
- rawdisk="MyLinuxVM.raw"
- vhddisk="MyLinuxVM.vhd"
-
- MB=$((1024*1024))
- size=$(qemu-img info -f raw --output json "$rawdisk" | \
- gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
-
- rounded_size=$(((($size+$MB-1)/$MB)*$MB))
-
- echo "Rounded Size = $rounded_size"
- ```
-
-3. Resize the raw disk by using `$rounded_size`:
-
- ```bash
- sudo qemu-img resize MyLinuxVM.raw $rounded_size
- ```
-
-4. Convert the raw disk back to a fixed-size VHD:
-
- ```bash
- sudo qemu-img convert -f raw -o subformat=fixed,force_size -O vpc MyLinuxVM.raw MyLinuxVM.vhd
- ```
-
- Or, with QEMU versions before 2.6, remove the `force_size` option:
-
- ```bash
- sudo qemu-img convert -f raw -o subformat=fixed -O vpc MyLinuxVM.raw MyLinuxVM.vhd
- ```
-
-## Linux kernel requirements
-
-The Linux Integration Services (LIS) drivers for Hyper-V and Azure are contributed directly to the upstream Linux kernel. Many distributions that include a recent Linux kernel version (such as 3.x) have these drivers available already, or otherwise provide backported versions of these drivers with their kernels.
-
-LIS drivers are constantly being updated in the upstream kernel with new fixes and features. When possible, we recommend running an [endorsed distribution](endorsed-distros.md) that includes these fixes and updates.
-
-If you're running a variant of RHEL versions 6.0 to 6.3, you need to install the [latest LIS drivers for Hyper-V](https://go.microsoft.com/fwlink/p/?LinkID=254263&clcid=0x409). Beginning with RHEL 6.4+ (and derivatives), the LIS drivers are already included with the kernel, so you don't need additional installation packages.
-
-If a custom kernel is required, we recommend a recent kernel version (such as 3.8+). For distributions or vendors that maintain their own kernel, you need to regularly backport the LIS drivers from the upstream kernel to your custom kernel.
-
-Even if you're already running a relatively recent kernel version, we highly recommend keeping track of any upstream fixes in the LIS drivers and backporting them as needed. The locations of the LIS driver source files are specified in the [MAINTAINERS](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/MAINTAINERS) file in the Linux kernel source tree:
-
-```config
- F: arch/x86/include/asm/mshyperv.h
- F: arch/x86/include/uapi/asm/hyperv.h
- F: arch/x86/kernel/cpu/mshyperv.c
- F: drivers/hid/hid-hyperv.c
- F: drivers/hv/
- F: drivers/input/serio/hyperv-keyboard.c
- F: drivers/net/hyperv/
- F: drivers/scsi/storvsc_drv.c
- F: drivers/video/fbdev/hyperv_fb.c
- F: include/linux/hyperv.h
- F: tools/hv/
-```
-
-The VM's active kernel must include the following patches. This list can't be complete for all distributions.
-
-* [ata_piix: defer disks to the Hyper-V drivers by default](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/ata/ata_piix.c?id=cd006086fa5d91414d8ff9ff2b78fbb593878e3c)
-* [storvsc: Account for in-transit packets in the RESET path](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/scsi/storvsc_drv.c?id=5c1b10ab7f93d24f29b5630286e323d1c5802d5c)
-* [storvsc: avoid usage of WRITE_SAME](https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/drivers/scsi/storvsc_drv.c?id=3e8f4f4065901c8dfc51407e1984495e1748c090)
-* [storvsc: Disable WRITE SAME for RAID and virtual host adapter drivers](https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/drivers/scsi/storvsc_drv.c?id=54b2b50c20a61b51199bedb6e5d2f8ec2568fb43)
-* [storvsc: NULL pointer dereference fix](https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/drivers/scsi/storvsc_drv.c?id=b12bb60d6c350b348a4e1460cd68f97ccae9822e)
-* [storvsc: ring buffer failures may result in I/O freeze](https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/drivers/scsi/storvsc_drv.c?id=e86fb5e8ab95f10ec5f2e9430119d5d35020c951)
-* [scsi_sysfs: protect against double execution of __scsi_remove_device](https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/drivers/scsi/scsi_sysfs.c?id=be821fd8e62765de43cc4f0e2db363d0e30a7e9b)
-
-## Azure Linux Agent
-
-The [Azure Linux Agent](../extensions/agent-linux.md) (`waagent`) provisions a Linux virtual machine in Azure. You can get the latest version, report problems, or submit pull requests at the [Linux Agent GitHub repo](https://github.com/Azure/WALinuxAgent).
-
-Here are some considerations for using the Azure Linux Agent:
-
-* The Linux agent is released under the Apache 2.0 license. Many distributions already provide .rpm or .deb packages for the agent. You can easily install and update these packages.
-* The Azure Linux Agent requires Python v2.6+.
-* The agent also requires the `python-pyasn1` module. Most distributions provide this module as a separate package to be installed.
-* In some cases, the Azure Linux Agent might not be compatible with NetworkManager. Many of the packages (.rpm or .deb) provided by distributions configure NetworkManager as a conflict to the `waagent` package. In these cases, the agent will uninstall NetworkManager when you install the Linux agent package.
-* The Azure Linux Agent must be at or above the [minimum supported version](https://support.microsoft.com/en-us/help/4049215/extensions-and-virtual-machine-agent-minimum-version-support).
-
-> [!NOTE]
-> Make sure the `udf` and `vfat` modules are enabled. Disabling the `udf` module will cause a provisioning failure. Disabling the `vfat` module will cause both provisioning and boot failures. Cloud-init version 21.2 or later can provision VMs without requiring UDF if both of these conditions exist:
->
-> * You created the VM by using SSH public keys and not passwords.
-> * You didn't provide any custom data.
-
-## General Linux system requirements
-
-1. Modify the kernel boot line in GRUB or GRUB2 to include the following parameters, so that all console messages are sent to the first serial port. These messages can assist Azure support with debugging any issues.
-
- ```config
- GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
- ```
-
- We also recommend *removing* the following parameters if they exist:
-
- ```config
- rhgb quiet crashkernel=auto
- ```
-
- Graphical and quiet boot aren't useful in a cloud environment, where you want all logs sent to the serial port. You can leave the `crashkernel` option configured if needed, but this parameter reduces the amount of available memory in the VM by at least 128 MB. Reducing available memory might be problematic for smaller VM sizes.
-
-2. After you finish editing */etc/default/grub*, run the following command to rebuild the GRUB configuration:
-
- ```bash
- sudo grub2-mkconfig -o /boot/grub2/grub.cfg
- ```
-
-3. Add the Hyper-V module for initramfs by using `dracut`:
-
- ```bash
- cd /boot
- sudo cp initramfs-<kernel-version>.img <kernel-version>.img.bak
- sudo dracut -f -v initramfs-<kernel-version>.img <kernel-version> --add-drivers "hv_vmbus hv_netvsc hv_storvsc"
- sudo grub-mkconfig -o /boot/grub/grub.cfg
- sudo grub2-mkconfig -o /boot/grub2/grub.cfg
- ```
-
- Add the Hyper-V module for initrd by using `mkinitramfs`:
-
- ```bash
- cd /boot
- sudo cp initrd.img-<kernel-version> initrd.img-<kernel-version>.bak
- sudo mkinitramfs -o initrd.img-<kernel-version> <kernel-version> --with=hv_vmbus,hv_netvsc,hv_storvsc
- sudo update-grub
- ```
-
-4. Ensure that the SSH server is installed and configured to start at boot time. This configuration is usually the default.
-
-5. Install the Azure Linux Agent.
-
- The Azure Linux Agent is required for provisioning a Linux image on Azure. Many distributions provide the agent as an .rpm or .deb package. The package is typically called `WALinuxAgent` or `walinuxagent`. You can also install the agent manually by following the steps in the [Azure Linux Agent guide](../extensions/agent-linux.md).
-
- > [!NOTE]
- > Make sure the `udf` and `vfat` modules are enabled. Removing or disabling them will cause a provisioning or boot failure. Cloud-init version 21.2 or later removes the UDF requirement.
-
- Install the Azure Linux Agent, cloud-init, and other necessary utilities by running one of the following commands.
-
- Use this command for Red Hat or CentOS:
-
- ```bash
- sudo yum install -y WALinuxAgent cloud-init cloud-utils-growpart gdisk hyperv-daemons
- ```
-
- Use this command for Ubuntu/Debian:
-
- ```bash
- sudo apt install walinuxagent cloud-init cloud-utils-growpart gdisk hyperv-daemons
- ```
-
- Use this command for SUSE:
-
- ```bash
- sudo zypper install python-azure-agent cloud-init cloud-utils-growpart gdisk hyperv-daemons
- ```
-
- Then enable the agent and cloud-init on all distributions:
-
- ```bash
- sudo systemctl enable waagent.service
- sudo systemctl enable cloud-init.service
- ```
-
-6. Don't create swap space on the OS disk.
-
- You can use the Azure Linux Agent or cloud-init to configure swap space via the local resource disk. This resource disk is attached to the VM after provisioning on Azure. The local resource disk is a temporary disk and might be emptied when the VM is deprovisioned. The following blocks show how to configure this swap.
-
- If you choose Azure Linux Agent, modify the following parameters in */etc/waagent.conf*:
-
- ```config
- ResourceDisk.Format=y
- ResourceDisk.Filesystem=ext4
- ResourceDisk.MountPoint=/mnt/resource
- ResourceDisk.EnableSwap=y
- ResourceDisk.SwapSizeMB=2048 ## NOTE: Set this to your desired size.
- ```
-
- If you choose cloud-init, configure cloud-init to handle the provisioning:
-
- ```bash
- sudo sed -i 's/Provisioning.Agent=auto/Provisioning.Agent=cloud-auto/g' /etc/waagent.conf
- sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
- sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
- ```
-
- To configure cloud-init to format and create swap space, you have two options:
-
- * Pass in a cloud-init configuration every time you create a VM through `customdata`. We recommend this method.
- * Use a cloud-init directive in the image to configure swap space every time the VM is created.
-
- Create a .cfg file to configure swap space by using cloud-init:
-
- ```bash
- sudo echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
- sudo cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
- #cloud-config
- # Generated by Azure cloud image build
- disk_setup:
- ephemeral0:
- table_type: mbr
- layout: [66, [33, 82]]
- overwrite: True
- fs_setup:
- - device: ephemeral0.1
- filesystem: ext4
- - device: ephemeral0.2
- filesystem: swap
- mounts:
- - ["ephemeral0.1", "/mnt/resource"]
- - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
- EOF
- ```
-
-7. Configure cloud-init to handle the provisioning:
- 1. Configure `waagent` for cloud-init:
-
- ```bash
- sudo sed -i 's/Provisioning.Agent=auto/Provisioning.Agent=cloud-init/g' /etc/waagent.conf
- sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
- sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
- ```
-
- If you're migrating a specific virtual machine and don't want to create a generalized image, set `Provisioning.Agent=disabled` in the */etc/waagent.conf* configuration.
-
- 1. Configure mounts:
-
- ```bash
- sudo echo "Adding mounts and disk_setup to init stage"
- sudo sed -i '/ - mounts/d' /etc/cloud/cloud.cfg
- sudo sed -i '/ - disk_setup/d' /etc/cloud/cloud.cfg
- sudo sed -i '/cloud_init_modules/a\\ - mounts' /etc/cloud/cloud.cfg
- sudo sed -i '/cloud_init_modules/a\\ - disk_setup' /etc/cloud/cloud.cfg
-    ```
-
- 1. Configure the Azure data source:
-
- ```bash
- sudo echo "Allow only Azure datasource, disable fetching network setting via IMDS"
- sudo cat > /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg <<EOF
- datasource_list: [ Azure ]
- datasource:
- Azure:
- apply_network_config: False
- EOF
- ```
-
- 1. Remove the existing swap file if you configured one:
-
- ```bash
- if [[ -f /mnt/resource/swapfile ]]; then
- echo "Removing swapfile" #RHEL uses a swap file by default
- swapoff /mnt/resource/swapfile
- rm /mnt/resource/swapfile -f
- fi
- ```
-
- 1. Configure cloud-init logging:
-
- ```bash
- sudo echo "Add console log file"
- sudo cat >> /etc/cloud/cloud.cfg.d/05_logging.cfg <<EOF
-
- # This tells cloud-init to redirect its stdout and stderr to
- # 'tee -a /var/log/cloud-init-output.log' so the user can see output
- # there without needing to look on the console.
- output: {all: '| tee -a /var/log/cloud-init-output.log'}
- EOF
- ```
-
-8. Run the following commands to deprovision the virtual machine.
-
- > [!CAUTION]
- > If you're migrating a specific virtual machine and don't want to create a generalized image, skip the deprovisioning step. Running the command `waagent -force -deprovision+user` will render the source machine unusable. This step is intended only to create a generalized image.
-
- ```bash
- sudo rm -f /var/log/waagent.log
- sudo cloud-init clean
- sudo waagent -force -deprovision+user
- sudo rm -f ~/.bash_history
- sudo export HISTSIZE=0
- ```
-
- On VirtualBox, you might see an error message after you run `waagent -force -deprovision` that says `[Errno 5] Input/output error`. This error message is not critical, and you can ignore it.
-
-9. Shut down the virtual machine and upload the VHD to Azure.
-
-## Next steps
-
-[Create a Linux VM from a custom disk by using the Azure CLI](upload-vhd.md)
+
+ Title: Prepare Linux for imaging
+description: Learn how to prepare a Linux system to be used for an image in Azure.
+ Last updated : 12/14/2022
+# Prepare Linux for imaging in Azure
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+
+The Azure platform service-level agreement (SLA) applies to virtual machines (VMs) running the Linux operating system only when you're using one of the endorsed distributions. For endorsed distributions, Azure Marketplace provides preconfigured Linux images. For more information, see:
+
+* [Endorsed Linux distributions on Azure](endorsed-distros.md)
+* [Support for Linux and open-source technology in Azure](https://support.microsoft.com/kb/2941892)
+
+All other distributions running on Azure, including community-supported and non-endorsed distributions, have some prerequisites.
+
+This article focuses on general guidance for running your Linux distribution on Azure. This article can't be comprehensive, because every distribution is different. Even if you meet all the criteria that this article describes, you might need to significantly tweak your Linux system for it to run properly.
+
+## General Linux installation notes
+
+* Azure doesn't support the Hyper-V virtual hard disk (VHDX) format. Azure supports only *fixed VHD*. You can convert the disk to VHD format by using Hyper-V Manager or the [Convert-VHD](/powershell/module/hyper-v/convert-vhd) cmdlet. If you're using VirtualBox, select **Fixed size** rather than the default (**Dynamically allocated**) when you're creating the disk.
+
+* Azure supports Gen1 (BIOS boot) and Gen2 (UEFI boot) virtual machines.
+
+* The virtual file allocation table (VFAT) kernel module must be enabled in the kernel.
+
+* The maximum size allowed for the VHD is 1,023 GB.
+
+* When you're installing the Linux system, we recommend that you use standard partitions rather than Logical Volume Manager (LVM). LVM is the default for many installations.
+
+ Using standard partitions will avoid LVM name conflicts with cloned VMs, particularly if an OS disk is ever attached to another identical VM for troubleshooting. You can use [LVM](/previous-versions/azure/virtual-machines/linux/configure-lvm) or [RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) on data disks.
+
+* Kernel support for mounting user-defined function (UDF) file systems is necessary. At first boot on Azure, the provisioning configuration is passed to the Linux VM via UDF-formatted media that are attached to the guest. The Azure Linux agent must mount the UDF file system to read its configuration and provision the VM.
+
+* Linux kernel versions earlier than 2.6.37 don't support Non-Uniform Memory Access (NUMA) on Hyper-V with larger VM sizes. This issue primarily affects older distributions that use the upstream Red Hat 2.6.32 kernel. It was fixed in Red Hat Enterprise Linux (RHEL) 6.6 (kernel-2.6.32-504).
+
+ Systems running custom kernels older than 2.6.37, or RHEL-based kernels older than 2.6.32-504, must set the boot parameter `numa=off` on the kernel command line in *grub.conf*. For more information, see [Red Hat KB 436883](https://access.redhat.com/solutions/436883).
+
+* Don't configure a swap partition on the OS disk. You can configure the Linux agent to create a swap file on the temporary resource disk, as described later in this article.
+
+* All VHDs on Azure must have a virtual size aligned to 1 MB (1024 x 1024 bytes). When you're converting from a raw disk to VHD, ensure that the raw disk size is a multiple of 1 MB before conversion, as described later in this article.
+
+* Use the most up-to-date distribution version, packages, and software.
+
+* Remove users and system accounts, public keys, sensitive data, unnecessary software, and applications.
+
+> [!NOTE]
+> Cloud-init version 21.2 or later removes the UDF requirement. But without the `udf` module enabled, the CD-ROM won't mount during provisioning, which prevents the custom data from being applied. A workaround is to apply user data. However, unlike custom data, user data isn't encrypted. For more information, see [User data formats](https://cloudinit.readthedocs.io/en/latest/topics/format.html) in the cloud-init documentation.
+
+### Install kernel modules without Hyper-V
+
+Azure runs on the Hyper-V hypervisor, so Linux requires certain kernel modules to run in Azure. If you have a VM that was created outside Hyper-V, the Linux installers might not include the drivers for Hyper-V in the initial RAM disk (initrd or initramfs), unless the VM detects that it's running in a Hyper-V environment.
+
+When you're using a different virtualization system (such as VirtualBox or KVM) to prepare your Linux image, you might need to rebuild initrd so that at least the `hv_vmbus` and `hv_storvsc` kernel modules are available on the initial RAM disk. This known issue is for systems based on the upstream Red Hat distribution, and possibly others.
+
+The mechanism for rebuilding the initrd or initramfs image can vary, depending on the distribution. Consult your distribution's documentation or support for the proper procedure. Here's one example for rebuilding initrd by using the `mkinitrd` utility:
+
+1. Back up the existing initrd image:
+
+ ```bash
+ cd /boot
+ sudo cp initrd-`uname -r`.img initrd-`uname -r`.img.bak
+ ```
+
+2. Rebuild initrd by using the `hv_vmbus` and `hv_storvsc` kernel modules:
+
+ ```bash
+ sudo mkinitrd --preload=hv_storvsc --preload=hv_vmbus -v -f initrd-`uname -r`.img `uname -r`
+ ```
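+
+   To verify that the rebuilt image contains the Hyper-V modules, you can list its contents. The `lsinitrd` tool ships with dracut; on systems without it, you can inspect the image with `zcat` and `cpio` instead:
+
+   ```bash
+   lsinitrd /boot/initrd-`uname -r`.img | grep -E 'hv_vmbus|hv_storvsc'
+   ```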
+
+### Resize VHDs
+
+VHD images on Azure must have a virtual size aligned to 1 MB. Typically, VHDs created through Hyper-V are aligned correctly. If the VHD isn't aligned correctly, you might get an error message similar to the following example when you try to create an image from your VHD:
+
+```config
+The VHD http://<mystorageaccount>.blob.core.windows.net/vhds/MyLinuxVM.vhd has an unsupported virtual size of 21475270656 bytes. The size must be a whole number (in MBs).
+```
+
+In this case, resize the VM by using either the Hyper-V Manager console or the [Resize-VHD](/powershell/module/hyper-v/resize-vhd) PowerShell cmdlet. If you aren't running in a Windows environment, we recommend using `qemu-img` to convert (if needed) and resize the VHD.
+
+> [!NOTE]
+> There's a [known bug in qemu-img](https://bugs.launchpad.net/qemu/+bug/1490611) for QEMU version 2.2.1 and some later versions that results in an improperly formatted VHD. The issue was fixed in QEMU 2.6. We recommend using version 2.2.0 or earlier, or using version 2.6 or later.
+
+1. Resizing the VHD directly by using tools such as `qemu-img` or `vbox-manage` might result in an unbootable VHD. We recommend first converting the VHD to a raw disk image by using the following code.
+
+ If the VM image was created as a raw disk image, you can skip this step. Creating the VM image as a raw disk image is the default in some hypervisors, such as KVM.
+
+ ```bash
+ sudo qemu-img convert -f vpc -O raw MyLinuxVM.vhd MyLinuxVM.raw
+ ```
+
+2. Calculate the required size of the disk image so that the virtual size is aligned to 1 MB. The following Bash shell script uses `qemu-img info` to determine the virtual size of the disk image, and then calculates the size to the next 1 MB:
+
+ ```bash
+ rawdisk="MyLinuxVM.raw"
+ vhddisk="MyLinuxVM.vhd"
+
+ MB=$((1024*1024))
+ size=$(qemu-img info -f raw --output json "$rawdisk" | \
+ gawk 'match($0, /"virtual-size": ([0-9]+),/, val) {print val[1]}')
+
+ rounded_size=$(((($size+$MB-1)/$MB)*$MB))
+
+ echo "Rounded Size = $rounded_size"
+ ```
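+
+   As a worked example using the error message above: a virtual size of 21475270656 bytes is about 20,480.41 MB (21475270656 / 1048576), so the script rounds up to 20,481 MB and prints a `rounded_size` of 21475885056 bytes.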
+
+3. Resize the raw disk by using `$rounded_size`:
+
+ ```bash
+ sudo qemu-img resize MyLinuxVM.raw $rounded_size
+ ```
+
+4. Convert the raw disk back to a fixed-size VHD:
+
+ ```bash
+ sudo qemu-img convert -f raw -o subformat=fixed,force_size -O vpc MyLinuxVM.raw MyLinuxVM.vhd
+ ```
+
+ Or, with QEMU versions before 2.6, remove the `force_size` option:
+
+ ```bash
+ sudo qemu-img convert -f raw -o subformat=fixed -O vpc MyLinuxVM.raw MyLinuxVM.vhd
+ ```
+
+## Linux kernel requirements
+
+The Linux Integration Services (LIS) drivers for Hyper-V and Azure are contributed directly to the upstream Linux kernel. Many distributions that include a recent Linux kernel version (such as 3.x) have these drivers available already, or otherwise provide backported versions of these drivers with their kernels.
+
+LIS drivers are constantly being updated in the upstream kernel with new fixes and features. When possible, we recommend running an [endorsed distribution](endorsed-distros.md) that includes these fixes and updates.
+
+If you're running a variant of RHEL versions 6.0 to 6.3, you need to install the [latest LIS drivers for Hyper-V](https://go.microsoft.com/fwlink/p/?LinkID=254263&clcid=0x409). Beginning with RHEL 6.4+ (and derivatives), the LIS drivers are already included with the kernel, so you don't need additional installation packages.
+
+If a custom kernel is required, we recommend a recent kernel version (such as 3.8+). For distributions or vendors that maintain their own kernel, you need to regularly backport the LIS drivers from the upstream kernel to your custom kernel.
+
+Even if you're already running a relatively recent kernel version, we highly recommend keeping track of any upstream fixes in the LIS drivers and backporting them as needed. The locations of the LIS driver source files are specified in the [MAINTAINERS](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/MAINTAINERS) file in the Linux kernel source tree:
+
+```config
+ F: arch/x86/include/asm/mshyperv.h
+ F: arch/x86/include/uapi/asm/hyperv.h
+ F: arch/x86/kernel/cpu/mshyperv.c
+ F: drivers/hid/hid-hyperv.c
+ F: drivers/hv/
+ F: drivers/input/serio/hyperv-keyboard.c
+ F: drivers/net/hyperv/
+ F: drivers/scsi/storvsc_drv.c
+ F: drivers/video/fbdev/hyperv_fb.c
+ F: include/linux/hyperv.h
+ F: tools/hv/
+```
+
+The VM's active kernel must include the following patches. This list can't be complete for all distributions.
+
+* [ata_piix: defer disks to the Hyper-V drivers by default](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/ata/ata_piix.c?id=cd006086fa5d91414d8ff9ff2b78fbb593878e3c)
+* [storvsc: Account for in-transit packets in the RESET path](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/scsi/storvsc_drv.c?id=5c1b10ab7f93d24f29b5630286e323d1c5802d5c)
+* [storvsc: avoid usage of WRITE_SAME](https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/drivers/scsi/storvsc_drv.c?id=3e8f4f4065901c8dfc51407e1984495e1748c090)
+* [storvsc: Disable WRITE SAME for RAID and virtual host adapter drivers](https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/drivers/scsi/storvsc_drv.c?id=54b2b50c20a61b51199bedb6e5d2f8ec2568fb43)
+* [storvsc: NULL pointer dereference fix](https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/drivers/scsi/storvsc_drv.c?id=b12bb60d6c350b348a4e1460cd68f97ccae9822e)
+* [storvsc: ring buffer failures may result in I/O freeze](https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/drivers/scsi/storvsc_drv.c?id=e86fb5e8ab95f10ec5f2e9430119d5d35020c951)
+* [scsi_sysfs: protect against double execution of __scsi_remove_device](https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/drivers/scsi/scsi_sysfs.c?id=be821fd8e62765de43cc4f0e2db363d0e30a7e9b)
+
+## Azure Linux Agent
+
+The [Azure Linux Agent](../extensions/agent-linux.md) (`waagent`) provisions a Linux virtual machine in Azure. You can get the latest version, report problems, or submit pull requests at the [Linux Agent GitHub repo](https://github.com/Azure/WALinuxAgent).
+
+Here are some considerations for using the Azure Linux Agent:
+
+* The Linux agent is released under the Apache 2.0 license. Many distributions already provide .rpm or .deb packages for the agent. You can easily install and update these packages.
+* The Azure Linux Agent requires Python v2.6+.
+* The agent also requires the `python-pyasn1` module. Most distributions provide this module as a separate package to be installed.
+* In some cases, the Azure Linux Agent might not be compatible with NetworkManager. Many of the packages (.rpm or .deb) provided by distributions configure NetworkManager as a conflict to the `waagent` package. In these cases, the agent will uninstall NetworkManager when you install the Linux agent package.
+* The Azure Linux Agent must be at or above the [minimum supported version](https://support.microsoft.com/en-us/help/4049215/extensions-and-virtual-machine-agent-minimum-version-support).
+
+> [!NOTE]
+> Make sure the `udf` and `vfat` modules are enabled. Disabling the `udf` module will cause a provisioning failure. Disabling the `vfat` module will cause both provisioning and boot failures. Cloud-init version 21.2 or later can provision VMs without requiring UDF if both of these conditions exist:
+>
+> * You created the VM by using SSH public keys and not passwords.
+> * You didn't provide any custom data.
+
+## General Linux system requirements
+
+1. Modify the kernel boot line in GRUB or GRUB2 to include the following parameters, so that all console messages are sent to the first serial port. These messages can assist Azure support with debugging any issues.
+
+ ```config
+ GRUB_CMDLINE_LINUX="rootdelay=300 console=ttyS0 earlyprintk=ttyS0 net.ifnames=0"
+ ```
+
+ We also recommend *removing* the following parameters if they exist:
+
+ ```config
+ rhgb quiet crashkernel=auto
+ ```
+
+ Graphical and quiet boot aren't useful in a cloud environment, where you want all logs sent to the serial port. You can leave the `crashkernel` option configured if needed, but this parameter reduces the amount of available memory in the VM by at least 128 MB. Reducing available memory might be problematic for smaller VM sizes.
+
+2. After you finish editing */etc/default/grub*, run the following command to rebuild the GRUB configuration:
+
+ ```bash
+ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
+ ```
+
+3. Add the Hyper-V module for initramfs by using `dracut`:
+
+ ```bash
+ cd /boot
+ sudo cp initramfs-<kernel-version>.img <kernel-version>.img.bak
+ sudo dracut -f -v initramfs-<kernel-version>.img <kernel-version> --add-drivers "hv_vmbus hv_netvsc hv_storvsc"
+ sudo grub-mkconfig -o /boot/grub/grub.cfg
+ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
+ ```
+
+ Add the Hyper-V module for initrd by using `mkinitramfs`:
+
+ ```bash
+ cd /boot
+ sudo cp initrd.img-<kernel-version> initrd.img-<kernel-version>.bak
+ sudo mkinitramfs -o initrd.img-<kernel-version> <kernel-version> --with=hv_vmbus,hv_netvsc,hv_storvsc
+ sudo update-grub
+ ```
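+
+   As an optional check, you can confirm that the Hyper-V modules are present in the rebuilt image. On Debian- and Ubuntu-based systems, the `lsinitramfs` tool from initramfs-tools lists the contents (replace the kernel version placeholder as before):
+
+   ```bash
+   lsinitramfs /boot/initrd.img-<kernel-version> | grep hv_
+   ```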
+
+4. Ensure that the SSH server is installed and configured to start at boot time. This configuration is usually the default.
+
+5. Install the Azure Linux Agent.
+
+ The Azure Linux Agent is required for provisioning a Linux image on Azure. Many distributions provide the agent as an .rpm or .deb package. The package is typically called `WALinuxAgent` or `walinuxagent`. You can also install the agent manually by following the steps in the [Azure Linux Agent guide](../extensions/agent-linux.md).
+
+ > [!NOTE]
+ > Make sure the `udf` and `vfat` modules are enabled. Removing or disabling them will cause a provisioning or boot failure. Cloud-init version 21.2 or later removes the UDF requirement.
+
+ Install the Azure Linux Agent, cloud-init, and other necessary utilities by running one of the following commands.
+
+ Use this command for Red Hat or CentOS:
+
+ ```bash
+ sudo yum install -y WALinuxAgent cloud-init cloud-utils-growpart gdisk hyperv-daemons
+ ```
+
+ Use this command for Ubuntu/Debian:
+
+ ```bash
+ sudo apt install walinuxagent cloud-init cloud-utils-growpart gdisk hyperv-daemons
+ ```
+
+ Use this command for SUSE:
+
+ ```bash
+ sudo zypper install python-azure-agent cloud-init cloud-utils-growpart gdisk hyperv-daemons
+ ```
+
+ Then enable the agent and cloud-init on all distributions:
+
+ ```bash
+ sudo systemctl enable waagent.service
+ sudo systemctl enable cloud-init.service
+ ```
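+
+   Optionally, confirm that both services are set to start at boot:
+
+   ```bash
+   sudo systemctl is-enabled waagent.service cloud-init.service
+   ```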
+
+6. Don't create swap space on the OS disk.
+
+ You can use the Azure Linux Agent or cloud-init to configure swap space via the local resource disk. This resource disk is attached to the VM after provisioning on Azure. The local resource disk is a temporary disk and might be emptied when the VM is deprovisioned. The following blocks show how to configure this swap.
+
+ If you choose Azure Linux Agent, modify the following parameters in */etc/waagent.conf*:
+
+ ```config
+ ResourceDisk.Format=y
+ ResourceDisk.Filesystem=ext4
+ ResourceDisk.MountPoint=/mnt/resource
+ ResourceDisk.EnableSwap=y
+ ResourceDisk.SwapSizeMB=2048 ## NOTE: Set this to your desired size.
+ ```
+
+ If you choose cloud-init, configure cloud-init to handle the provisioning:
+
+ ```bash
+ sudo sed -i 's/Provisioning.Agent=auto/Provisioning.Agent=cloud-auto/g' /etc/waagent.conf
+ sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
+ sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
+ ```
+
+ To configure cloud-init to format and create swap space, you have two options:
+
+ * Pass in a cloud-init configuration every time you create a VM through `customdata`. We recommend this method.
+ * Use a cloud-init directive in the image to configure swap space every time the VM is created.
+
+ Create a .cfg file to configure swap space by using cloud-init:
+
+ ```bash
+ sudo echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf
+ sudo cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
+ #cloud-config
+ # Generated by Azure cloud image build
+ disk_setup:
+ ephemeral0:
+ table_type: mbr
+ layout: [66, [33, 82]]
+ overwrite: True
+ fs_setup:
+ - device: ephemeral0.1
+ filesystem: ext4
+ - device: ephemeral0.2
+ filesystem: swap
+ mounts:
+ - ["ephemeral0.1", "/mnt/resource"]
+ - ["ephemeral0.2", "none", "swap", "sw,nofail,x-systemd.requires=cloud-init.service,x-systemd.device-timeout=2", "0", "0"]
+ EOF
+ ```
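+
+   If you instead prefer the first option, passing the configuration as custom data when you create the VM, a minimal sketch with the Azure CLI might look like the following. The resource group, VM name, and image reference are placeholders for your own values:
+
+   ```bash
+   az vm create \
+     --resource-group myResourceGroup \
+     --name myVM \
+     --image myCustomImage \
+     --custom-data ./00-azure-swap.cfg
+   ```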
+
+7. Configure cloud-init to handle the provisioning:
+ 1. Configure `waagent` for cloud-init:
+
+ ```bash
+ sudo sed -i 's/Provisioning.Agent=auto/Provisioning.Agent=cloud-init/g' /etc/waagent.conf
+ sudo sed -i 's/ResourceDisk.Format=y/ResourceDisk.Format=n/g' /etc/waagent.conf
+ sudo sed -i 's/ResourceDisk.EnableSwap=y/ResourceDisk.EnableSwap=n/g' /etc/waagent.conf
+ ```
+
+ If you're migrating a specific virtual machine and don't want to create a generalized image, set `Provisioning.Agent=disabled` in the */etc/waagent.conf* configuration.
+
+ 1. Configure mounts:
+
+ ```bash
+ sudo echo "Adding mounts and disk_setup to init stage"
+ sudo sed -i '/ - mounts/d' /etc/cloud/cloud.cfg
+ sudo sed -i '/ - disk_setup/d' /etc/cloud/cloud.cfg
+ sudo sed -i '/cloud_init_modules/a\\ - mounts' /etc/cloud/cloud.cfg
+ sudo sed -i '/cloud_init_modules/a\\ - disk_setup' /etc/cloud/cloud.cfg
+    ```
+
+ 1. Configure the Azure data source:
+
+ ```bash
+ sudo echo "Allow only Azure datasource, disable fetching network setting via IMDS"
+ sudo cat > /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg <<EOF
+ datasource_list: [ Azure ]
+ datasource:
+ Azure:
+ apply_network_config: False
+ EOF
+ ```
+
+ 1. Remove the existing swap file if you configured one:
+
+ ```bash
+ if [[ -f /mnt/resource/swapfile ]]; then
+ echo "Removing swapfile" #RHEL uses a swap file by default
+ swapoff /mnt/resource/swapfile
+ rm /mnt/resource/swapfile -f
+ fi
+ ```
+
+ 1. Configure cloud-init logging:
+
+ ```bash
+ sudo echo "Add console log file"
+ sudo cat >> /etc/cloud/cloud.cfg.d/05_logging.cfg <<EOF
+
+ # This tells cloud-init to redirect its stdout and stderr to
+ # 'tee -a /var/log/cloud-init-output.log' so the user can see output
+ # there without needing to look on the console.
+ output: {all: '| tee -a /var/log/cloud-init-output.log'}
+ EOF
+ ```
+
+8. Run the following commands to deprovision the virtual machine.
+
+ > [!CAUTION]
+ > If you're migrating a specific virtual machine and don't want to create a generalized image, skip the deprovisioning step. Running the command `waagent -force -deprovision+user` will render the source machine unusable. This step is intended only to create a generalized image.
+
+ ```bash
+ sudo rm -f /var/log/waagent.log
+ sudo cloud-init clean
+ sudo waagent -force -deprovision+user
+ sudo rm -f ~/.bash_history
+ sudo export HISTSIZE=0
+ ```
+
+ On VirtualBox, you might see an error message after you run `waagent -force -deprovision` that says `[Errno 5] Input/output error`. This error message is not critical, and you can ignore it.
+
+9. Shut down the virtual machine and upload the VHD to Azure.
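+
+   For example, one way to upload the fixed-size VHD is with the Azure CLI as a page blob. The storage account and container names here are placeholders, and the article linked in the next section covers the full procedure:
+
+   ```bash
+   az storage blob upload \
+     --account-name mystorageaccount \
+     --container-name vhds \
+     --type page \
+     --file MyLinuxVM.vhd \
+     --name MyLinuxVM.vhd
+   ```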
+
+## Next steps
+
+[Create a Linux VM from a custom disk by using the Azure CLI](upload-vhd.md)
virtual-machines Disk Encryption Isolated Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-isolated-network.md
Last updated 01/04/2023
# Azure Disk Encryption on an isolated network
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets. When connectivity is restricted by a firewall, proxy requirement, or network security group (NSG) settings, the ability of the extension to perform needed tasks might be disrupted. This disruption can result in status messages such as "Extension status not available on the VM."
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
# Azure Disk Encryption for Linux VMs
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets Azure Disk Encryption helps protect and safeguard your data to meet your organizational security and compliance commitments. It uses the [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) feature of Linux to provide volume encryption for the OS and data disks of Azure virtual machines (VMs), and is integrated with [Azure Key Vault](../../key-vault/index.yml) to help you control and manage the disk encryption keys and secrets.
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-sample-scripts.md
# Azure Disk Encryption sample scripts for Linux VMs
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets This article provides sample scripts for preparing pre-encrypted VHDs and other tasks.
virtual-machines Endorsed Distros https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/endorsed-distros.md
- Title: Linux distributions endorsed on Azure
-description: Learn about Linux on Azure-endorsed distributions, including information about Ubuntu, CentOS, Oracle, Flatcar, Debian, Red Hat, and SUSE.
- Previously updated : 08/02/2023
-# Endorsed Linux distributions on Azure
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-
-In this article we will cover the following:
-- Types of Images
-- Partners
-- Image Update Cadence
-- Azure-tuned Kernels
-
-There are several different sources of Linux VM images for Azure. Each source provides a different expectation for quality, utility, and support. This document summarizes each source (marketplace images, platform images, custom images, and community gallery images) and provides more detail about platform images, which are provided in partnership between Microsoft and several mainstream Linux publishers such as Red Hat, Canonical, and SUSE.
--
-Microsoft's Linux distribution partners provide a multitude of Linux images in the Azure Marketplace. For distributions that are not available from the Marketplace, you can always provide a custom built Linux image by following the guidelines found in [Create and upload a virtual hard disk that contains the Linux operating system](create-upload-generic.md). For older versions see [Linux Kernel Requirements](create-upload-generic.md#linux-kernel-requirements).
---
-The Azure Linux Agent is already pre-installed on Azure Marketplace images and is typically available from the distribution package repository. Source code can be found on [GitHub](https://github.com/azure/walinuxagent).
-
-For more information on support by distribution, seeΓÇ»[Support for Linux images in Microsoft Azure](https://support.microsoft.com/help/2941892/support-for-linux-and-open-source-technology-in-azure).
--
-## Types of Images
-Azure Linux images can be grouped into three categories:
-
-### Marketplace Images
-Images published and maintained by either Microsoft or partners. There are a large variety of images from multiple publishers for various use cases (security hardened, full database / application stack, etc.), and can be available free, pay-as-you-go for BYOL (bring your own license/subscription).
-
-
-Platform Images are a type of Marketplace images for which Microsoft has partnered with several mainstream publishers (see table below about Partners) to create a set of ΓÇ£platform imagesΓÇ¥ that undergo additional testing and receive predictable updates (see section below on Image Update Cadence). These platform images can be used for building your own custom images and solution stacks. These images are published by the endorsed Linux distribution partners such as Canonical (Ubuntu), Red Hat (RHEL), and Credativ (Debian).
--
-Microsoft CSS provides commercially reasonable support for these images. Additionally, Red Hat, Canonical, and SUSE offer integrated vendor support capabilities for their platform images.
--
-### Custom Images
-These images are created and maintained by the customer, often based on platform images. These images can also be created from scratch and uploaded to Azure - [learn how to create custom images](tutorial-custom-images.md). Customers can host these images in [Azure Compute Gallery](../azure-compute-gallery.md) and they can share these images with others in their organization.
-
-
-Microsoft CSS provides commercially reasonable support for custom images.
-
-### Community Gallery Images
-These images are created and provided by open source projects, communities and teams. These images are provided using licensing terms set out by the publisher, often under an open source license. They do not appear as traditional marketplace listings, however, they do appear in the portal and via command line tools. More information on community galleries can be found here: [Azure Compute Gallery](../azure-compute-gallery.md#community-gallery).
--
-Microsoft CSS provides support for Community Gallery images.
---
-## Platform Image Partners
-
-|Linux Publisher / Distribution|Images (Offers)|Microsoft Support Policy|Description|
-|||||
-|**Canonical / Ubuntu**|[Ubuntu Server 20.04 LTS](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.0001-com-ubuntu-server-focal?tab=Overview) <br/><br/> [Ubuntu Server 22.04 LTS](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.0001-com-ubuntu-server-jammy?tab=Overview)|Microsoft CSS provides commercially reasonable support these images.|Canonical produces official Ubuntu images for Microsoft Azure and continuously tracks and delivers updates to these, ensuring security and stability are built from the moment your virtual machines launch. <br/><br/> Canonical works closely with Microsoft to optimize Ubuntu images on Azure and ensure Ubuntu supports the latest cloud features as they are released. Ubuntu powers more mission-critical workloads on Azure than any other operating system. <br/><br/> https://ubuntu.com/azure |
-|**Credativ / Debian**|[Debian 11 "Bullseye"](https://azuremarketplace.microsoft.com/marketplace/apps/debian.debian-11?tab=Overview) <br/><br/> [Debian 12 "Bookworm"](https://azuremarketplace.microsoft.com/marketplace/apps/debian.debian-12?tab=Overview)|Microsoft CSS provides support for these images.|Credativ is an independent consulting and services company that specializes in the development and implementation of professional solutions by using free software. As leading open-source specialists, Credativ has international recognition with many IT departments that use their support. In conjunction with Microsoft, Credativ is preparing Debian images. The images are specially designed to run on Azure and can be easily managed via the platform. credativ will also support the long-term maintenance and updating of the Debian images for Azure through its Open Source Support Centers. <br/><br/> https://www.credativ.de/en/portfolio/support/open-source-support-center |
-|**Kinvolk / Flatcar**|[Flatcar Container Linux](https://azuremarketplace.microsoft.com/marketplace/apps/kinvolk.flatcar-container-linux-free) <br/><br/> [Flatcar Container Linux (BYOL)](https://azuremarketplace.microsoft.com/marketplace/apps/kinvolk.flatcar-container-linux) <br/><br/> [Flatcar Container Linux ARM64](https://azuremarketplace.microsoft.com/marketplace/apps/kinvolk.flatcar-container-linux-corevm)|Microsoft CSS provides commercially reasonable support these images.|Kinvolk is the team behind Flatcar Container Linux, continuing the original CoreOS vision for a minimal, immutable, and auto-updating foundation for containerized applications. As a minimal distro, Flatcar contains just those packages required for deploying containers. Its immutable file system guarantees consistency and security, while its auto-update capabilities, enable you to be always up-to-date with the latest security fixes. Kinvolk was acquired by Microsoft in April 2021 and, post-acquisition, continues its mission to support the Flatcar Container Linux community. <br/><br/> https://www.flatcar-linux.org |
-|**Oracle Linux**|[Oracle Linux](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/oracle.oracle-linux)|Microsoft CSS provides commercially reasonable support these images.|Oracle's strategy is to offer a broad portfolio of solutions for public and private clouds. The strategy gives customers choice and flexibility in how they deploy Oracle software in Oracle clouds and other clouds. Oracle's partnership with Microsoft enables customers to deploy Oracle software to Microsoft public and private clouds with the confidence of certification and support from Oracle. Oracle's commitment and investment in Oracle public and private cloud solutions is unchanged. <br/><br/> https://www.oracle.com/cloud/azure |
-|**Red Hat / Red Hat Enterprise Linux (RHEL)**|[Red Hat Enterprise Linux](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-20190605) <br/><br/> [Red Hat Enterprise Linux RAW](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-raw) <br/><br/> [Red Hat Enterprise Linux ARM64](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-arm64) <br/><br/> [Red Hat Enterprise Linux for SAP Apps](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-sap-apps) <br/><br/> [Red Hat Enterprise Linux for SAP, HA, Updated Services](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-sap-ha) <br/><br/> [Red Hat Enterprise Linux with HA add-on](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-ha)|Microsoft CSS provides commercially reasonable support these images.|The world's leading provider of open-source solutions, Red Hat helps more than 90% of Fortune 500 companies solve business challenges, align their IT and business strategies, and prepare for the future of technology. Red Hat achieves this by providing secure solutions through an open business model and an affordable, predictable subscription model. <br/><br/> https://www.redhat.com/en/partners/microsoft |
-|**Rogue Wave / CentOS**|[CentOS Based Images/Offers](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos?tab=Overview)|Microsoft CSS provides commercially reasonable support these images.|CentOS is currently on End-of-Life path scheduled to be deprecated in mid 2024.|
-|**SUSE / SUSE Linux Enterprise Server (SLES)**|[SUSE Enterprise Linux 15 SP4](https://azuremarketplace.microsoft.com/marketplace/apps/suse.sles-15-sp4-basic?tab=Overview)|Microsoft CSS provides commercially reasonable support these images.|SUSE Linux Enterprise Server on Azure is a proven platform that provides superior reliability and security for cloud computing. SUSE's versatile Linux platform seamlessly integrates with Azure cloud services to deliver an easily manageable cloud environment. With more than 9,200 certified applications from more than 1,800 independent software vendors for SUSE Linux Enterprise Server, SUSE ensures that workloads running supported in the data center can be confidently deployed on Azure. <br/><br/> https://www.suse.com/partners/alliance/microsoft |
--
-## Image Update Cadence
-Azure requires that the publishers of the endorsed Linux distributions regularly update their platform images in Azure Marketplace with the latest patches and security fixes, at a quarterly or faster cadence. Updated images in the Marketplace are available automatically to customers as new versions of an image SKU. More information about how to find Linux images: Find Linux VM images in Azure Marketplace.
-
-## Azure-tuned Kernels
-Azure works closely with various endorsed Linux distributions to optimize the images that they published to Azure Marketplace. One aspect of this collaboration is the development of "tuned" Linux kernels that are optimized for the Azure platform and delivered as fully supported components of the Linux distribution. The Azure-Tuned kernels incorporate new features and performance improvements, and at a faster (typically quarterly) cadence compared to the default or generic kernels that are available from the distribution.
-
-In most cases, you will find these kernels pre-installed on the default images in Azure Marketplace so customers will immediately get the benefit of these optimized kernels. More information about these Azure-Tuned kernels can be found in the following links:
-- [CentOS Azure-Tuned Kernel - Available via the CentOS Virtualization SIG](https://wiki.centos.org/SpecialInterestGroup/Virtualization)-- [Debian Cloud Kernel - Available with the Debian 10 and Debian 9 "backports" image on Azure](https://wiki.debian.org/Cloud/MicrosoftAzure)-- [SLES Azure-Tuned Kernel](https://www.suse.com/c/a-different-builtin-kernel-for-azure-on-demand-images)-- [Ubuntu Azure-Tuned Kernel](https://blog.ubuntu.com/2017/09/21/microsoft-and-canonical-increase-velocity-with-azure-tailored-kernel)-- [Flatcar Container Linux](https://azuremarketplace.microsoft.com/marketplace/apps/kinvolk.flatcar-container-linux-corevm-amd64)--+
+ Title: Linux distributions endorsed on Azure
+description: Learn about Linux on Azure-endorsed distributions, including information about Ubuntu, CentOS, Oracle, Flatcar, Debian, Red Hat, and SUSE.
+++++ Last updated : 08/02/2023+++++
+# Endorsed Linux distributions on Azure
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly.
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+In this article, we cover the following:
+- Types of Images
+- Partners
+- Image Update Cadence
+- Azure-tuned Kernels
+
+There are several different sources of Linux VM images for Azure, and each source comes with different expectations for quality, utility, and support. This article summarizes each source (marketplace images, platform images, custom images, and community gallery images), and provides more detail about platform images, which are images provided in partnership between Microsoft and several mainstream Linux publishers such as Red Hat, Canonical, and SUSE.
++
+Microsoft's Linux distribution partners provide a multitude of Linux images in the Azure Marketplace. For distributions that aren't available from the Marketplace, you can always provide a custom-built Linux image by following the guidelines in [Create and upload a virtual hard disk that contains the Linux operating system](create-upload-generic.md). For older versions, see [Linux Kernel Requirements](create-upload-generic.md#linux-kernel-requirements).
+++
+The Azure Linux Agent is preinstalled on Azure Marketplace images and is typically available from the distribution's package repository. Source code can be found on [GitHub](https://github.com/azure/walinuxagent).
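+
+For example, you can confirm which agent version an image ships and that the agent service is running from inside the VM. This is a minimal sketch; the service unit is named `walinuxagent` on Debian/Ubuntu-based images and `waagent` on most RHEL/SUSE-based images, so adjust the unit name for your distribution.
+
+```bash
+# Print the installed Azure Linux Agent (waagent) version
+waagent --version
+
+# Check that the agent service is active (unit name varies by distribution)
+sudo systemctl status walinuxagent 2>/dev/null || sudo systemctl status waagent
+```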
+
+For more information on support by distribution, see [Support for Linux images in Microsoft Azure](https://support.microsoft.com/help/2941892/support-for-linux-and-open-source-technology-in-azure).
++
+## Types of Images
+Azure Linux images can be grouped into three categories:
+
+### Marketplace Images
+Images published and maintained by either Microsoft or partners. There is a large variety of images from multiple publishers for various use cases (security hardened, full database/application stack, and so on), and they can be available for free, pay-as-you-go, or BYOL (bring your own license/subscription).
+
+
+Platform images are a type of Marketplace image for which Microsoft has partnered with several mainstream publishers (see the Partners table below) to create a set of "platform images" that undergo additional testing and receive predictable updates (see the Image Update Cadence section below). You can use these platform images to build your own custom images and solution stacks. They're published by the endorsed Linux distribution partners, such as Canonical (Ubuntu), Red Hat (RHEL), and Credativ (Debian).
++
+Microsoft CSS provides commercially reasonable support for these images. Additionally, Red Hat, Canonical, and SUSE offer integrated vendor support capabilities for their platform images.
++
+### Custom Images
+These images are created and maintained by the customer, often based on platform images. These images can also be created from scratch and uploaded to Azure - [learn how to create custom images](tutorial-custom-images.md). Customers can host these images in [Azure Compute Gallery](../azure-compute-gallery.md) and they can share these images with others in their organization.
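+
+As one starting point, the following is a minimal sketch of capturing a managed image from an existing, generalized and deallocated VM with the Azure CLI; the resource group, VM, and image names are placeholder examples, and for sharing across an organization you would typically publish the result to an Azure Compute Gallery instead.
+
+```azurecli
+# Capture a managed image from a generalized, deallocated VM (names are examples)
+az image create \
+    --resource-group myResourceGroup \
+    --name myCustomImage \
+    --source myVM
+```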
+
+
+Microsoft CSS provides commercially reasonable support for custom images.
+
+### Community Gallery Images
+These images are created and provided by open source projects, communities, and teams. They're provided under licensing terms set out by the publisher, often an open source license. They don't appear as traditional marketplace listings; however, they do appear in the portal and via command-line tools. For more information on community galleries, see [Azure Compute Gallery](../azure-compute-gallery.md#community-gallery).
++
+Microsoft CSS provides support for Community Gallery images.
+++
+## Platform Image Partners
+
+|Linux Publisher / Distribution|Images (Offers)|Microsoft Support Policy|Description|
+|||||
+|**Canonical / Ubuntu**|[Ubuntu Server 20.04 LTS](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.0001-com-ubuntu-server-focal?tab=Overview) <br/><br/> [Ubuntu Server 22.04 LTS](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.0001-com-ubuntu-server-jammy?tab=Overview)|Microsoft CSS provides commercially reasonable support for these images.|Canonical produces official Ubuntu images for Microsoft Azure and continuously tracks and delivers updates to them, ensuring security and stability are built in from the moment your virtual machines launch. <br/><br/> Canonical works closely with Microsoft to optimize Ubuntu images on Azure and ensure Ubuntu supports the latest cloud features as they are released. Ubuntu powers more mission-critical workloads on Azure than any other operating system. <br/><br/> https://ubuntu.com/azure |
+|**Credativ / Debian**|[Debian 11 "Bullseye"](https://azuremarketplace.microsoft.com/marketplace/apps/debian.debian-11?tab=Overview) <br/><br/> [Debian 12 "Bookworm"](https://azuremarketplace.microsoft.com/marketplace/apps/debian.debian-12?tab=Overview)|Microsoft CSS provides support for these images.|Credativ is an independent consulting and services company that specializes in the development and implementation of professional solutions by using free software. As leading open-source specialists, Credativ has international recognition with many IT departments that use their support. In conjunction with Microsoft, Credativ is preparing Debian images. The images are specially designed to run on Azure and can be easily managed via the platform. credativ will also support the long-term maintenance and updating of the Debian images for Azure through its Open Source Support Centers. <br/><br/> https://www.credativ.de/en/portfolio/support/open-source-support-center |
+|**Kinvolk / Flatcar**|[Flatcar Container Linux](https://azuremarketplace.microsoft.com/marketplace/apps/kinvolk.flatcar-container-linux-free) <br/><br/> [Flatcar Container Linux (BYOL)](https://azuremarketplace.microsoft.com/marketplace/apps/kinvolk.flatcar-container-linux) <br/><br/> [Flatcar Container Linux ARM64](https://azuremarketplace.microsoft.com/marketplace/apps/kinvolk.flatcar-container-linux-corevm)|Microsoft CSS provides commercially reasonable support for these images.|Kinvolk is the team behind Flatcar Container Linux, continuing the original CoreOS vision for a minimal, immutable, and auto-updating foundation for containerized applications. As a minimal distro, Flatcar contains just those packages required for deploying containers. Its immutable file system guarantees consistency and security, while its auto-update capabilities enable you to always be up to date with the latest security fixes. Kinvolk was acquired by Microsoft in April 2021 and, post-acquisition, continues its mission to support the Flatcar Container Linux community. <br/><br/> https://www.flatcar-linux.org |
+|**Oracle Linux**|[Oracle Linux](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/oracle.oracle-linux)|Microsoft CSS provides commercially reasonable support for these images.|Oracle's strategy is to offer a broad portfolio of solutions for public and private clouds. The strategy gives customers choice and flexibility in how they deploy Oracle software in Oracle clouds and other clouds. Oracle's partnership with Microsoft enables customers to deploy Oracle software to Microsoft public and private clouds with the confidence of certification and support from Oracle. Oracle's commitment and investment in Oracle public and private cloud solutions is unchanged. <br/><br/> https://www.oracle.com/cloud/azure |
+|**Red Hat / Red Hat Enterprise Linux (RHEL)**|[Red Hat Enterprise Linux](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-20190605) <br/><br/> [Red Hat Enterprise Linux RAW](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-raw) <br/><br/> [Red Hat Enterprise Linux ARM64](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-arm64) <br/><br/> [Red Hat Enterprise Linux for SAP Apps](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-sap-apps) <br/><br/> [Red Hat Enterprise Linux for SAP, HA, Updated Services](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-sap-ha) <br/><br/> [Red Hat Enterprise Linux with HA add-on](https://azuremarketplace.microsoft.com/marketplace/apps/redhat.rhel-ha)|Microsoft CSS provides commercially reasonable support for these images.|The world's leading provider of open-source solutions, Red Hat helps more than 90% of Fortune 500 companies solve business challenges, align their IT and business strategies, and prepare for the future of technology. Red Hat achieves this by providing secure solutions through an open business model and an affordable, predictable subscription model. <br/><br/> https://www.redhat.com/en/partners/microsoft |
+|**Rogue Wave / CentOS**|[CentOS Based Images/Offers](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos?tab=Overview)|Microsoft CSS provides commercially reasonable support for these images.|CentOS is currently on an End-of-Life path and is scheduled to be deprecated in mid-2024.|
+|**SUSE / SUSE Linux Enterprise Server (SLES)**|[SUSE Enterprise Linux 15 SP4](https://azuremarketplace.microsoft.com/marketplace/apps/suse.sles-15-sp4-basic?tab=Overview)|Microsoft CSS provides commercially reasonable support for these images.|SUSE Linux Enterprise Server on Azure is a proven platform that provides superior reliability and security for cloud computing. SUSE's versatile Linux platform seamlessly integrates with Azure cloud services to deliver an easily manageable cloud environment. With more than 9,200 certified applications from more than 1,800 independent software vendors for SUSE Linux Enterprise Server, SUSE ensures that workloads supported in the data center can be confidently deployed on Azure. <br/><br/> https://www.suse.com/partners/alliance/microsoft |
++
+## Image Update Cadence
+Azure requires that the publishers of the endorsed Linux distributions regularly update their platform images in Azure Marketplace with the latest patches and security fixes, at a quarterly or faster cadence. Updated images in the Marketplace are available automatically to customers as new versions of an image SKU. For more information about how to find Linux images, see Find Linux VM images in Azure Marketplace.
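+
+For example, you can browse the available platform images and their versions directly with the Azure CLI. This is a minimal sketch; the publisher, offer, and SKU values shown are examples, and `--all` queries the Marketplace rather than only the locally cached image list.
+
+```azurecli
+# List Ubuntu Server 22.04 LTS image versions published to the Marketplace (example values)
+az vm image list \
+    --publisher Canonical \
+    --offer 0001-com-ubuntu-server-jammy \
+    --sku 22_04-lts-gen2 \
+    --all \
+    --output table
+```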
+
+## Azure-tuned Kernels
+Azure works closely with various endorsed Linux distributions to optimize the images that they publish to Azure Marketplace. One aspect of this collaboration is the development of "tuned" Linux kernels that are optimized for the Azure platform and delivered as fully supported components of the Linux distribution. The Azure-tuned kernels incorporate new features and performance improvements, and are delivered at a faster (typically quarterly) cadence than the default or generic kernels that are available from the distribution.
+
+In most cases, these kernels come preinstalled on the default images in Azure Marketplace, so customers immediately get the benefit of the optimized kernels. A quick way to check whether a VM is running an Azure-tuned kernel is shown after the following list. More information about these Azure-tuned kernels can be found in the following links:
+- [CentOS Azure-Tuned Kernel - Available via the CentOS Virtualization SIG](https://wiki.centos.org/SpecialInterestGroup/Virtualization)
+- [Debian Cloud Kernel - Available with the Debian 10 and Debian 9 "backports" image on Azure](https://wiki.debian.org/Cloud/MicrosoftAzure)
+- [SLES Azure-Tuned Kernel](https://www.suse.com/c/a-different-builtin-kernel-for-azure-on-demand-images)
+- [Ubuntu Azure-Tuned Kernel](https://blog.ubuntu.com/2017/09/21/microsoft-and-canonical-increase-velocity-with-azure-tailored-kernel)
+- [Flatcar Container Linux](https://azuremarketplace.microsoft.com/marketplace/apps/kinvolk.flatcar-container-linux-corevm-amd64)
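+
+As noted before the list, a simple way to confirm that a VM is running an Azure-tuned kernel is to check the running kernel release string, which typically includes an "azure" suffix on images that ship the tuned kernel. This is a minimal sketch, and the exact naming varies by distribution.
+
+```bash
+# Show the running kernel release; Azure-tuned kernels usually include "azure"
+# in the release string (for example, *-azure)
+uname -r
+```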
++
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
- Title: Expand virtual hard disks on a Linux VM
-description: Learn how to expand virtual hard disks on a Linux VM with the Azure CLI.
---- Previously updated : 01/25/2024----
-# Expand virtual hard disks on a Linux VM
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-This article describes how to expand managed disks for a Linux virtual machine (VM). You can [add data disks](add-disk.md) to provide for additional storage space, and you can also expand an existing data disk. The default virtual hard disk size for the operating system (OS) is typically 30 GB on a Linux VM in Azure. This article covers expanding either OS disks or data disks. You can't expand the size of striped volumes.
-
-An OS disk has a maximum capacity of 4,095 GiB. However, many operating systems are partitioned with [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB. If you need more than 2 TiB, create and attach data disks and use them for data storage. If you need to store data on the OS disk and require the additional space, convert it to GUID Partition Table (GPT).
-
-> [!WARNING]
-> Always make sure that your filesystem is in a healthy state, your disk partition table type (GPT or MBR) will support the new size, and ensure your data is backed up before you perform disk expansion operations. For more information, see the [Azure Backup quickstart](../../backup/quick-backup-vm-portal.md).
-
-## <a id="identifyDisk"></a>Identify Azure data disk object within the operating system ##
-
-In the case of expanding a data disk when there are several data disks present on the VM, it may be difficult to relate the Azure LUNs to the Linux devices. If the OS disk needs expansion, it is clearly labeled in the Azure portal as the OS disk.
-
-Start by identifying the relationship between disk utilization, mount point, and device, with the ```df``` command.
-
-```bash
-df -Th
-```
-
-```output
-Filesystem Type Size Used Avail Use% Mounted on
-/dev/sda1 xfs 97G 1.8G 95G 2% /
-<truncated>
-/dev/sdd1 ext4 32G 30G 727M 98% /opt/db/data
-/dev/sde1 ext4 32G 49M 30G 1% /opt/db/log
-```
-
-Here we can see, for example, the `/opt/db/data` filesystem is nearly full, and is located on the `/dev/sdd1` partition. The output of `df` shows the device path regardless of whether the disk is mounted by device path or the (preferred) UUID in the fstab. Also take note of the Type column, indicating the format of the filesystem. This is important later.
-
-Now locate the LUN that correlates to `/dev/sdd` by examining the contents of `/dev/disk/azure/scsi1`. The output of the following `ls` command shows that the device known as `/dev/sdd` within the Linux OS is located at LUN1 when looking in the Azure portal.
-
-```bash
-sudo ls -alF /dev/disk/azure/scsi1/
-```
-
-```output
-total 0
-drwxr-xr-x. 2 root root 140 Sep 9 21:54 ./
-drwxr-xr-x. 4 root root 80 Sep 9 21:48 ../
-lrwxrwxrwx. 1 root root 12 Sep 9 21:48 lun0 -> ../../../sdc
-lrwxrwxrwx. 1 root root 12 Sep 9 21:48 lun1 -> ../../../sdd
-lrwxrwxrwx. 1 root root 13 Sep 9 21:48 lun1-part1 -> ../../../sdd1
-lrwxrwxrwx. 1 root root 12 Sep 9 21:54 lun2 -> ../../../sde
-lrwxrwxrwx. 1 root root 13 Sep 9 21:54 lun2-part1 -> ../../../sde1
-```
-
-## Expand an Azure Managed Disk
-
-### Expand without downtime
-
-You can expand your managed disks without deallocating your VM. The host cache setting of your disk doesn't change whether or not you can expand a data disk without deallocating your VM.
-
-This feature has the following limitations:
--
-### Expand Azure Managed Disk
-
-Make sure that you have the latest [Azure CLI](/cli/azure/install-az-cli2) installed and are signed in to an Azure account by using [az login](/cli/azure/reference-index#az-login).
-
-This article requires an existing VM in Azure with at least one data disk attached and prepared. If you don't already have a VM that you can use, see [Create and prepare a VM with data disks](tutorial-manage-disks.md#create-and-attach-disks).
-
-In the following samples, replace example parameter names such as *myResourceGroup* and *myVM* with your own values.
-
-> [!IMPORTANT]
-> If your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1 and 3.
-
-1. Operations on virtual hard disks can't be performed with the VM running. Deallocate your VM with [az vm deallocate](/cli/azure/vm#az-vm-deallocate). The following example deallocates the VM named *myVM* in the resource group named *myResourceGroup*:
-
- ```azurecli
- az vm deallocate --resource-group myResourceGroup --name myVM
- ```
-
- > [!NOTE]
- > The VM must be deallocated to expand the virtual hard disk. Stopping the VM with `az vm stop` doesn't release the compute resources. To release compute resources, use `az vm deallocate`.
-
-1. View a list of managed disks in a resource group with [az disk list](/cli/azure/disk#az-disk-list). The following example displays a list of managed disks in the resource group named *myResourceGroup*:
-
- ```azurecli
- az disk list \
- --resource-group myResourceGroup \
- --query '[*].{Name:name,Gb:diskSizeGb,Tier:accountType}' \
- --output table
- ```
-
- Expand the required disk with [az disk update](/cli/azure/disk#az-disk-update). The following example expands the managed disk named *myDataDisk* to *200* GB:
-
- ```azurecli
- az disk update \
- --resource-group myResourceGroup \
- --name myDataDisk \
- --size-gb 200
- ```
-
- > [!NOTE]
- > When you expand a managed disk, the updated size is rounded up to the nearest managed disk size. For a table of the available managed disk sizes and tiers, see [Azure Managed Disks Overview - Pricing and Billing](../managed-disks-overview.md).
-
-1. Start your VM with [az vm start](/cli/azure/vm#az-vm-start). The following example starts the VM named *myVM* in the resource group named *myResourceGroup*:
-
- ```azurecli
- az vm start --resource-group myResourceGroup --name myVM
- ```
-
-## Expand a disk partition and filesystem
-> [!NOTE]
-> While there are many tools that may be used for performing the partition resizing, the tools detailed in the remainder of this document are the same tools used by certain automated processes such as cloud-init. As described here, the `growpart` tool with the `gdisk` package provides universal compatibility with GUID Partition Table (GPT) disks, as older versions of some tools such as `fdisk` did not support GPT.
-
-### Detecting a changed disk size
-
-If a data disk was expanded without downtime using the procedure mentioned previously, the disk size won't be changed until the device is rescanned, which normally only happens during the boot process. This rescan can be called on-demand with the following procedure. In this example we have detected using the methods in this document that the data disk is currently `/dev/sda` and has been resized from 256 GiB to 512 GiB.
-
-1. Identify the currently recognized size on the first line of output from `fdisk -l /dev/sda`
-
- ```bash
- sudo fdisk -l /dev/sda
- ```
-
- ```output
- Disk /dev/sda: 256 GiB, 274877906944 bytes, 536870912 sectors
- Disk model: Virtual Disk
- Units: sectors of 1 * 512 = 512 bytes
- Sector size (logical/physical): 512 bytes / 4096 bytes
- I/O size (minimum/optimal): 4096 bytes / 4096 bytes
- Disklabel type: dos
- Disk identifier: 0x43d10aad
-
- Device Boot Start End Sectors Size Id Type
- /dev/sda1 2048 536870878 536868831 256G 83 Linux
- ```
-
-1. Insert a `1` character into the rescan file for this device. Note the reference to sda, this would change if a different disk device was resized.
-
- ```bash
- echo 1 | sudo tee /sys/class/block/sda/device/rescan
- ```
-
-1. Verify that the new disk size has been recognized
-
- ```bash
- sudo fdisk -l /dev/sda
- ```
-
- ```output
- Disk /dev/sda: 512 GiB, 549755813888 bytes, 1073741824 sectors
- Disk model: Virtual Disk
- Units: sectors of 1 * 512 = 512 bytes
- Sector size (logical/physical): 512 bytes / 4096 bytes
- I/O size (minimum/optimal): 4096 bytes / 4096 bytes
- Disklabel type: dos
- Disk identifier: 0x43d10aad
-
- Device Boot Start End Sectors Size Id Type
- /dev/sda1 2048 536870878 536868831 256G 83 Linux
- ```
-
-The remainder of this article uses the OS disk for the examples of the procedure for increasing the size of a volume at the OS level. If the expanded disk is a data disk, use the [previous guidance for identifying the data disk device](#identifyDisk), and follow these instructions as a guideline, substituting the data disk device (for example `/dev/sda`), partition numbers, volume names, mount points, and filesystem formats, as necessary.
-
-All Linux OS guidance should be viewed as generic and may apply on any distribution, but generally matches the conventions of the named marketplace publisher. Reference the Red Hat documents for the package requirements on any distribution claiming Red Hat compatibility, such as CentOS and Oracle.
-
-### Increase the size of the OS disk
-
-The following instructions apply to endorsed Linux distributions.
-
-> [!NOTE]
-> Before you proceed, make a full backup copy of your VM, or at a minimum take a snapshot of your OS disk.
-
-# [Ubuntu](#tab/ubuntu)
-
-On Ubuntu 16.x and newer, the root partition of the OS disk and filesystems will be automatically expanded to utilize all free contiguous space on the root disk by cloud-init, provided there's a small bit of free space for the resize operation. For this circumstance the sequence is simply
-
-1. Increase the size of the OS disk as detailed previously
-1. Restart the VM, and then access the VM using the **root** user account.
-1. Verify that the OS disk now displays an increased file system size.
-
-As shown in the following example, the OS disk has been resized from the portal to 100 GB. The **/dev/sda1** file system mounted on **/** now displays 97 GB.
-
-```bash
-df -Th
-```
-
-```output
-Filesystem Type Size Used Avail Use% Mounted on
-udev devtmpfs 314M 0 314M 0% /dev
-tmpfs tmpfs 65M 2.3M 63M 4% /run
-/dev/sda1 ext4 97G 1.8G 95G 2% /
-tmpfs tmpfs 324M 0 324M 0% /dev/shm
-tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
-tmpfs tmpfs 324M 0 324M 0% /sys/fs/cgroup
-/dev/sda15 vfat 105M 3.6M 101M 4% /boot/efi
-/dev/sdb1 ext4 20G 44M 19G 1% /mnt
-tmpfs tmpfs 65M 0 65M 0% /run/user/1000
-user@ubuntu:~#
-```
-
-# [SUSE](#tab/suse)
-
-To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15, and SUSE SLES 15 for SAP:
-
-1. Follow the procedure above to expand the disk in the Azure infrastructure.
-
-1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
-
- ```bash
- sudo -i
- ```
-
-1. Use the following command to install the **growpart** package, which will be used to resize the partition, if it isn't already present:
-
- ```bash
- zypper install growpart
- ```
-
-1. Use the `lsblk` command to find the partition mounted on the root of the file system (**/**). In this case, we see that partition 4 of device **sda** is mounted on **/**:
-
- ```bash
- lsblk
- ```
-
- ```output
- NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
- sda 8:0 0 48G 0 disk
- Γö£ΓöÇsda1 8:1 0 2M 0 part
- Γö£ΓöÇsda2 8:2 0 512M 0 part /boot/efi
- Γö£ΓöÇsda3 8:3 0 1G 0 part /boot
- ΓööΓöÇsda4 8:4 0 28.5G 0 part /
- sdb 8:16 0 4G 0 disk
- ΓööΓöÇsdb1 8:17 0 4G 0 part /mnt/resource
- ```
-
-1. Resize the required partition by using the `growpart` command and the partition number determined in the preceding step:
-
- ```bash
- growpart /dev/sda 4
- ```
-
- ```output
- CHANGED: partition=4 start=3151872 old: size=59762655 end=62914527 new: size=97511391 end=100663263
- ```
-
-1. Run the `lsblk` command again to check whether the partition has been increased.
-
- The following output shows that the **/dev/sda4** partition has been resized to 46.5 GB:
-
- ```bash
- lsblk
- ```
-
- ```output
- NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
- sda 8:0 0 48G 0 disk
- Γö£ΓöÇsda1 8:1 0 2M 0 part
- Γö£ΓöÇsda2 8:2 0 512M 0 part /boot/efi
- Γö£ΓöÇsda3 8:3 0 1G 0 part /boot
- ΓööΓöÇsda4 8:4 0 46.5G 0 part /
- sdb 8:16 0 4G 0 disk
- ΓööΓöÇsdb1 8:17 0 4G 0 part /mnt/resource
- ```
-
-1. Identify the type of file system on the OS disk by using the `lsblk` command with the `-f` flag:
-
- ```bash
- lsblk -f
- ```
-
- ```output
- NAME FSTYPE LABEL UUID MOUNTPOINT
- sda
- Γö£ΓöÇsda1
- Γö£ΓöÇsda2 vfat EFI AC67-D22D /boot/efi
- Γö£ΓöÇsda3 xfs BOOT 5731a128-db36-4899-b3d2-eb5ae8126188 /boot
- ΓööΓöÇsda4 xfs ROOT 70f83359-c7f2-4409-bba5-37b07534af96 /
- sdb
- ΓööΓöÇsdb1 ext4 8c4ca904-cd93-4939-b240-fb45401e2ec6 /mnt/resource
- ```
-
-1. Based on the file system type, use the appropriate commands to resize the file system.
-
- For **xfs**, use this command:
-
- ```bash
- xfs_growfs /
- ```
-
- Example output:
-
- ```output
- meta-data=/dev/sda4 isize=512 agcount=4, agsize=1867583 blks
- = sectsz=512 attr=2, projid32bit=1
- = crc=1 finobt=0 spinodes=0 rmapbt=0
- = reflink=0
- data = bsize=4096 blocks=7470331, imaxpct=25
- = sunit=0 swidth=0 blks
- naming =version 2 bsize=4096 ascii-ci=0 ftype=1
- log =internal bsize=4096 blocks=3647, version=2
- = sectsz=512 sunit=0 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- data blocks changed from 7470331 to 12188923
- ```
-
- For **ext4**, use this command:
-
- ```bash
- resize2fs /dev/sda4
- ```
-
-1. Verify the increased file system size for **df -Th** by using this command:
-
- ```bash
- df -Thl
- ```
-
- Example output:
-
- ```output
- Filesystem Type Size Used Avail Use% Mounted on
- devtmpfs devtmpfs 445M 4.0K 445M 1% /dev
- tmpfs tmpfs 458M 0 458M 0% /dev/shm
- tmpfs tmpfs 458M 14M 445M 3% /run
- tmpfs tmpfs 458M 0 458M 0% /sys/fs/cgroup
- /dev/sda4 xfs 47G 2.2G 45G 5% /
- /dev/sda3 xfs 1014M 86M 929M 9% /boot
- /dev/sda2 vfat 512M 1.1M 511M 1% /boot/efi
- /dev/sdb1 ext4 3.9G 16M 3.7G 1% /mnt/resource
- tmpfs tmpfs 92M 0 92M 0% /run/user/1000
- tmpfs tmpfs 92M 0 92M 0% /run/user/490
- ```
-
- In the preceding example, we can see that the file system size for the OS disk has been increased.
-
-# [Red Hat/CentOS with LVM](#tab/rhellvm)
-
-1. Follow the procedure above to expand the disk in the Azure infrastructure.
-
-1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
-
- ```bash
- sudo -i
- ```
-
-1. Use the `lsblk` command to determine which logical volume (LV) is mounted on the root of the file system (**/**). In this case, we see that **rootvg-rootlv** is mounted on **/**. If a different filesystem is in need of resizing, substitute the LV and mount point throughout this section.
-
- ```bash
- lsblk -f
- ```
-
- ```output
- NAME FSTYPE LABEL UUID MOUNTPOINT
- fd0
- sda
- Γö£ΓöÇsda1 vfat C13D-C339 /boot/efi
- Γö£ΓöÇsda2 xfs 8cc4c23c-fa7b-4a4d-bba8-4108b7ac0135 /boot
- Γö£ΓöÇsda3
- ΓööΓöÇsda4 LVM2_member zx0Lio-2YsN-ukmz-BvAY-LCKb-kRU0-ReRBzh
- Γö£ΓöÇrootvg-tmplv xfs 174c3c3a-9e65-409a-af59-5204a5c00550 /tmp
- Γö£ΓöÇrootvg-usrlv xfs a48dbaac-75d4-4cf6-a5e6-dcd3ffed9af1 /usr
- Γö£ΓöÇrootvg-optlv xfs 85fe8660-9acb-48b8-98aa-bf16f14b9587 /opt
- Γö£ΓöÇrootvg-homelv xfs b22432b1-c905-492b-a27f-199c1a6497e7 /home
- Γö£ΓöÇrootvg-varlv xfs 24ad0b4e-1b6b-45e7-9605-8aca02d20d22 /var
- ΓööΓöÇrootvg-rootlv xfs 4f3e6f40-61bf-4866-a7ae-5c6a94675193 /
- ```
-
-1. Check whether there's free space in the LVM volume group (VG) containing the root partition. If there's free space, skip to step 12.
-
- ```bash
- vgdisplay rootvg
- ```
-
- ```output
- Volume group
- VG Name rootvg
- System ID
- Format lvm2
- Metadata Areas 1
- Metadata Sequence No 7
- VG Access read/write
- VG Status resizable
- MAX LV 0
- Cur LV 6
- Open LV 6
- Max PV 0
- Cur PV 1
- Act PV 1
- VG Size <63.02 GiB
- PE Size 4.00 MiB
- Total PE 16132
- Alloc PE / Size 6400 / 25.00 GiB
- Free PE / Size 9732 / <38.02 GiB
- VG UUID lPUfnV-3aYT-zDJJ-JaPX-L2d7-n8sL-A9AgJb
- ```
-
- In this example, the line **Free PE / Size** shows that there's 38.02 GB free in the volume group, as the disk has already been resized.
-
-1. Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk and the gdisk handler for GPT disk layouts This package is preinstalled on most marketplace images
-
- ```bash
- yum install cloud-utils-growpart gdisk
- ```
-
- In RHEL/CentOS 8.x VMs you can use `dnf` command instead of `yum`.
-
-1. Determine which disk and partition holds the LVM physical volume (PV) or volumes in the volume group named **rootvg** by using the **pvscan** command. Note the size and free space listed between the brackets (**[** and **]**).
-
- ```bash
- pvscan
- ```
-
- ```output
- PV /dev/sda4 VG rootvg lvm2 [<63.02 GiB / <38.02 GiB free]
- ```
-
-1. Verify the size of the partition by using `lsblk`.
-
- ```bash
- lsblk /dev/sda4
- ```
-
- ```output
- NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
- sda4 8:4 0 63G 0 part
- Γö£ΓöÇrootvg-tmplv 253:1 0 2G 0 lvm /tmp
- Γö£ΓöÇrootvg-usrlv 253:2 0 10G 0 lvm /usr
- Γö£ΓöÇrootvg-optlv 253:3 0 2G 0 lvm /opt
- Γö£ΓöÇrootvg-homelv 253:4 0 1G 0 lvm /home
- Γö£ΓöÇrootvg-varlv 253:5 0 8G 0 lvm /var
- ΓööΓöÇrootvg-rootlv 253:6 0 2G 0 lvm /
- ```
-
-1. Expand the partition containing this PV using *growpart*, the device name, and partition number. Doing so expands the specified partition to use all the free contiguous space on the device.
-
- ```bash
- growpart /dev/sda 4
- ```
-
- ```output
- CHANGED: partition=4 start=2054144 old: size=132161536 end=134215680 new: size=199272414 end=201326558
- ```
-
-1. Verify that the partition has resized to the expected size by using the `lsblk` command again. Notice that in the example **sda4** has changed from 63G to 95G.
-
- ```bash
- lsblk /dev/sda4
- ```
-
- ```output
- NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
- sda4 8:4 0 95G 0 part
- Γö£ΓöÇrootvg-tmplv 253:1 0 2G 0 lvm /tmp
- Γö£ΓöÇrootvg-usrlv 253:2 0 10G 0 lvm /usr
- Γö£ΓöÇrootvg-optlv 253:3 0 2G 0 lvm /opt
- Γö£ΓöÇrootvg-homelv 253:4 0 1G 0 lvm /home
- Γö£ΓöÇrootvg-varlv 253:5 0 8G 0 lvm /var
- ΓööΓöÇrootvg-rootlv 253:6 0 2G 0 lvm /
- ```
-
-1. Expand the PV to use the rest of the newly expanded partition
-
- ```bash
- pvresize /dev/sda4
- ```
-
- ```output
- Physical volume "/dev/sda4" changed
- 1 physical volume(s) resized or updated / 0 physical volume(s) not resized
- ```
-
-1. Verify the new size of the PV is the expected size, comparing to original **[size / free]** values.
-
- ```bash
- pvscan
- ```
-
- ```output
- PV /dev/sda4 VG rootvg lvm2 [<95.02 GiB / <70.02 GiB free]
- ```
-
-1. Expand the LV by the required amount, which doesn't need to be all the free space in the volume group. In the following example, **/dev/mapper/rootvg-rootlv** is resized from 2 GB to 12 GB (an increase of 10 GB) through the following command. This command will also resize the file system on the LV.
-
- ```bash
- lvresize -r -L +10G /dev/mapper/rootvg-rootlv
- ```
-
- Example output:
-
- ```output
- Size of logical volume rootvg/rootlv changed from 2.00 GiB (512 extents) to 12.00 GiB (3072 extents).
- Logical volume rootvg/rootlv successfully resized.
- meta-data=/dev/mapper/rootvg-rootlv isize=512 agcount=4, agsize=131072 blks
- = sectsz=4096 attr=2, projid32bit=1
- = crc=1 finobt=0 spinodes=0
- data = bsize=4096 blocks=524288, imaxpct=25
- = sunit=0 swidth=0 blks
- naming =version 2 bsize=4096 ascii-ci=0 ftype=1
- log =internal bsize=4096 blocks=2560, version=2
- = sectsz=4096 sunit=1 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- data blocks changed from 524288 to 3145728
- ```
-
-1. The `lvresize` command automatically calls the appropriate resize command for the filesystem in the LV. Verify whether **/dev/mapper/rootvg-rootlv**, which is mounted on **/**, has an increased file system size by using the `df -Th` command:
-
- Example output:
-
- ```bash
- df -Th /
- ```
-
- ```output
- Filesystem Type Size Used Avail Use% Mounted on
- /dev/mapper/rootvg-rootlv xfs 12G 71M 12G 1% /
- ```
-
-> [!NOTE]
-> To use the same procedure to resize any other logical volume, change the **lv** name in step **12**.
-
-# [Red Hat/CentOS without LVM](#tab/rhelraw)
-
-1. Follow the procedure above to expand the disk in the Azure infrastructure.
-
-1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
-
- ```bash
- sudo -i
- ```
-
-1. When the VM has restarted, perform the following steps:
-
- 1. Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk and the gdisk handler for GPT disk layouts. This package is preinstalled on most marketplace images
-
- ```bash
- yum install cloud-utils-growpart gdisk
- ```
-
- In RHEL/CentOS 8.x VMs you can use `dnf` command instead of `yum`.
-
-1. Use the **lsblk -f** command to verify the partition and filesystem type holding the root (**/**) partition
-
- ```bash
- lsblk -f
- ```
-
- ```output
- NAME FSTYPE LABEL UUID MOUNTPOINT
- sda
- Γö£ΓöÇsda1 xfs 2a7bb59d-6a71-4841-a3c6-cba23413a5d2 /boot
- Γö£ΓöÇsda2 xfs 148be922-e3ec-43b5-8705-69786b522b05 /
- Γö£ΓöÇsda14
- ΓööΓöÇsda15 vfat 788D-DC65 /boot/efi
- sdb
- ΓööΓöÇsdb1 ext4 923f51ff-acbd-4b91-b01b-c56140920098 /mnt/resource
- ```
-
-1. For verification, start by listing the partition table of the sda disk with **gdisk**. In this example, we see a 48.0 GiB disk with partition #2 sized 29.0 GiB. The disk was expanded from 30 GB to 48 GB in the Azure portal.
-
- ```bash
- gdisk -l /dev/sda
- ```
-
- ```output
- GPT fdisk (gdisk) version 0.8.10
-
- Partition table scan:
- MBR: protective
- BSD: not present
- APM: not present
- GPT: present
-
- Found valid GPT with protective MBR; using GPT.
- Disk /dev/sda: 100663296 sectors, 48.0 GiB
- Logical sector size: 512 bytes
- Disk identifier (GUID): 78CDF84D-9C8E-4B9F-8978-8C496A1BEC83
- Partition table holds up to 128 entries
- First usable sector is 34, last usable sector is 62914526
- Partitions will be aligned on 2048-sector boundaries
- Total free space is 6076 sectors (3.0 MiB)
-
- Number Start (sector) End (sector) Size Code Name
- 1 1026048 2050047 500.0 MiB 0700
- 2 2050048 62912511 29.0 GiB 0700
- 14 2048 10239 4.0 MiB EF02
- 15 10240 1024000 495.0 MiB EF00 EFI System Partition
- ```
-
-1. Expand the partition for root, in this case sda2 by using the **growpart** command. Using this command expands the partition to use all of the contiguous space on the disk.
-
- ```bash
- growpart /dev/sda 2
- ```
-
- ```output
- CHANGED: partition=2 start=2050048 old: size=60862464 end=62912512 new: size=98613214 end=100663262
- ```
-
-1. Now print the new partition table with **gdisk** again. Notice that partition 2 has is now sized 47.0 GiB
-
- ```bash
- gdisk -l /dev/sda
- ```
-
- ```output
- GPT fdisk (gdisk) version 0.8.10
-
- Partition table scan:
- MBR: protective
- BSD: not present
- APM: not present
- GPT: present
-
- Found valid GPT with protective MBR; using GPT.
- Disk /dev/sda: 100663296 sectors, 48.0 GiB
- Logical sector size: 512 bytes
- Disk identifier (GUID): 78CDF84D-9C8E-4B9F-8978-8C496A1BEC83
- Partition table holds up to 128 entries
- First usable sector is 34, last usable sector is 100663262
- Partitions will be aligned on 2048-sector boundaries
- Total free space is 4062 sectors (2.0 MiB)
-
- Number Start (sector) End (sector) Size Code Name
- 1 1026048 2050047 500.0 MiB 0700
- 2 2050048 100663261 47.0 GiB 0700
- 14 2048 10239 4.0 MiB EF02
- 15 10240 1024000 495.0 MiB EF00 EFI System Partition
- ```
-
-1. Expand the filesystem on the partition with **xfs_growfs**, which is appropriate for a standard marketplace-generated RedHat system:
-
- ```bash
- xfs_growfs /
- ```
-
- ```output
- meta-data=/dev/sda2 isize=512 agcount=4, agsize=1901952 blks
- = sectsz=4096 attr=2, projid32bit=1
- = crc=1 finobt=0 spinodes=0
- data = bsize=4096 blocks=7607808, imaxpct=25
- = sunit=0 swidth=0 blks
- naming =version 2 bsize=4096 ascii-ci=0 ftype=1
- log =internal bsize=4096 blocks=3714, version=2
- = sectsz=4096 sunit=1 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- data blocks changed from 7607808 to 12326651
- ```
-
-1. Verify the new size is reflected with the **df** command
-
- ```bash
- df -hl
- ```
-
- ```output
- Filesystem Size Used Avail Use% Mounted on
- devtmpfs 452M 0 452M 0% /dev
- tmpfs 464M 0 464M 0% /dev/shm
- tmpfs 464M 6.8M 457M 2% /run
- tmpfs 464M 0 464M 0% /sys/fs/cgroup
- /dev/sda2 48G 2.1G 46G 5% /
- /dev/sda1 494M 65M 430M 13% /boot
- /dev/sda15 495M 12M 484M 3% /boot/efi
- /dev/sdb1 3.9G 16M 3.7G 1% /mnt/resource
- tmpfs 93M 0 93M 0% /run/user/1000
- ```
---
-## Expanding without downtime classic VM SKU support
-
-If you're using a classic VM SKU, it might not support expanding disks without downtime.
-
-Use the following PowerShell script to determine which VM SKUs it's available with:
-
-```azurepowershell
-Connect-AzAccount
-$subscriptionId="yourSubID"
-$location="desiredRegion"
-Set-AzContext -Subscription $subscriptionId
-$vmSizes=Get-AzComputeResourceSku -Location $location | where{$_.ResourceType -eq 'virtualMachines'}
-
-foreach($vmSize in $vmSizes){
- foreach($capability in $vmSize.Capabilities)
- {
- if(($capability.Name -eq "EphemeralOSDiskSupported" -and $capability.Value -eq "True") -or ($capability.Name -eq "PremiumIO" -and $capability.Value -eq "True") -or ($capability.Name -eq "HyperVGenerations" -and $capability.Value -match "V2"))
- {
- $vmSize.Name
- }
- }
-}
-```
+
+ Title: Expand virtual hard disks on a Linux VM
+description: Learn how to expand virtual hard disks on a Linux VM with the Azure CLI.
++++ Last updated : 01/25/2024++++
+# Expand virtual hard disks on a Linux VM
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly.
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+
+This article describes how to expand managed disks for a Linux virtual machine (VM). You can [add data disks](add-disk.md) to provide additional storage space, and you can also expand an existing data disk. The default virtual hard disk size for the operating system (OS) is typically 30 GB on a Linux VM in Azure. This article covers expanding either OS disks or data disks. You can't expand the size of striped volumes.
+
+An OS disk has a maximum capacity of 4,095 GiB. However, many operating systems are partitioned with [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB. If you need more than 2 TiB, create and attach data disks and use them for data storage. If you need to store data on the OS disk and require the additional space, convert it to GUID Partition Table (GPT).
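+
+If you're not sure which partition table type the OS disk uses, you can check before resizing. This is a minimal sketch that assumes the OS disk is `/dev/sda`; the `Partition Table` field in the output reports `gpt` or `msdos` (MBR).
+
+```bash
+# Show the partition table type (gpt or msdos/MBR) and the current partition layout
+sudo parted /dev/sda print
+```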
+
+> [!WARNING]
+> Before you perform disk expansion operations, always make sure that your filesystem is in a healthy state and that your disk partition table type (GPT or MBR) supports the new size, and ensure your data is backed up. For more information, see the [Azure Backup quickstart](../../backup/quick-backup-vm-portal.md).
+
+## <a id="identifyDisk"></a>Identify Azure data disk object within the operating system ##
+
+When you expand a data disk and several data disks are present on the VM, it may be difficult to relate the Azure LUNs to the Linux devices. If the OS disk needs expansion, it's clearly labeled in the Azure portal as the OS disk.
+
+Start by identifying the relationship between disk utilization, mount point, and device, with the ```df``` command.
+
+```bash
+df -Th
+```
+
+```output
+Filesystem Type Size Used Avail Use% Mounted on
+/dev/sda1 xfs 97G 1.8G 95G 2% /
+<truncated>
+/dev/sdd1 ext4 32G 30G 727M 98% /opt/db/data
+/dev/sde1 ext4 32G 49M 30G 1% /opt/db/log
+```
+
+Here we can see, for example, that the `/opt/db/data` filesystem is nearly full and is located on the `/dev/sdd1` partition. The output of `df` shows the device path regardless of whether the disk is mounted by device path or by the (preferred) UUID in the fstab. Also take note of the Type column, which indicates the format of the filesystem; this is important later.
+
+Now locate the LUN that correlates to `/dev/sdd` by examining the contents of `/dev/disk/azure/scsi1`. The output of the following `ls` command shows that the device known as `/dev/sdd` within the Linux OS is located at LUN1 when looking in the Azure portal.
+
+```bash
+sudo ls -alF /dev/disk/azure/scsi1/
+```
+
+```output
+total 0
+drwxr-xr-x. 2 root root 140 Sep 9 21:54 ./
+drwxr-xr-x. 4 root root 80 Sep 9 21:48 ../
+lrwxrwxrwx. 1 root root 12 Sep 9 21:48 lun0 -> ../../../sdc
+lrwxrwxrwx. 1 root root 12 Sep 9 21:48 lun1 -> ../../../sdd
+lrwxrwxrwx. 1 root root 13 Sep 9 21:48 lun1-part1 -> ../../../sdd1
+lrwxrwxrwx. 1 root root 12 Sep 9 21:54 lun2 -> ../../../sde
+lrwxrwxrwx. 1 root root 13 Sep 9 21:54 lun2-part1 -> ../../../sde1
+```
+
+## Expand an Azure Managed Disk
+
+### Expand without downtime
+
+You can expand your managed disks without deallocating your VM. Whether you can expand a data disk without deallocating your VM doesn't depend on the disk's host cache setting.
+
+This feature has the following limitations:
++
+### Expand Azure Managed Disk
+
+Make sure that you have the latest [Azure CLI](/cli/azure/install-az-cli2) installed and are signed in to an Azure account by using [az login](/cli/azure/reference-index#az-login).
+
+This article requires an existing VM in Azure with at least one data disk attached and prepared. If you don't already have a VM that you can use, see [Create and prepare a VM with data disks](tutorial-manage-disks.md#create-and-attach-disks).
+
+In the following samples, replace example parameter names such as *myResourceGroup* and *myVM* with your own values.
+
+> [!IMPORTANT]
+> If your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip steps 1 and 3.
+
+1. Operations on virtual hard disks can't be performed with the VM running. Deallocate your VM with [az vm deallocate](/cli/azure/vm#az-vm-deallocate). The following example deallocates the VM named *myVM* in the resource group named *myResourceGroup*:
+
+ ```azurecli
+ az vm deallocate --resource-group myResourceGroup --name myVM
+ ```
+
+ > [!NOTE]
+ > The VM must be deallocated to expand the virtual hard disk. Stopping the VM with `az vm stop` doesn't release the compute resources. To release compute resources, use `az vm deallocate`.
+
+1. View a list of managed disks in a resource group with [az disk list](/cli/azure/disk#az-disk-list). The following example displays a list of managed disks in the resource group named *myResourceGroup*:
+
+ ```azurecli
+ az disk list \
+ --resource-group myResourceGroup \
+ --query '[*].{Name:name,Gb:diskSizeGb,Tier:accountType}' \
+ --output table
+ ```
+
+ Expand the required disk with [az disk update](/cli/azure/disk#az-disk-update). The following example expands the managed disk named *myDataDisk* to *200* GB:
+
+ ```azurecli
+ az disk update \
+ --resource-group myResourceGroup \
+ --name myDataDisk \
+ --size-gb 200
+ ```
+
+ > [!NOTE]
+ > When you expand a managed disk, the updated size is rounded up to the nearest managed disk size. For a table of the available managed disk sizes and tiers, see [Azure Managed Disks Overview - Pricing and Billing](../managed-disks-overview.md).
+
+1. Start your VM with [az vm start](/cli/azure/vm#az-vm-start). The following example starts the VM named *myVM* in the resource group named *myResourceGroup*:
+
+ ```azurecli
+ az vm start --resource-group myResourceGroup --name myVM
+ ```
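+
+    After the VM starts, you can optionally confirm the new provisioned size from the Azure side before resizing the partition and filesystem inside the guest. This is a minimal sketch that reuses the same example resource names as the earlier steps.
+
+    ```azurecli
+    az disk show \
+        --resource-group myResourceGroup \
+        --name myDataDisk \
+        --query "diskSizeGb" \
+        --output tsv
+    ```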
+
+## Expand a disk partition and filesystem
+> [!NOTE]
+> While there are many tools that can be used to resize partitions, the tools detailed in the remainder of this document are the same tools used by certain automated processes such as cloud-init. As described here, the `growpart` tool with the `gdisk` package provides universal compatibility with GUID Partition Table (GPT) disks, because older versions of some tools such as `fdisk` didn't support GPT.
+
+### Detecting a changed disk size
+
+If a data disk was expanded without downtime using the procedure mentioned previously, the disk size won't change until the device is rescanned, which normally only happens during the boot process. This rescan can be triggered on demand with the following procedure. In this example, using the methods in this document, we've determined that the data disk is currently `/dev/sda` and that it has been resized from 256 GiB to 512 GiB.
+
+1. Identify the currently recognized size on the first line of output from `fdisk -l /dev/sda`
+
+ ```bash
+ sudo fdisk -l /dev/sda
+ ```
+
+ ```output
+ Disk /dev/sda: 256 GiB, 274877906944 bytes, 536870912 sectors
+ Disk model: Virtual Disk
+ Units: sectors of 1 * 512 = 512 bytes
+ Sector size (logical/physical): 512 bytes / 4096 bytes
+ I/O size (minimum/optimal): 4096 bytes / 4096 bytes
+ Disklabel type: dos
+ Disk identifier: 0x43d10aad
+
+ Device Boot Start End Sectors Size Id Type
+ /dev/sda1 2048 536870878 536868831 256G 83 Linux
+ ```
+
+1. Insert a `1` character into the rescan file for this device. Note the reference to sda; this would change if a different disk device were resized.
+
+ ```bash
+ echo 1 | sudo tee /sys/class/block/sda/device/rescan
+ ```
+
+1. Verify that the new disk size has been recognized
+
+ ```bash
+ sudo fdisk -l /dev/sda
+ ```
+
+ ```output
+ Disk /dev/sda: 512 GiB, 549755813888 bytes, 1073741824 sectors
+ Disk model: Virtual Disk
+ Units: sectors of 1 * 512 = 512 bytes
+ Sector size (logical/physical): 512 bytes / 4096 bytes
+ I/O size (minimum/optimal): 4096 bytes / 4096 bytes
+ Disklabel type: dos
+ Disk identifier: 0x43d10aad
+
+ Device Boot Start End Sectors Size Id Type
+ /dev/sda1 2048 536870878 536868831 256G 83 Linux
+ ```
+
+The remainder of this article uses the OS disk for the examples of the procedure for increasing the size of a volume at the OS level. If the expanded disk is a data disk, use the [previous guidance for identifying the data disk device](#identifyDisk), and follow these instructions as a guideline, substituting the data disk device (for example `/dev/sda`), partition numbers, volume names, mount points, and filesystem formats, as necessary.
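+
+For example, applied to the data disk identified earlier, the equivalent sequence might look like the following. This is a minimal sketch that assumes the expanded data disk is `/dev/sdd`, that partition 1 holds an ext4 filesystem (as shown in the earlier `df -Th` output), and that the disk was expanded without downtime, so the device needs a rescan first.
+
+```bash
+# Rescan the device so the kernel sees the new size (only needed after online expansion)
+echo 1 | sudo tee /sys/class/block/sdd/device/rescan
+
+# Grow partition 1 to use the newly available space
+sudo growpart /dev/sdd 1
+
+# Grow the ext4 filesystem to fill the resized partition
+sudo resize2fs /dev/sdd1
+```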
+
+All Linux OS guidance should be viewed as generic and might apply to any distribution, but it generally matches the conventions of the named marketplace publisher. Reference the Red Hat documents for the package requirements on any distribution claiming Red Hat compatibility, such as CentOS and Oracle.
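+
+For example, a minimal sketch for a hypothetical data disk that exposes a single ext4 partition (substitute your own device name, partition number, and file system type):
+
+```bash
+# Grow partition 1 of the data disk to use the newly added space
+sudo growpart /dev/sda 1
+
+# Resize the ext4 file system on that partition to fill the larger partition
+sudo resize2fs /dev/sda1
+```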
+
+### Increase the size of the OS disk
+
+The following instructions apply to endorsed Linux distributions.
+
+> [!NOTE]
+> Before you proceed, make a full backup copy of your VM, or at a minimum take a snapshot of your OS disk.
+
+# [Ubuntu](#tab/ubuntu)
+
+On Ubuntu 16.x and newer, the root partition of the OS disk and its filesystem are automatically expanded by cloud-init to use all free contiguous space on the root disk, provided there's a small amount of free space for the resize operation. In this case, the sequence is:
+
+1. Increase the size of the OS disk as detailed previously
+1. Restart the VM, and then access the VM using the **root** user account.
+1. Verify that the OS disk now displays an increased file system size.
+
+As shown in the following example, the OS disk has been resized from the portal to 100 GB. The **/dev/sda1** file system mounted on **/** now displays 97 GB.
+
+```bash
+df -Th
+```
+
+```output
+Filesystem Type Size Used Avail Use% Mounted on
+udev devtmpfs 314M 0 314M 0% /dev
+tmpfs tmpfs 65M 2.3M 63M 4% /run
+/dev/sda1 ext4 97G 1.8G 95G 2% /
+tmpfs tmpfs 324M 0 324M 0% /dev/shm
+tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
+tmpfs tmpfs 324M 0 324M 0% /sys/fs/cgroup
+/dev/sda15 vfat 105M 3.6M 101M 4% /boot/efi
+/dev/sdb1 ext4 20G 44M 19G 1% /mnt
+tmpfs tmpfs 65M 0 65M 0% /run/user/1000
+user@ubuntu:~#
+```
+
+# [SUSE](#tab/suse)
+
+To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15, and SUSE SLES 15 for SAP:
+
+1. Follow the procedure above to expand the disk in the Azure infrastructure.
+
+1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
+
+ ```bash
+ sudo -i
+ ```
+
+1. If the **growpart** package, which is used to resize the partition, isn't already present, install it by using the following command:
+
+ ```bash
+ zypper install growpart
+ ```
+
+1. Use the `lsblk` command to find the partition mounted on the root of the file system (**/**). In this case, we see that partition 4 of device **sda** is mounted on **/**:
+
+ ```bash
+ lsblk
+ ```
+
+ ```output
+ NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+ sda 8:0 0 48G 0 disk
+ ├─sda1 8:1 0 2M 0 part
+ ├─sda2 8:2 0 512M 0 part /boot/efi
+ ├─sda3 8:3 0 1G 0 part /boot
+ └─sda4 8:4 0 28.5G 0 part /
+ sdb 8:16 0 4G 0 disk
+ └─sdb1 8:17 0 4G 0 part /mnt/resource
+ ```
+
+1. Resize the required partition by using the `growpart` command and the partition number determined in the preceding step:
+
+ ```bash
+ growpart /dev/sda 4
+ ```
+
+ ```output
+ CHANGED: partition=4 start=3151872 old: size=59762655 end=62914527 new: size=97511391 end=100663263
+ ```
+
+1. Run the `lsblk` command again to check whether the partition has been increased.
+
+ The following output shows that the **/dev/sda4** partition has been resized to 46.5 GB:
+
+ ```bash
+ lsblk
+ ```
+
+ ```output
+ NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+ sda 8:0 0 48G 0 disk
+ ├─sda1 8:1 0 2M 0 part
+ ├─sda2 8:2 0 512M 0 part /boot/efi
+ ├─sda3 8:3 0 1G 0 part /boot
+ └─sda4 8:4 0 46.5G 0 part /
+ sdb 8:16 0 4G 0 disk
+ └─sdb1 8:17 0 4G 0 part /mnt/resource
+ ```
+
+1. Identify the type of file system on the OS disk by using the `lsblk` command with the `-f` flag:
+
+ ```bash
+ lsblk -f
+ ```
+
+ ```output
+ NAME FSTYPE LABEL UUID MOUNTPOINT
+ sda
+ ├─sda1
+ ├─sda2 vfat EFI AC67-D22D /boot/efi
+ ├─sda3 xfs BOOT 5731a128-db36-4899-b3d2-eb5ae8126188 /boot
+ └─sda4 xfs ROOT 70f83359-c7f2-4409-bba5-37b07534af96 /
+ sdb
+ └─sdb1 ext4 8c4ca904-cd93-4939-b240-fb45401e2ec6 /mnt/resource
+ ```
+
+1. Based on the file system type, use the appropriate commands to resize the file system.
+
+ For **xfs**, use this command:
+
+ ```bash
+ xfs_growfs /
+ ```
+
+ Example output:
+
+ ```output
+ meta-data=/dev/sda4 isize=512 agcount=4, agsize=1867583 blks
+ = sectsz=512 attr=2, projid32bit=1
+ = crc=1 finobt=0 spinodes=0 rmapbt=0
+ = reflink=0
+ data = bsize=4096 blocks=7470331, imaxpct=25
+ = sunit=0 swidth=0 blks
+ naming =version 2 bsize=4096 ascii-ci=0 ftype=1
+ log =internal bsize=4096 blocks=3647, version=2
+ = sectsz=512 sunit=0 blks, lazy-count=1
+ realtime =none extsz=4096 blocks=0, rtextents=0
+ data blocks changed from 7470331 to 12188923
+ ```
+
+ For **ext4**, use this command:
+
+ ```bash
+ resize2fs /dev/sda4
+ ```
+
+1. Verify the increased file system size for **df -Th** by using this command:
+
+ ```bash
+ df -Thl
+ ```
+
+ Example output:
+
+ ```output
+ Filesystem Type Size Used Avail Use% Mounted on
+ devtmpfs devtmpfs 445M 4.0K 445M 1% /dev
+ tmpfs tmpfs 458M 0 458M 0% /dev/shm
+ tmpfs tmpfs 458M 14M 445M 3% /run
+ tmpfs tmpfs 458M 0 458M 0% /sys/fs/cgroup
+ /dev/sda4 xfs 47G 2.2G 45G 5% /
+ /dev/sda3 xfs 1014M 86M 929M 9% /boot
+ /dev/sda2 vfat 512M 1.1M 511M 1% /boot/efi
+ /dev/sdb1 ext4 3.9G 16M 3.7G 1% /mnt/resource
+ tmpfs tmpfs 92M 0 92M 0% /run/user/1000
+ tmpfs tmpfs 92M 0 92M 0% /run/user/490
+ ```
+
+ In the preceding example, we can see that the file system size for the OS disk has been increased.
+
+# [Red Hat/CentOS with LVM](#tab/rhellvm)
+
+1. Follow the procedure above to expand the disk in the Azure infrastructure.
+
+1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
+
+ ```bash
+ sudo -i
+ ```
+
+1. Use the `lsblk` command to determine which logical volume (LV) is mounted on the root of the file system (**/**). In this case, we see that **rootvg-rootlv** is mounted on **/**. If a different filesystem is in need of resizing, substitute the LV and mount point throughout this section.
+
+ ```bash
+ lsblk -f
+ ```
+
+ ```output
+ NAME FSTYPE LABEL UUID MOUNTPOINT
+ fd0
+ sda
+ ├─sda1 vfat C13D-C339 /boot/efi
+ ├─sda2 xfs 8cc4c23c-fa7b-4a4d-bba8-4108b7ac0135 /boot
+ ├─sda3
+ └─sda4 LVM2_member zx0Lio-2YsN-ukmz-BvAY-LCKb-kRU0-ReRBzh
+ ├─rootvg-tmplv xfs 174c3c3a-9e65-409a-af59-5204a5c00550 /tmp
+ ├─rootvg-usrlv xfs a48dbaac-75d4-4cf6-a5e6-dcd3ffed9af1 /usr
+ ├─rootvg-optlv xfs 85fe8660-9acb-48b8-98aa-bf16f14b9587 /opt
+ ├─rootvg-homelv xfs b22432b1-c905-492b-a27f-199c1a6497e7 /home
+ ├─rootvg-varlv xfs 24ad0b4e-1b6b-45e7-9605-8aca02d20d22 /var
+ └─rootvg-rootlv xfs 4f3e6f40-61bf-4866-a7ae-5c6a94675193 /
+ ```
+
+1. Check whether there's free space in the LVM volume group (VG) containing the root partition. If there's free space, skip to step 12.
+
+ ```bash
+ vgdisplay rootvg
+ ```
+
+ ```output
+ Volume group
+ VG Name rootvg
+ System ID
+ Format lvm2
+ Metadata Areas 1
+ Metadata Sequence No 7
+ VG Access read/write
+ VG Status resizable
+ MAX LV 0
+ Cur LV 6
+ Open LV 6
+ Max PV 0
+ Cur PV 1
+ Act PV 1
+ VG Size <63.02 GiB
+ PE Size 4.00 MiB
+ Total PE 16132
+ Alloc PE / Size 6400 / 25.00 GiB
+ Free PE / Size 9732 / <38.02 GiB
+ VG UUID lPUfnV-3aYT-zDJJ-JaPX-L2d7-n8sL-A9AgJb
+ ```
+
+ In this example, the line **Free PE / Size** shows that there's 38.02 GB free in the volume group, as the disk has already been resized.
+
+1. Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk, and the **gdisk** handler for GPT disk layouts. This package is preinstalled on most marketplace images.
+
+ ```bash
+ yum install cloud-utils-growpart gdisk
+ ```
+
+ In RHEL/CentOS 8.x VMs, you can use the `dnf` command instead of `yum`.
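+
+ For example, on RHEL/CentOS 8.x:
+
+ ```bash
+ dnf install cloud-utils-growpart gdisk
+ ```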
+
+1. Determine which disk and partition holds the LVM physical volume (PV) or volumes in the volume group named **rootvg** by using the **pvscan** command. Note the size and free space listed between the brackets (**[** and **]**).
+
+ ```bash
+ pvscan
+ ```
+
+ ```output
+ PV /dev/sda4 VG rootvg lvm2 [<63.02 GiB / <38.02 GiB free]
+ ```
+
+1. Verify the size of the partition by using `lsblk`.
+
+ ```bash
+ lsblk /dev/sda4
+ ```
+
+ ```output
+ NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+ sda4 8:4 0 63G 0 part
+ ├─rootvg-tmplv 253:1 0 2G 0 lvm /tmp
+ ├─rootvg-usrlv 253:2 0 10G 0 lvm /usr
+ ├─rootvg-optlv 253:3 0 2G 0 lvm /opt
+ ├─rootvg-homelv 253:4 0 1G 0 lvm /home
+ ├─rootvg-varlv 253:5 0 8G 0 lvm /var
+ └─rootvg-rootlv 253:6 0 2G 0 lvm /
+ ```
+
+1. Expand the partition containing this PV using *growpart*, the device name, and partition number. Doing so expands the specified partition to use all the free contiguous space on the device.
+
+ ```bash
+ growpart /dev/sda 4
+ ```
+
+ ```output
+ CHANGED: partition=4 start=2054144 old: size=132161536 end=134215680 new: size=199272414 end=201326558
+ ```
+
+1. Verify that the partition has been resized to the expected size by using the `lsblk` command again. Notice that in the example **sda4** has changed from 63G to 95G.
+
+ ```bash
+ lsblk /dev/sda4
+ ```
+
+ ```output
+ NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+ sda4 8:4 0 95G 0 part
+ ├─rootvg-tmplv 253:1 0 2G 0 lvm /tmp
+ ├─rootvg-usrlv 253:2 0 10G 0 lvm /usr
+ ├─rootvg-optlv 253:3 0 2G 0 lvm /opt
+ ├─rootvg-homelv 253:4 0 1G 0 lvm /home
+ ├─rootvg-varlv 253:5 0 8G 0 lvm /var
+ └─rootvg-rootlv 253:6 0 2G 0 lvm /
+ ```
+
+1. Expand the PV to use the rest of the newly expanded partition:
+
+ ```bash
+ pvresize /dev/sda4
+ ```
+
+ ```output
+ Physical volume "/dev/sda4" changed
+ 1 physical volume(s) resized or updated / 0 physical volume(s) not resized
+ ```
+
+1. Verify that the new size of the PV is the expected size by comparing it to the original **[size / free]** values:
+
+ ```bash
+ pvscan
+ ```
+
+ ```output
+ PV /dev/sda4 VG rootvg lvm2 [<95.02 GiB / <70.02 GiB free]
+ ```
+
+1. Expand the LV by the required amount, which doesn't need to be all the free space in the volume group. In the following example, **/dev/mapper/rootvg-rootlv** is resized from 2 GB to 12 GB (an increase of 10 GB) through the following command. This command will also resize the file system on the LV.
+
+ ```bash
+ lvresize -r -L +10G /dev/mapper/rootvg-rootlv
+ ```
+
+ Example output:
+
+ ```output
+ Size of logical volume rootvg/rootlv changed from 2.00 GiB (512 extents) to 12.00 GiB (3072 extents).
+ Logical volume rootvg/rootlv successfully resized.
+ meta-data=/dev/mapper/rootvg-rootlv isize=512 agcount=4, agsize=131072 blks
+ = sectsz=4096 attr=2, projid32bit=1
+ = crc=1 finobt=0 spinodes=0
+ data = bsize=4096 blocks=524288, imaxpct=25
+ = sunit=0 swidth=0 blks
+ naming =version 2 bsize=4096 ascii-ci=0 ftype=1
+ log =internal bsize=4096 blocks=2560, version=2
+ = sectsz=4096 sunit=1 blks, lazy-count=1
+ realtime =none extsz=4096 blocks=0, rtextents=0
+ data blocks changed from 524288 to 3145728
+ ```
+
+1. The `lvresize` command automatically calls the appropriate resize command for the file system in the LV. Verify that **/dev/mapper/rootvg-rootlv**, which is mounted on **/**, has an increased file system size by using the `df -Th` command:
+
+ ```bash
+ df -Th /
+ ```
+
+ Example output:
+
+ ```output
+ Filesystem Type Size Used Avail Use% Mounted on
+ /dev/mapper/rootvg-rootlv xfs 12G 71M 12G 1% /
+ ```
+
+> [!NOTE]
+> To use the same procedure to resize any other logical volume, change the **lv** name in step **12**.
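+
+For example, a minimal sketch that grows the **var** logical volume shown in the earlier output by an additional 5 GB and resizes its file system in the same operation:
+
+```bash
+lvresize -r -L +5G /dev/mapper/rootvg-varlv
+```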
+
+# [Red Hat/CentOS without LVM](#tab/rhelraw)
+
+1. Follow the procedure above to expand the disk in the Azure infrastructure.
+
+1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
+
+ ```bash
+ sudo -i
+ ```
+
+1. When the VM has restarted, perform the following steps:
+
+ 1. Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk, and the **gdisk** handler for GPT disk layouts. This package is preinstalled on most marketplace images.
+
+ ```bash
+ yum install cloud-utils-growpart gdisk
+ ```
+
+ In RHEL/CentOS 8.x VMs, you can use the `dnf` command instead of `yum`.
+
+1. Use the **lsblk -f** command to identify the partition and file system type that hold the root (**/**) file system:
+
+ ```bash
+ lsblk -f
+ ```
+
+ ```output
+ NAME FSTYPE LABEL UUID MOUNTPOINT
+ sda
+ ├─sda1 xfs 2a7bb59d-6a71-4841-a3c6-cba23413a5d2 /boot
+ ├─sda2 xfs 148be922-e3ec-43b5-8705-69786b522b05 /
+ ├─sda14
+ └─sda15 vfat 788D-DC65 /boot/efi
+ sdb
+ └─sdb1 ext4 923f51ff-acbd-4b91-b01b-c56140920098 /mnt/resource
+ ```
+
+1. For verification, start by listing the partition table of the sda disk with **gdisk**. In this example, we see a 48.0 GiB disk with partition #2 sized 29.0 GiB. The disk was expanded from 30 GB to 48 GB in the Azure portal.
+
+ ```bash
+ gdisk -l /dev/sda
+ ```
+
+ ```output
+ GPT fdisk (gdisk) version 0.8.10
+
+ Partition table scan:
+ MBR: protective
+ BSD: not present
+ APM: not present
+ GPT: present
+
+ Found valid GPT with protective MBR; using GPT.
+ Disk /dev/sda: 100663296 sectors, 48.0 GiB
+ Logical sector size: 512 bytes
+ Disk identifier (GUID): 78CDF84D-9C8E-4B9F-8978-8C496A1BEC83
+ Partition table holds up to 128 entries
+ First usable sector is 34, last usable sector is 62914526
+ Partitions will be aligned on 2048-sector boundaries
+ Total free space is 6076 sectors (3.0 MiB)
+
+ Number Start (sector) End (sector) Size Code Name
+ 1 1026048 2050047 500.0 MiB 0700
+ 2 2050048 62912511 29.0 GiB 0700
+ 14 2048 10239 4.0 MiB EF02
+ 15 10240 1024000 495.0 MiB EF00 EFI System Partition
+ ```
+
+1. Expand the root partition, in this case **sda2**, by using the **growpart** command. This command expands the partition to use all of the contiguous space on the disk.
+
+ ```bash
+ growpart /dev/sda 2
+ ```
+
+ ```output
+ CHANGED: partition=2 start=2050048 old: size=60862464 end=62912512 new: size=98613214 end=100663262
+ ```
+
+1. Print the new partition table with **gdisk** again. Notice that partition 2 is now sized 47.0 GiB:
+
+ ```bash
+ gdisk -l /dev/sda
+ ```
+
+ ```output
+ GPT fdisk (gdisk) version 0.8.10
+
+ Partition table scan:
+ MBR: protective
+ BSD: not present
+ APM: not present
+ GPT: present
+
+ Found valid GPT with protective MBR; using GPT.
+ Disk /dev/sda: 100663296 sectors, 48.0 GiB
+ Logical sector size: 512 bytes
+ Disk identifier (GUID): 78CDF84D-9C8E-4B9F-8978-8C496A1BEC83
+ Partition table holds up to 128 entries
+ First usable sector is 34, last usable sector is 100663262
+ Partitions will be aligned on 2048-sector boundaries
+ Total free space is 4062 sectors (2.0 MiB)
+
+ Number Start (sector) End (sector) Size Code Name
+ 1 1026048 2050047 500.0 MiB 0700
+ 2 2050048 100663261 47.0 GiB 0700
+ 14 2048 10239 4.0 MiB EF02
+ 15 10240 1024000 495.0 MiB EF00 EFI System Partition
+ ```
+
+1. Expand the filesystem on the partition with **xfs_growfs**, which is appropriate for a standard marketplace-generated RedHat system:
+
+ ```bash
+ xfs_growfs /
+ ```
+
+ ```output
+ meta-data=/dev/sda2 isize=512 agcount=4, agsize=1901952 blks
+ = sectsz=4096 attr=2, projid32bit=1
+ = crc=1 finobt=0 spinodes=0
+ data = bsize=4096 blocks=7607808, imaxpct=25
+ = sunit=0 swidth=0 blks
+ naming =version 2 bsize=4096 ascii-ci=0 ftype=1
+ log =internal bsize=4096 blocks=3714, version=2
+ = sectsz=4096 sunit=1 blks, lazy-count=1
+ realtime =none extsz=4096 blocks=0, rtextents=0
+ data blocks changed from 7607808 to 12326651
+ ```
+
+1. Verify that the new size is reflected by using the **df** command:
+
+ ```bash
+ df -hl
+ ```
+
+ ```output
+ Filesystem Size Used Avail Use% Mounted on
+ devtmpfs 452M 0 452M 0% /dev
+ tmpfs 464M 0 464M 0% /dev/shm
+ tmpfs 464M 6.8M 457M 2% /run
+ tmpfs 464M 0 464M 0% /sys/fs/cgroup
+ /dev/sda2 48G 2.1G 46G 5% /
+ /dev/sda1 494M 65M 430M 13% /boot
+ /dev/sda15 495M 12M 484M 3% /boot/efi
+ /dev/sdb1 3.9G 16M 3.7G 1% /mnt/resource
+ tmpfs 93M 0 93M 0% /run/user/1000
+ ```
+++
+## Classic VM SKU support for expanding without downtime
+
+If you're using a classic VM SKU, it might not support expanding disks without downtime.
+
+Use the following PowerShell script to determine which VM SKUs the feature is available with:
+
+```azurepowershell
+Connect-AzAccount
+$subscriptionId="yourSubID"
+$location="desiredRegion"
+Set-AzContext -Subscription $subscriptionId
+
+# Get all VM size SKUs that are available in the selected region
+$vmSizes=Get-AzComputeResourceSku -Location $location | where{$_.ResourceType -eq 'virtualMachines'}
+
+# Print the name of each size that reports a capability associated with
+# expanding disks without downtime (a size can appear more than once
+# if it matches multiple capabilities)
+foreach($vmSize in $vmSizes){
+  foreach($capability in $vmSize.Capabilities)
+  {
+   if(($capability.Name -eq "EphemeralOSDiskSupported" -and $capability.Value -eq "True") -or ($capability.Name -eq "PremiumIO" -and $capability.Value -eq "True") -or ($capability.Name -eq "HyperVGenerations" -and $capability.Value -match "V2"))
+   {
+     $vmSize.Name
+   }
+  }
+}
+```
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder.md
az sig image-definition create \
--gallery-image-definition $imageDefName \ --publisher myIbPublisher \ --offer myOffer \
- --sku 18.04-LTS \
- --os-type Linux
+ --sku 20_04-lts-gen2 \
+ --os-type Linux \
+ --hyper-v-generation V2 \
+ --features SecurityType=TrustedLaunchSupported
``` ## Download and configure the JSON file
az vm create \
--admin-username aibuser \ --location $location \ --image "/subscriptions/$subscriptionID/resourceGroups/$sigResourceGroup/providers/Microsoft.Compute/galleries/$sigName/images/$imageDefName/versions/latest" \
+ --security-type TrustedLaunch \
--generate-ssh-keys ```
virtual-machines Imaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/imaging.md
# Bringing and creating Linux images in Azure
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets This overview covers the basic concepts around imaging and how to successfully build and use Linux images in Azure. Before you bring a custom image to Azure, you need to be aware of the types and options available to you.
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
# Install NVIDIA GPU drivers on N-series VMs running Linux
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs To take advantage of the GPU capabilities of Azure N-series VMs backed by NVIDIA GPUs, you must install NVIDIA GPU drivers. The [NVIDIA GPU Driver Extension](../extensions/hpccompute-gpu-linux.md) installs appropriate NVIDIA CUDA or GRID drivers on an N-series VM. Install or manage the extension using the Azure portal or tools such as the Azure CLI or Azure Resource Manager templates. See the [NVIDIA GPU Driver Extension documentation](../extensions/hpccompute-gpu-linux.md) for supported distributions and deployment steps.
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command-managed.md
# Run scripts in your Linux VM by using managed Run Commands
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets > [!IMPORTANT]
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command.md
ms.devlang: azurecli
# Run scripts in your Linux VM by using action Run Commands
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets The Run Command feature uses the virtual machine (VM) agent to run shell scripts within an Azure Linux VM. You can use these scripts for general machine or application management. They can help you to quickly diagnose and remediate VM access and network issues and get the VM back to a good state.
virtual-machines Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/storage-performance.md
# Optimize performance on Lsv3, Lasv3, and Lsv2-series Linux VMs
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Uniform scale sets Lsv3, Lasv3, and Lsv2-series Azure Virtual Machines (Azure VMs) support various workloads that need high I/O and throughput on local storage across a wide range of applications and industries. The L-series is ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases, including Cassandra, MongoDB, Cloudera, and Redis.
virtual-machines Time Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/time-sync.md
# Time sync for Linux VMs in Azure
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets Time sync is important for security and event correlation. Sometimes it's used for distributed transactions implementation. Time accuracy between multiple computer systems is achieved through synchronization. Synchronization can be affected by multiple things, including reboots and network traffic between the time source and the computer fetching the time.
virtual-machines Tutorial Devops Azure Pipelines Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-devops-azure-pipelines-classic.md
Title: Configure rolling deployments for Azure Linux virtual machines
description: Learn how to set up a classic release pipeline and deploy your application to Linux virtual machines using the rolling deployment strategy.
-tags: azure-devops-pipelines
virtual-machines Tutorial Manage Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-manage-vm.md
- Title: Tutorial - Create and manage Linux VMs with the Azure CLI
-description: In this tutorial, you learn how to use the Azure CLI to create and manage Linux VMs in Azure
---- Previously updated : 03/23/2023--
-#Customer intent: As an IT administrator, I want to learn about common maintenance tasks so that I can create and manage Linux VMs in Azure
--
-# Tutorial: Create and Manage Linux VMs with the Azure CLI
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers basic Azure virtual machine deployment items such as selecting a VM size, selecting a VM image, and deploying a VM. You learn how to:
-
-> [!div class="checklist"]
-> * Create and connect to a VM
-> * Select and use VM images
-> * View and use specific VM sizes
-> * Resize a VM
-> * View and understand VM state
-
-This tutorial uses the CLI within the [Azure Cloud Shell](../../cloud-shell/overview.md), which is constantly updated to the latest version.
-
-If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
-
-## Create resource group
-
-Create a resource group with the [az group create](/cli/azure/group) command.
-
-An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a virtual machine. In this example, a resource group named *myResourceGroupVM* is created in the *eastus2* region.
--
-```azurecli-interactive
-az group create --name myResourceGroupVM --location eastus2
-```
-
-The resource group is specified when creating or modifying a VM, which can be seen throughout this tutorial.
-
-## Create virtual machine
-
-Create a virtual machine with the [az vm create](/cli/azure/vm) command.
-
-When you create a virtual machine, several options are available such as operating system image, disk sizing, and administrative credentials. The following example creates a VM named *myVM* that runs SUSE Linux Enterprise Server (SLES). A user account named *azureuser* is created on the VM, and SSH keys are generated if they do not exist in the default key location (*~/.ssh*):
-
-```azurecli-interactive
-az vm create \
- --resource-group myResourceGroupVM \
- --name myVM \
- --image SuseSles15SP3 \
- --public-ip-sku Standard \
- --admin-username azureuser \
- --generate-ssh-keys
-```
-
-It may take a few minutes to create the VM. Once the VM has been created, the Azure CLI outputs information about the VM. Take note of the `publicIpAddress`, this address can be used to access the virtual machine.
-
-```output
-{
- "fqdns": "",
- "id": "/subscriptions/d5b9d4b7-6fc1-0000-0000-000000000000/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/virtualMachines/myVM",
- "location": "eastus2",
- "macAddress": "00-0D-3A-23-9A-49",
- "powerState": "VM running",
- "privateIpAddress": "10.0.0.4",
- "publicIpAddress": "52.174.34.95",
- "resourceGroup": "myResourceGroupVM"
-}
-```
-
-## Connect to VM
-
-You can now connect to the VM with SSH in the Azure Cloud Shell or from your local computer. Replace the example IP address with the `publicIpAddress` noted in the previous step.
-
-```bash
-ssh azureuser@52.174.34.95
-```
-
-Once logged in to the VM, you can install and configure applications. When you are finished, you close the SSH session as normal:
-
-```bash
-exit
-```
-
-## Understand VM images
-
-The Azure Marketplace includes many images that can be used to create VMs. In the previous steps, a virtual machine was created using an Ubuntu image. In this step, the Azure CLI is used to search the marketplace for a CentOS image, which is then used to deploy a second virtual machine.
-
-To see a list of the most commonly used images, use the [az vm image list](/cli/azure/vm/image) command.
-
-```azurecli-interactive
-az vm image list --output table
-```
-
-The command output returns the most popular VM images on Azure.
-
-```output
-Architecture Offer Publisher Sku Urn UrnAlias Version
- - - --
-x64 CentOS OpenLogic 7.5 OpenLogic:CentOS:7.5:latest CentOS latest
-x64 debian-10 Debian 10 Debian:debian-10:10:latest Debian latest
-x64 flatcar-container-linux-free kinvolk stable kinvolk:flatcar-container-linux-free:stable:latest Flatcar latest
-x64 opensuse-leap-15-3 SUSE gen2 SUSE:opensuse-leap-15-3:gen2:latest openSUSE-Leap latest
-x64 RHEL RedHat 7-LVM RedHat:RHEL:7-LVM:latest RHEL latest
-x64 sles-15-sp3 SUSE gen2 SUSE:sles-15-sp3:gen2:latest SLES latest
-x64 UbuntuServer Canonical 18.04-LTS Canonical:UbuntuServer:18.04-LTS:latest UbuntuLTS latest
-x64 WindowsServer MicrosoftWindowsServer 2022-Datacenter MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest Win2022Datacenter latest
-x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition-core MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest Win2022AzureEditionCore latest
-x64 WindowsServer MicrosoftWindowsServer 2019-Datacenter MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest Win2019Datacenter latest
-x64 WindowsServer MicrosoftWindowsServer 2016-Datacenter MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest Win2016Datacenter latest
-x64 WindowsServer MicrosoftWindowsServer 2012-R2-Datacenter MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest Win2012R2Datacenter latest
-x64 WindowsServer MicrosoftWindowsServer 2012-Datacenter MicrosoftWindowsServer:WindowsServer:2012-Datacenter:latest Win2012Datacenter latest
-x64 WindowsServer MicrosoftWindowsServer 2008-R2-SP1 MicrosoftWindowsServer:WindowsServer:2008-R2-SP1:latest Win2008R2SP1 latest
-```
-
-A full list can be seen by adding the `--all` parameter. The image list can also be filtered by `--publisher` or `ΓÇô-offer`. In this example, the list is filtered for all images, published by OpenLogic, with an offer that matches *CentOS*.
-
-```azurecli-interactive
-az vm image list --offer CentOS --publisher OpenLogic --all --output table
-```
-
-Example partial output:
-
-```output
-Architecture Offer Publisher Sku Urn Version
- -- --
-x64 CentOS OpenLogic 8_2 OpenLogic:CentOS:8_2:8.2.2020111800 8.2.2020111800
-x64 CentOS OpenLogic 8_2-gen2 OpenLogic:CentOS:8_2-gen2:8.2.2020062401 8.2.2020062401
-x64 CentOS OpenLogic 8_2-gen2 OpenLogic:CentOS:8_2-gen2:8.2.2020100601 8.2.2020100601
-x64 CentOS OpenLogic 8_2-gen2 OpenLogic:CentOS:8_2-gen2:8.2.2020111801 8.2.2020111801
-x64 CentOS OpenLogic 8_3 OpenLogic:CentOS:8_3:8.3.2020120900 8.3.2020120900
-x64 CentOS OpenLogic 8_3 OpenLogic:CentOS:8_3:8.3.2021020400 8.3.2021020400
-x64 CentOS OpenLogic 8_3-gen2 OpenLogic:CentOS:8_3-gen2:8.3.2020120901 8.3.2020120901
-x64 CentOS OpenLogic 8_3-gen2 OpenLogic:CentOS:8_3-gen2:8.3.2021020401 8.3.2021020401
-x64 CentOS OpenLogic 8_4 OpenLogic:CentOS:8_4:8.4.2021071900 8.4.2021071900
-x64 CentOS OpenLogic 8_4-gen2 OpenLogic:CentOS:8_4-gen2:8.4.2021071901 8.4.2021071901
-x64 CentOS OpenLogic 8_5 OpenLogic:CentOS:8_5:8.5.2022012100 8.5.2022012100
-x64 CentOS OpenLogic 8_5 OpenLogic:CentOS:8_5:8.5.2022101800 8.5.2022101800
-x64 CentOS OpenLogic 8_5-gen2 OpenLogic:CentOS:8_5-gen2:8.5.2022012101 8.5.2022012101
-```
---
-> [!NOTE]
-> Canonical has changed the **Offer** names they use for the most recent versions. Before Ubuntu 20.04, the **Offer** name is UbuntuServer. For Ubuntu 20.04 the **Offer** name is `0001-com-ubuntu-server-focal` and for Ubuntu 22.04 it's `0001-com-ubuntu-server-jammy`.
-
-To deploy a VM using a specific image, take note of the value in the *Urn* column, which consists of the publisher, offer, SKU, and optionally a version number to [identify](cli-ps-findimage.md#terminology) the image. When specifying the image, the image version number can be replaced with `latest`, which selects the latest version of the distribution. In this example, the `--image` parameter is used to specify the latest version of a CentOS 8.5.
-
-```azurecli-interactive
-az vm create --resource-group myResourceGroupVM --name myVM2 --image OpenLogic:CentOS:8_5:latest --generate-ssh-keys
-```
-
-## Understand VM sizes
-
-A virtual machine size determines the amount of compute resources such as CPU, GPU, and memory that are made available to the virtual machine. Virtual machines need to be sized appropriately for the expected work load. If workload increases, an existing virtual machine can be resized.
-
-### VM Sizes
-
-The following table categorizes sizes into use cases.
-
-| Type | Description |
-|--||
-| [General purpose](../sizes-general.md) | Balanced CPU-to-memory. Ideal for dev / test and small to medium applications and data solutions. |
-| [Compute optimized](../sizes-compute.md) | High CPU-to-memory. Good for medium traffic applications, network appliances, and batch processes. |
-| [Memory optimized](../sizes-memory.md) | High memory-to-core. Great for relational databases, medium to large caches, and in-memory analytics. |
-| [Storage optimized](../sizes-storage.md) | High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases. |
-| [GPU](../sizes-gpu.md) | Specialized VMs targeted for heavy graphic rendering and video editing. |
-| [High performance](../sizes-hpc.md) | Our most powerful CPU VMs with optional high-throughput network interfaces (RDMA). |
---
-### Find available VM sizes
-
-To see a list of VM sizes available in a particular region, use the [az vm list-sizes](/cli/azure/vm) command.
-
-```azurecli-interactive
-az vm list-sizes --location eastus2 --output table
-```
-
-Example partial output:
-
-```output
- MaxDataDiskCount MemoryInMb Name NumberOfCores OsDiskSizeInMb ResourceDiskSizeInMb
- - - -
-4 8192 Standard_D2ds_v4 2 1047552 76800
-8 16384 Standard_D4ds_v4 4 1047552 153600
-16 32768 Standard_D8ds_v4 8 1047552 307200
-32 65536 Standard_D16ds_v4 16 1047552 614400
-32 131072 Standard_D32ds_v4 32 1047552 1228800
-32 196608 Standard_D48ds_v4 48 1047552 1843200
-32 262144 Standard_D64ds_v4 64 1047552 2457600
-4 8192 Standard_D2ds_v5 2 1047552 76800
-8 16384 Standard_D4ds_v5 4 1047552 153600
-16 32768 Standard_D8ds_v5 8 1047552 307200
-32 65536 Standard_D16ds_v5 16 1047552 614400
-32 131072 Standard_D32ds_v5 32 1047552 1228800
-32 196608 Standard_D48ds_v5 48 1047552 1843200
-32 262144 Standard_D64ds_v5 64 1047552 2457600
-32 393216 Standard_D96ds_v5 96 1047552 3686400
-```
-
-### Create VM with specific size
-
-In the previous VM creation example, a size was not provided, which results in a default size. A VM size can be selected at creation time using [az vm create](/cli/azure/vm) and the `--size` parameter.
-
-```azurecli-interactive
-az vm create \
- --resource-group myResourceGroupVM \
- --name myVM3 \
- --image SuseSles15SP3 \
- --size Standard_D2ds_v4 \
- --generate-ssh-keys
-```
-
-### Resize a VM
-
-After a VM has been deployed, it can be resized to increase or decrease resource allocation. You can view the current of size of a VM with [az vm show](/cli/azure/vm):
-
-```azurecli-interactive
-az vm show --resource-group myResourceGroupVM --name myVM --query hardwareProfile.vmSize
-```
-
-Before resizing a VM, check if the desired size is available on the current Azure cluster. The [az vm list-vm-resize-options](/cli/azure/vm) command returns the list of sizes.
-
-```azurecli-interactive
-az vm list-vm-resize-options --resource-group myResourceGroupVM --name myVM --query [].name
-```
-
-If the desired size is available, the VM can be resized from a powered-on state, however it is rebooted during the operation. Use the [az vm resize]( /cli/azure/vm) command to perform the resize.
-
-```azurecli-interactive
-az vm resize --resource-group myResourceGroupVM --name myVM --size Standard_D4s_v3
-```
-
-If the desired size is not on the current cluster, the VM needs to be deallocated before the resize operation can occur. Use the [az vm deallocate]( /cli/azure/vm) command to stop and deallocate the VM. Note, when the VM is powered back on, any data on the temp disk may be removed. The public IP address also changes unless a static IP address is being used.
-
-```azurecli-interactive
-az vm deallocate --resource-group myResourceGroupVM --name myVM
-```
-
-Once deallocated, the resize can occur.
-
-```azurecli-interactive
-az vm resize --resource-group myResourceGroupVM --name myVM --size Standard_GS1
-```
-
-After the resize, the VM can be started.
-
-```azurecli-interactive
-az vm start --resource-group myResourceGroupVM --name myVM
-```
-
-## VM power states
-
-An Azure VM can have one of many power states. This state represents the current state of the VM from the standpoint of the hypervisor.
-
-### Power states
-
-| Power State | Description
-|-|-|
-| Starting | Indicates the virtual machine is being started. |
-| Running | Indicates that the virtual machine is running. |
-| Stopping | Indicates that the virtual machine is being stopped. |
-| Stopped | Indicates that the virtual machine is stopped. Virtual machines in the stopped state still incur compute charges. |
-| Deallocating | Indicates that the virtual machine is being deallocated. |
-| Deallocated | Indicates that the virtual machine is removed from the hypervisor but still available in the control plane. Virtual machines in the Deallocated state do not incur compute charges. |
-| - | Indicates that the power state of the virtual machine is unknown. |
-
-### Find the power state
-
-To retrieve the state of a particular VM, use the [az vm get-instance-view](/cli/azure/vm) command. Be sure to specify a valid name for a virtual machine and resource group.
-
-```azurecli-interactive
-az vm get-instance-view \
- --name myVM \
- --resource-group myResourceGroupVM \
- --query instanceView.statuses[1] --output table
-```
-
-Output:
-
-```output
-Code Level DisplayStatus
- -
-PowerState/running Info VM running
-```
-
-To retrieve the power state of all the VMs in your subscription, use the [Virtual Machines - List All API](/rest/api/compute/virtualmachines/listall) with parameter **statusOnly** set to *true*.
-
-## Management tasks
-
-During the life-cycle of a virtual machine, you may want to run management tasks such as starting, stopping, or deleting a virtual machine. Additionally, you may want to create scripts to automate repetitive or complex tasks. Using the Azure CLI, many common management tasks can be run from the command line or in scripts.
-
-### Get IP address
-
-This command returns the private and public IP addresses of a virtual machine.
-
-```azurecli-interactive
-az vm list-ip-addresses --resource-group myResourceGroupVM --name myVM --output table
-```
-
-### Stop virtual machine
-
-```azurecli-interactive
-az vm stop --resource-group myResourceGroupVM --name myVM
-```
-
-### Start virtual machine
-
-```azurecli-interactive
-az vm start --resource-group myResourceGroupVM --name myVM
-```
-
-### Deleting VM resources
-
-Depending on how you delete a VM, it may only delete the VM resource, not the networking and disk resources. You can change the default behavior to delete other resources when you delete the VM. For more information, see [Delete a VM and attached resources](../delete.md).
-
-Deleting a resource group also deletes all resources in the resource group, like the VM, virtual network, and disk. The `--no-wait` parameter returns control to the prompt without waiting for the operation to complete. The `--yes` parameter confirms that you wish to delete the resources without an additional prompt to do so.
-
-```azurecli-interactive
-az group delete --name myResourceGroupVM --no-wait --yes
-```
-
-## Next steps
-
-In this tutorial, you learned about basic VM creation and management such as how to:
-
-> [!div class="checklist"]
-> * Create and connect to a VM
-> * Select and use VM images
-> * View and use specific VM sizes
-> * Resize a VM
-> * View and understand VM state
-
-Advance to the next tutorial to learn about VM disks.
-
-> [!div class="nextstepaction"]
-> [Create and Manage VM disks](./tutorial-manage-disks.md)
+
+ Title: Tutorial - Create and manage Linux VMs with the Azure CLI
+description: In this tutorial, you learn how to use the Azure CLI to create and manage Linux VMs in Azure
++++ Last updated : 03/23/2023++
+#Customer intent: As an IT administrator, I want to learn about common maintenance tasks so that I can create and manage Linux VMs in Azure
+++
+# Tutorial: Create and Manage Linux VMs with the Azure CLI
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+
+Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers basic Azure virtual machine deployment items such as selecting a VM size, selecting a VM image, and deploying a VM. You learn how to:
+
+> [!div class="checklist"]
+> * Create and connect to a VM
+> * Select and use VM images
+> * View and use specific VM sizes
+> * Resize a VM
+> * View and understand VM state
+
+This tutorial uses the CLI within the [Azure Cloud Shell](../../cloud-shell/overview.md), which is constantly updated to the latest version.
+
+If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+
+## Create resource group
+
+Create a resource group with the [az group create](/cli/azure/group) command.
+
+An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a virtual machine. In this example, a resource group named *myResourceGroupVM* is created in the *eastus2* region.
++
+```azurecli-interactive
+az group create --name myResourceGroupVM --location eastus2
+```
+
+The resource group is specified when creating or modifying a VM, as you'll see throughout this tutorial.
+
+## Create virtual machine
+
+Create a virtual machine with the [az vm create](/cli/azure/vm) command.
+
+When you create a virtual machine, several options are available such as operating system image, disk sizing, and administrative credentials. The following example creates a VM named *myVM* that runs SUSE Linux Enterprise Server (SLES). A user account named *azureuser* is created on the VM, and SSH keys are generated if they do not exist in the default key location (*~/.ssh*):
+
+```azurecli-interactive
+az vm create \
+ --resource-group myResourceGroupVM \
+ --name myVM \
+ --image SuseSles15SP3 \
+ --public-ip-sku Standard \
+ --admin-username azureuser \
+ --generate-ssh-keys
+```
+
+It may take a few minutes to create the VM. Once the VM has been created, the Azure CLI outputs information about the VM. Take note of the `publicIpAddress`; this address can be used to access the virtual machine.
+
+```output
+{
+ "fqdns": "",
+ "id": "/subscriptions/d5b9d4b7-6fc1-0000-0000-000000000000/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/virtualMachines/myVM",
+ "location": "eastus2",
+ "macAddress": "00-0D-3A-23-9A-49",
+ "powerState": "VM running",
+ "privateIpAddress": "10.0.0.4",
+ "publicIpAddress": "52.174.34.95",
+ "resourceGroup": "myResourceGroupVM"
+}
+```
+
+## Connect to VM
+
+You can now connect to the VM with SSH in the Azure Cloud Shell or from your local computer. Replace the example IP address with the `publicIpAddress` noted in the previous step.
+
+```bash
+ssh azureuser@52.174.34.95
+```
+
+Once logged in to the VM, you can install and configure applications. When you're finished, close the SSH session as normal:
+
+```bash
+exit
+```
+
+## Understand VM images
+
+The Azure Marketplace includes many images that can be used to create VMs. In the previous steps, a virtual machine was created using a SUSE Linux Enterprise Server image. In this step, the Azure CLI is used to search the marketplace for a CentOS image, which is then used to deploy a second virtual machine.
+
+To see a list of the most commonly used images, use the [az vm image list](/cli/azure/vm/image) command.
+
+```azurecli-interactive
+az vm image list --output table
+```
+
+The command output returns the most popular VM images on Azure.
+
+```output
+Architecture Offer Publisher Sku Urn UrnAlias Version
+--------------  ----------------------------  ----------------------  ----------------------------------  -------------------------------------------------------------------------------  -----------------------  ---------
+x64 CentOS OpenLogic 7.5 OpenLogic:CentOS:7.5:latest CentOS latest
+x64 debian-10 Debian 10 Debian:debian-10:10:latest Debian latest
+x64 flatcar-container-linux-free kinvolk stable kinvolk:flatcar-container-linux-free:stable:latest Flatcar latest
+x64 opensuse-leap-15-3 SUSE gen2 SUSE:opensuse-leap-15-3:gen2:latest openSUSE-Leap latest
+x64 RHEL RedHat 7-LVM RedHat:RHEL:7-LVM:latest RHEL latest
+x64 sles-15-sp3 SUSE gen2 SUSE:sles-15-sp3:gen2:latest SLES latest
+x64 UbuntuServer Canonical 18.04-LTS Canonical:UbuntuServer:18.04-LTS:latest UbuntuLTS latest
+x64 WindowsServer MicrosoftWindowsServer 2022-Datacenter MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest Win2022Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition-core MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest Win2022AzureEditionCore latest
+x64 WindowsServer MicrosoftWindowsServer 2019-Datacenter MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest Win2019Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2016-Datacenter MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest Win2016Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2012-R2-Datacenter MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest Win2012R2Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2012-Datacenter MicrosoftWindowsServer:WindowsServer:2012-Datacenter:latest Win2012Datacenter latest
+x64 WindowsServer MicrosoftWindowsServer 2008-R2-SP1 MicrosoftWindowsServer:WindowsServer:2008-R2-SP1:latest Win2008R2SP1 latest
+```
+
+A full list can be seen by adding the `--all` parameter. The image list can also be filtered by `--publisher` or `--offer`. In this example, the list is filtered for all images, published by OpenLogic, with an offer that matches *CentOS*.
+
+```azurecli-interactive
+az vm image list --offer CentOS --publisher OpenLogic --all --output table
+```
+
+Example partial output:
+
+```output
+Architecture Offer Publisher Sku Urn Version
+--------------  -------  -----------  ----------  ------------------------------------------  --------------
+x64 CentOS OpenLogic 8_2 OpenLogic:CentOS:8_2:8.2.2020111800 8.2.2020111800
+x64 CentOS OpenLogic 8_2-gen2 OpenLogic:CentOS:8_2-gen2:8.2.2020062401 8.2.2020062401
+x64 CentOS OpenLogic 8_2-gen2 OpenLogic:CentOS:8_2-gen2:8.2.2020100601 8.2.2020100601
+x64 CentOS OpenLogic 8_2-gen2 OpenLogic:CentOS:8_2-gen2:8.2.2020111801 8.2.2020111801
+x64 CentOS OpenLogic 8_3 OpenLogic:CentOS:8_3:8.3.2020120900 8.3.2020120900
+x64 CentOS OpenLogic 8_3 OpenLogic:CentOS:8_3:8.3.2021020400 8.3.2021020400
+x64 CentOS OpenLogic 8_3-gen2 OpenLogic:CentOS:8_3-gen2:8.3.2020120901 8.3.2020120901
+x64 CentOS OpenLogic 8_3-gen2 OpenLogic:CentOS:8_3-gen2:8.3.2021020401 8.3.2021020401
+x64 CentOS OpenLogic 8_4 OpenLogic:CentOS:8_4:8.4.2021071900 8.4.2021071900
+x64 CentOS OpenLogic 8_4-gen2 OpenLogic:CentOS:8_4-gen2:8.4.2021071901 8.4.2021071901
+x64 CentOS OpenLogic 8_5 OpenLogic:CentOS:8_5:8.5.2022012100 8.5.2022012100
+x64 CentOS OpenLogic 8_5 OpenLogic:CentOS:8_5:8.5.2022101800 8.5.2022101800
+x64 CentOS OpenLogic 8_5-gen2 OpenLogic:CentOS:8_5-gen2:8.5.2022012101 8.5.2022012101
+```
+++
+> [!NOTE]
+> Canonical has changed the **Offer** names they use for the most recent versions. Before Ubuntu 20.04, the **Offer** name is UbuntuServer. For Ubuntu 20.04 the **Offer** name is `0001-com-ubuntu-server-focal` and for Ubuntu 22.04 it's `0001-com-ubuntu-server-jammy`.
+
+To deploy a VM using a specific image, take note of the value in the *Urn* column, which consists of the publisher, offer, SKU, and optionally a version number to [identify](cli-ps-findimage.md#terminology) the image. When specifying the image, the image version number can be replaced with `latest`, which selects the latest version of the distribution. In this example, the `--image` parameter is used to specify the latest version of CentOS 8.5.
+
+```azurecli-interactive
+az vm create --resource-group myResourceGroupVM --name myVM2 --image OpenLogic:CentOS:8_5:latest --generate-ssh-keys
+```
+
+## Understand VM sizes
+
+A virtual machine size determines the amount of compute resources such as CPU, GPU, and memory that are made available to the virtual machine. Virtual machines need to be sized appropriately for the expected workload. If the workload increases, an existing virtual machine can be resized.
+
+### VM Sizes
+
+The following table categorizes sizes into use cases.
+
+| Type | Description |
+|--||
+| [General purpose](../sizes-general.md) | Balanced CPU-to-memory. Ideal for dev / test and small to medium applications and data solutions. |
+| [Compute optimized](../sizes-compute.md) | High CPU-to-memory. Good for medium traffic applications, network appliances, and batch processes. |
+| [Memory optimized](../sizes-memory.md) | High memory-to-core. Great for relational databases, medium to large caches, and in-memory analytics. |
+| [Storage optimized](../sizes-storage.md) | High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases. |
+| [GPU](../sizes-gpu.md) | Specialized VMs targeted for heavy graphic rendering and video editing. |
+| [High performance](../sizes-hpc.md) | Our most powerful CPU VMs with optional high-throughput network interfaces (RDMA). |
+++
+### Find available VM sizes
+
+To see a list of VM sizes available in a particular region, use the [az vm list-sizes](/cli/azure/vm) command.
+
+```azurecli-interactive
+az vm list-sizes --location eastus2 --output table
+```
+
+Example partial output:
+
+```output
+ MaxDataDiskCount MemoryInMb Name NumberOfCores OsDiskSizeInMb ResourceDiskSizeInMb
+------------------  ------------  -----------------  ---------------  ----------------  ----------------------
+4 8192 Standard_D2ds_v4 2 1047552 76800
+8 16384 Standard_D4ds_v4 4 1047552 153600
+16 32768 Standard_D8ds_v4 8 1047552 307200
+32 65536 Standard_D16ds_v4 16 1047552 614400
+32 131072 Standard_D32ds_v4 32 1047552 1228800
+32 196608 Standard_D48ds_v4 48 1047552 1843200
+32 262144 Standard_D64ds_v4 64 1047552 2457600
+4 8192 Standard_D2ds_v5 2 1047552 76800
+8 16384 Standard_D4ds_v5 4 1047552 153600
+16 32768 Standard_D8ds_v5 8 1047552 307200
+32 65536 Standard_D16ds_v5 16 1047552 614400
+32 131072 Standard_D32ds_v5 32 1047552 1228800
+32 196608 Standard_D48ds_v5 48 1047552 1843200
+32 262144 Standard_D64ds_v5 64 1047552 2457600
+32 393216 Standard_D96ds_v5 96 1047552 3686400
+```
+
+### Create VM with specific size
+
+In the previous VM creation example, a size was not provided, which results in a default size. A VM size can be selected at creation time using [az vm create](/cli/azure/vm) and the `--size` parameter.
+
+```azurecli-interactive
+az vm create \
+ --resource-group myResourceGroupVM \
+ --name myVM3 \
+ --image SuseSles15SP3 \
+ --size Standard_D2ds_v4 \
+ --generate-ssh-keys
+```
+
+### Resize a VM
+
+After a VM has been deployed, it can be resized to increase or decrease resource allocation. You can view the current size of a VM with [az vm show](/cli/azure/vm):
+
+```azurecli-interactive
+az vm show --resource-group myResourceGroupVM --name myVM --query hardwareProfile.vmSize
+```
+
+Before resizing a VM, check if the desired size is available on the current Azure cluster. The [az vm list-vm-resize-options](/cli/azure/vm) command returns the list of sizes.
+
+```azurecli-interactive
+az vm list-vm-resize-options --resource-group myResourceGroupVM --name myVM --query [].name
+```
+
+If the desired size is available, the VM can be resized from a powered-on state; however, it's rebooted during the operation. Use the [az vm resize]( /cli/azure/vm) command to perform the resize.
+
+```azurecli-interactive
+az vm resize --resource-group myResourceGroupVM --name myVM --size Standard_D4s_v3
+```
+
+If the desired size isn't available on the current cluster, the VM needs to be deallocated before the resize operation can occur. Use the [az vm deallocate]( /cli/azure/vm) command to stop and deallocate the VM. Note that when the VM is powered back on, any data on the temp disk may be removed. The public IP address also changes unless a static IP address is being used.
+
+```azurecli-interactive
+az vm deallocate --resource-group myResourceGroupVM --name myVM
+```
+
+Once deallocated, the resize can occur.
+
+```azurecli-interactive
+az vm resize --resource-group myResourceGroupVM --name myVM --size Standard_GS1
+```
+
+After the resize, the VM can be started.
+
+```azurecli-interactive
+az vm start --resource-group myResourceGroupVM --name myVM
+```
+
+## VM power states
+
+An Azure VM can have one of many power states. This state represents the current state of the VM from the standpoint of the hypervisor.
+
+### Power states
+
+| Power State | Description
+|-|-|
+| Starting | Indicates the virtual machine is being started. |
+| Running | Indicates that the virtual machine is running. |
+| Stopping | Indicates that the virtual machine is being stopped. |
+| Stopped | Indicates that the virtual machine is stopped. Virtual machines in the stopped state still incur compute charges. |
+| Deallocating | Indicates that the virtual machine is being deallocated. |
+| Deallocated | Indicates that the virtual machine is removed from the hypervisor but still available in the control plane. Virtual machines in the Deallocated state do not incur compute charges. |
+| - | Indicates that the power state of the virtual machine is unknown. |
+
+### Find the power state
+
+To retrieve the state of a particular VM, use the [az vm get-instance-view](/cli/azure/vm) command. Be sure to specify a valid name for a virtual machine and resource group.
+
+```azurecli-interactive
+az vm get-instance-view \
+ --name myVM \
+ --resource-group myResourceGroupVM \
+ --query instanceView.statuses[1] --output table
+```
+
+Output:
+
+```output
+Code Level DisplayStatus
+------------------  -------  ---------------
+PowerState/running Info VM running
+```
+
+To retrieve the power state of all the VMs in your subscription, use the [Virtual Machines - List All API](/rest/api/compute/virtualmachines/listall) with parameter **statusOnly** set to *true*.
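+
+As an alternative sketch using the Azure CLI, `az vm list` with the `--show-details` (`-d`) flag also returns the power state of each VM in the subscription:
+
+```azurecli-interactive
+az vm list --show-details --query "[].{Name:name, PowerState:powerState}" --output table
+```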
+
+## Management tasks
+
+During the life-cycle of a virtual machine, you may want to run management tasks such as starting, stopping, or deleting a virtual machine. Additionally, you may want to create scripts to automate repetitive or complex tasks. Using the Azure CLI, many common management tasks can be run from the command line or in scripts.
+
+### Get IP address
+
+This command returns the private and public IP addresses of a virtual machine.
+
+```azurecli-interactive
+az vm list-ip-addresses --resource-group myResourceGroupVM --name myVM --output table
+```
+
+### Stop virtual machine
+
+```azurecli-interactive
+az vm stop --resource-group myResourceGroupVM --name myVM
+```
+
+### Start virtual machine
+
+```azurecli-interactive
+az vm start --resource-group myResourceGroupVM --name myVM
+```
+
+### Deleting VM resources
+
+Depending on how you delete a VM, it may only delete the VM resource, not the networking and disk resources. You can change the default behavior to delete other resources when you delete the VM. For more information, see [Delete a VM and attached resources](../delete.md).
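+
+For example, a minimal sketch that deletes only the VM resource; with the default behavior, the attached disks and network resources remain in the resource group:
+
+```azurecli-interactive
+az vm delete --resource-group myResourceGroupVM --name myVM --yes
+```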
+
+Deleting a resource group also deletes all resources in the resource group, like the VM, virtual network, and disk. The `--no-wait` parameter returns control to the prompt without waiting for the operation to complete. The `--yes` parameter confirms that you wish to delete the resources without an additional prompt to do so.
+
+```azurecli-interactive
+az group delete --name myResourceGroupVM --no-wait --yes
+```
+
+## Next steps
+
+In this tutorial, you learned about basic VM creation and management such as how to:
+
+> [!div class="checklist"]
+> * Create and connect to a VM
+> * Select and use VM images
+> * View and use specific VM sizes
+> * Resize a VM
+> * View and understand VM state
+
+Advance to the next tutorial to learn about VM disks.
+
+> [!div class="nextstepaction"]
+> [Create and Manage VM disks](./tutorial-manage-disks.md)
virtual-machines Using Cloud Init https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/using-cloud-init.md
+ Title: Overview of cloud-init support for Linux VMs in Azure
+description: Overview of cloud-init capabilities to configure a VM at provisioning time in Azure.
+++++ Last updated : 12/21/2022++
+# cloud-init support for virtual machines in Azure
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+
+This article explains the support that exists for [cloud-init](https://cloudinit.readthedocs.io) to configure a virtual machine (VM) or Virtual Machine Scale Sets at provisioning time in Azure. These cloud-init configurations are run on first boot once the resources have been provisioned by Azure.
+
+VM provisioning is the process in which Azure passes down your VM create parameter values, such as hostname, username, and password, and makes them available to the VM as it boots. A 'provisioning agent' consumes those values, configures the VM, and reports back when it's done.
+
+Azure supports two provisioning agents: [cloud-init](https://cloudinit.readthedocs.io) and the [Azure Linux Agent (WALA)](../extensions/agent-linux.md).
+
+## cloud-init overview
+[cloud-init](https://cloudinit.readthedocs.io) is a widely used approach to customize a Linux VM as it boots for the first time. You can use cloud-init to install packages and write files, or to configure users and security. Because cloud-init is called during the initial boot process, there are no additional steps or required agents to apply your configuration. For more information on how to properly format your `#cloud-config` files or other inputs, see the [cloud-init documentation site](https://cloudinit.readthedocs.io/en/latest/topics/format.html#cloud-config-data). `#cloud-config` files are text files encoded in base64.
+
+cloud-init also works across distributions. For example, you don't use **apt-get install** or **yum install** to install a package. Instead you can define a list of packages to install. cloud-init automatically uses the native package management tool for the distro you select.
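+
+For example, the following sketch (run in a bash shell such as Cloud Shell) writes a minimal `#cloud-config` file that you could later pass with `--custom-data`. The file name and the `nginx` package are assumptions; use a package that exists in your distribution's repositories:
+
+```bash
+# Sketch: write a minimal cloud-config that upgrades packages, installs a package, and runs a command
+cat > cloud-config-example.txt <<'EOF'
+#cloud-config
+package_upgrade: true
+packages:
+  - nginx
+runcmd:
+  - echo "provisioned by cloud-init" > /etc/motd
+EOF
+```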
+
+We're actively working with our endorsed Linux distro partners to make cloud-init enabled images available in the Azure Marketplace.
+These images make your cloud-init deployments and configurations work seamlessly with VMs and virtual machine scale sets. First we collaborate with the endorsed Linux distro partners and upstream to ensure cloud-init functions with the OS on Azure; then the packages are updated and made publicly available in the distro package repositories.
+
+There are two stages to making cloud-init available to the supported Linux distributions on Azure: package support, and then image support:
+* 'cloud-init package support on Azure' documents which cloud-init package versions onward are supported or in preview, so you can use these packages with the OS in a custom image.
+* 'image cloud-init ready' documents whether the image is already configured to use cloud-init.
+
+### Canonical
+| Publisher / Version| Offer | SKU | Version | image cloud-init ready | cloud-init package support on Azure|
+|:--- |:--- |:--- |:--- |:--- |:--- |
+|Canonical 22.04 |UbuntuServer |22.04-LTS |latest |yes | yes |
+|Canonical 20.04 |UbuntuServer |20.04-LTS |latest |yes | yes |
+|Canonical 18.04 |UbuntuServer |18.04-LTS |latest |yes | yes |
++
+### RHEL
+| Publisher / Version| Offer | SKU | Version | image cloud-init ready | cloud-init package support on Azure|
+|:--- |:--- |:--- |:--- |:--- |:--- |
+|RedHat 7 |RHEL |7.7, 7.8, 7_9 |latest |yes | yes |
+|RedHat 8 |RHEL |8.1, 8.2, 8_3, 8_4 |latest |yes | yes |
+|RedHat 9 |RHEL |9_0, 9_1 |latest |yes | yes |
+
+* All other RedHat SKUs starting from RHEL 7 (version 7.7) and RHEL 8 (version 8.1) including both Gen1 and Gen2 images are provisioned using cloud-init. Cloud-init is not supported on RHEL 6.
++
+### CentOS
+ Publisher / Version| Offer | SKU | Version | image cloud-init ready | cloud-init package support on Azure|
+|:--- |:--- |:--- |:--- |:--- |:--- |
+|OpenLogic 7 |CentOS |7.7, 7.8, 7.9 |latest |yes | yes |
+|OpenLogic 8 |CentOS |8.1, 8.2, 8.3 |latest |yes | yes |
+
+* All other CentOS SKUs starting from CentOS 7 (version 7.7) and CentOS 8 (version 8.1) including both Gen1 and Gen2 images are provisioned using cloud-init. CentOS 6.10, 7.4, 7.5, and 7.6 images don't support cloud-init.
+
+> [!NOTE]
+> OpenLogic is now Rogue Wave Software
+++
+### Oracle
+
+ Publisher / Version| Offer | SKU | Version | image cloud-init ready | cloud-init package support on Azure|
+|:--- |:--- |:--- |:--- |:--- |:--- |
+|Oracle 7 |Oracle Linux |77, 78, ol79 |latest |yes | yes |
+|Oracle 8 |Oracle Linux |81, ol82, ol83-lvm, ol84-lvm |latest |yes | yes |
+
+* All other Oracle SKUs starting from Oracle 7 (version 7.7) and Oracle 8 (version 8.1) including both Gen1 and Gen2 images are provisioned using cloud-init.
++
+### SUSE SLES
+
+ Publisher / Version| Offer | SKU | Version | image cloud-init ready | cloud-init package support on Azure|
+|:--- |:--- |:--- |:--- |:--- |:--- |
+|SUSE 15 |SLES (SUSE Linux Enterprise Server) |sp1, sp2, sp3 |latest |yes | yes |
+|SUSE 12 |SLES (SUSE Linux Enterprise Server) |sp5 |latest |yes | yes |
+
+* All other SUSE SKUs starting from SLES 15 (sp1) and SLES 12 (sp5) including both Gen1 and Gen2 images are provisioned using cloud-init.
+* Additionally, the following images are also provisioned with cloud-init:
++
+ Publisher / Version| Offer | SKU / Version
+|:--- |:--- |:---
+|SUSE 12 |SLES (SUSE Linux Enterprise Server) |sles-{byos/sap/sap-byos}:12-sp4:2020.06.10
+|SUSE 12 |SLES (SUSE Linux Enterprise Server) |sles-{byos/sap/sap-byos}:12-sp3:2020.06.10
+|SUSE 12 |SLES (SUSE Linux Enterprise Server) |sles-{byos/sap/sap-byos}:12-sp2:2020.06.10
+|SUSE 15 |SLES (SUSE Linux Enterprise Server) |manager-proxy-4-byosgen1:2020.06.10
+|SUSE 15 |SLES (SUSE Linux Enterprise Server) |manager-server-4-byos:gen1:2020.06.10
++
+### Debian
+| Publisher / Version | Offer | SKU | Version | image cloud-init ready | cloud-init package support on Azure|
+|:--- |:--- |:--- |:--- |:--- |:--- |
+| debian (Gen1) |debian-10 | 10-cloudinit |10:0.20201013.422| yes | yes - support from package version: `20.2-2~deb10u1` |
+| debian (Gen2) |debian-10 | 10-cloudinit-gen2 |0.20201013.422| yes | yes - support from package version: `20.2-2~deb10u1` |
++
+Azure Stack currently supports the provisioning of cloud-init enabled images.
+
+## What is the difference between cloud-init and the Linux Agent (WALA)?
+WALA is an Azure platform-specific agent used to provision and configure VMs, and handle [Azure extensions](../extensions/features-linux.md).
+
+We're enhancing the task of configuring VMs to use cloud-init instead of the Linux Agent in order to allow existing cloud-init customers to use their current cloud-init scripts, and new customers to take advantage of the rich cloud-init configuration functionality. If you have existing investments in cloud-init scripts for configuring Linux systems, there are **no additional settings required** to enable cloud-init to process them.
+
+cloud-init can't process Azure extensions, so WALA is still required in the image to process extensions, but its provisioning code must be disabled. Endorsed Linux distro images that are being converted to provision with cloud-init have WALA installed and set up correctly.
+
+When creating a VM, if you don't include the Azure CLI `--custom-data` switch at provisioning time, cloud-init or WALA takes the minimal VM provisioning parameters required to provision the VM and completes the deployment with the defaults. If you reference the cloud-init configuration with the `--custom-data` switch, whatever is contained in your custom data is available to cloud-init when the VM boots.
+
+cloud-init configurations applied to VMs don't have time constraints and won't cause a deployment to fail by timing out. This isn't true for WALA: if you change the WALA defaults to process custom data, processing can't exceed the total VM provisioning time allowance of 40 minutes; if it does, the VM create fails.
+
+## cloud-init VM provisioning without a UDF driver
+Beginning with cloud-init 21.2, you can use cloud-init to provision a VM in Azure without a UDF driver. If a UDF driver isn't available in the image, cloud-init uses the metadata that's available in the Azure Instance Metadata Service to provision the VM. This option works only for SSH key and [user data](../user-data.md). To pass in a password or custom data to a VM during provisioning, you must use a UDF driver.
+
+## Deploying a cloud-init enabled Virtual Machine
+Deploying a cloud-init enabled virtual machine is as simple as referencing a cloud-init enabled distribution during deployment. Linux distribution maintainers have to choose to enable and integrate cloud-init into their base Azure published images. Once you've confirmed the image you want to deploy is cloud-init enabled, you can use the Azure CLI to deploy the image.
+
+The first step in deploying this image is to create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+The following example creates a resource group named *myResourceGroup* in the *eastus* location.
+
+```azurecli-interactive
+az group create --name myResourceGroup --location eastus
+```
+
+The next step is to create a file in your current shell named *cloud-init.txt*, and paste in the following configuration. For this example, create the file in the Cloud Shell, not on your local machine. You can use any editor of your choice. Enter `sensible-editor cloud-init.txt` to create the file and see a list of available editors. In this example, we're using the **nano** editor. Choose #1 to use the **nano** editor. Make sure that the whole cloud-init file is copied correctly, especially the first line:
+
+| SLES | Ubuntu | RHEL |
+|:--- |:--- |:--- |
+| ` # cloud-config `<br>` package_upgrade: true `<br>` packages: `<br>` - apache2 ` | ` # cloud-config `<br>` package_upgrade: true `<br>` packages: `<br>` - apache2 ` | ` # cloud-config `<br>` package_upgrade: true `<br>` packages: `<br>` - httpd ` |
+
+
+> [!NOTE]
+> cloud-init has multiple [input types](https://cloudinit.readthedocs.io/en/latest/topics/format.html). cloud-init uses the first line of the customData/userData to determine how it should process the input; for example, `#cloud-config` indicates that the content should be processed as a cloud-init config.
+
+Press <kbd>Ctrl + X</kbd> to exit the file, type <kbd>y</kbd> to save the file, and press <kbd>Enter</kbd> to confirm the file name on exit.
+
+The final step is to create a VM with the [az vm create](/cli/azure/vm) command.
+
+The following example creates a VM named `centos74` and creates SSH keys if they don't already exist in a default key location. To use a specific set of keys, use the `--ssh-key-value` option. Use the `--custom-data` parameter to pass in your cloud-init config file. Provide the full path to the *cloud-init.txt* config if you saved the file outside of your present working directory.
+
+```azurecli-interactive
+az vm create \
+ --resource-group myResourceGroup \
+ --name centos74 \
+ --image OpenLogic:CentOS-CI:7-CI:latest \
+ --custom-data cloud-init.txt \
+ --generate-ssh-keys
+```
+
+When the VM has been created, the Azure CLI shows information specific to your deployment. Take note of the `publicIpAddress`. This address is used to access the VM. It takes some time for the VM to be created, the packages to install, and the app to start. There are background tasks that continue to run after the Azure CLI returns you to the prompt. You can SSH into the VM and use the steps outlined in the Troubleshooting section to view the cloud-init logs.
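+
+For example, once you have the public IP address you can connect and confirm that cloud-init has finished. This is a sketch; replace the placeholders with your admin username and the address from the deployment output, and note that the `cloud-init status` command is available only in recent cloud-init versions:
+
+```bash
+ssh <admin-username>@<publicIpAddress>
+
+# On the VM, wait for cloud-init to finish and report its result
+cloud-init status --wait
+```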
+
+You can also deploy a cloud-init enabled VM by passing [parameters in an ARM template](../../azure-resource-manager/templates/deploy-cli.md#inline-parameters).
+
+## Troubleshooting cloud-init
+Once the VM has been provisioned, cloud-init runs through all the modules and scripts defined in `--custom-data` to configure the VM. If you need to troubleshoot any errors or omissions from the configuration, search for the module name (`disk_setup` or `runcmd`, for example) in the cloud-init log, located at **/var/log/cloud-init.log**.
+
+> [!NOTE]
+> Not every module failure results in a fatal cloud-init overall configuration failure. For example, using the `runcmd` module, if the script fails, cloud-init will still report provisioning succeeded because the runcmd module executed.
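+
+For example, to check whether a specific module ran and whether it logged warnings or errors, you can search the log on the VM (a sketch):
+
+```bash
+# Search the cloud-init log for a specific module and for warnings or errors
+sudo grep -i 'runcmd' /var/log/cloud-init.log
+sudo grep -iE 'warning|error|traceback' /var/log/cloud-init.log
+```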
+
+For more details of cloud-init logging, see the [cloud-init documentation](https://cloudinit.readthedocs.io/en/latest/development/logging.html).
+
+## Telemetry
+cloud-init collects usage data and sends it to Microsoft to help improve our products and services. Telemetry is only collected during the provisioning process (first boot of the VM). The data collected helps us investigate provisioning failures and monitor performance and reliability. The data collected doesn't include any personal identifiers. Read our [privacy statement](https://go.microsoft.com/fwlink/?LinkId=521839) to learn more. Some examples of the telemetry collected (this isn't an exhaustive list): OS-related information (cloud-init version, distro version, kernel version), performance metrics of essential VM provisioning actions (time to obtain a DHCP lease, time to retrieve the metadata necessary to configure the VM, and so on), the cloud-init log, and the dmesg log.
+
+Telemetry collection is currently enabled for most of our marketplace images that use cloud-init. It's enabled by specifying the KVP telemetry reporter for cloud-init. In most Azure Marketplace images, this configuration can be found in the file /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg. Removing this file during image preparation disables telemetry collection for any VM created from the image.
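+
+For example, when preparing a custom image you can remove that file to disable the KVP telemetry reporter for VMs created from the image (a sketch; the path is the one described above):
+
+```bash
+sudo rm /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg
+```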
+
+Sample content of *10-azure-kvp.cfg*:
+```yaml
+reporting:
+ logging:
+ type: log
+ telemetry:
+ type: hyperv
+```
+## Next steps
+
+[Troubleshoot issues with cloud-init](cloud-init-troubleshooting.md).
++
+For cloud-init examples of configuration changes, see the following documents:
+
+- [Add an additional Linux user to a VM](cloudinit-add-user.md)
+- [Run a package manager to update existing packages on first boot](cloudinit-update-vm.md)
+- [Change VM local hostname](cloudinit-update-vm-hostname.md)
+- [Install an application package, update configuration files and inject keys](tutorial-automate-vm-deployment.md)
virtual-machines M Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/m-series.md
# M-series
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets The M-series offers a high vCPU count (up to 128 vCPUs) and a large amount of memory (up to 3.8 TiB). It's also ideal for extremely large databases or other applications that benefit from high vCPU counts and large amounts of memory. M-series sizes are supported both on the Intel&reg; Xeon&reg; CPU E7-8890 v3 @ 2.50GHz and on the Intel&reg; Xeon&reg; Platinum 8280M (Cascade Lake).
virtual-machines Managed Disks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/managed-disks-overview.md
Azure managed disks are block-level storage volumes that are managed by Azure an
The available types of disks are ultra disks, premium solid-state drives (SSD), standard SSDs, and standard hard disk drives (HDD). For information about each individual disk type, see [Select a disk type for IaaS VMs](disks-types.md).
-Alternatively, you could use an Azure Elastic SAN Preview as your VM's storage. An Elastic SAN allows you to consolidate the storage for all your workloads into a single storage backend and can be more cost effective if you've a sizeable amount of large scale IO-intensive workloads and top tier databases. To learn more, see [What is Azure Elastic SAN? Preview](../storage/elastic-san/elastic-san-introduction.md)
+Alternatively, you could use an Azure Elastic SAN as your VM's storage. An Elastic SAN allows you to consolidate the storage for all your workloads into a single storage back end, and it can be more cost effective if you have a sizeable number of large-scale, IO-intensive workloads and top-tier databases. To learn more, see [What is Azure Elastic SAN?](../storage/elastic-san/elastic-san-introduction.md)
## Benefits of managed disks
virtual-machines Nc A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-a100-v4-series.md
Last updated 09/19/2023
# NC A100 v4-series
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets The NC A100 v4 series virtual machine (VM) is a new addition to the Azure GPU family. You can use this series for real-world Azure Applied AI training and batch inference workloads.
virtual-machines Ndm A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ndm-a100-v4-series.md
Last updated 03/13/2023
# NDm A100 v4-series
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets The NDm A100 v4 series virtual machine(VM) is a new flagship addition to the Azure GPU family. It's designed for high-end Deep Learning training and tightly coupled scale-up and scale-out HPC workloads.
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
# NP-series
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets The NP-series virtual machines are powered by [Xilinx U250](https://www.xilinx.com/products/boards-and-kits/alveo/u250.html) FPGAs for accelerating workloads including machine learning inference, video transcoding, and database search & analytics. NP-series VMs are also powered by Intel Xeon 8171M (Skylake) CPUs with all core turbo clock speed of 3.2 GHz.
virtual-machines Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-snapshot.md
Title: Create managed disk from snapshot (Linux) - CLI sample
description: Azure CLI Script Sample - restore a disk from a snapshot and learn about the performance impact of restoring managed disk snapshots
-tags: azure-service-management
ms.devlang: azurecli
virtual-machines Create Vm From Managed Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-managed-os-disks.md
editor: ramankum
-tags: azure-service-management
ms.devlang: azurecli
virtual-machines Create Vm From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-snapshot.md
editor: ramankum
-tags: azure-service-management
ms.devlang: azurecli
virtual-machines Virtual Machines Powershell Sample Copy Managed Disks Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-copy-managed-disks-vhd.md
description: Azure PowerShell script sample - Export/Copy the VHD of a managed
-tags: azure-service-management
virtual-machines Virtual Machines Powershell Sample Copy Snapshot To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-copy-snapshot-to-same-or-different-subscription.md
description: Azure PowerShell Script Sample - Copy (move) snapshot of a managed
-tags: azure-service-management
virtual-machines Setup Mpi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/setup-mpi.md
# Set up Message Passing Interface for HPC
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets The [Message Passing Interface (MPI)](https://en.wikipedia.org/wiki/Message_Passing_Interface) is an open library and de-facto standard for distributed memory parallelization. It is commonly used across many HPC workloads. HPC workloads on the [RDMA capable](sizes-hpc.md#rdma-capable-instances) [HB-series](sizes-hpc.md) and [N-series](sizes-gpu.md) VMs can use MPI to communicate over the low latency and high bandwidth InfiniBand network.
virtual-machines Sizes Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-hpc.md
# High performance computing VM sizes
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets > [!TIP]
virtual-machines Trusted Launch Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-faq.md
# Trusted Launch FAQ
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Frequently asked questions about trusted launch. Feature use cases, support for other Azure features, and fixes for common errors. ## Use cases
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Azure offers trusted launch as a seamless way to improve the security of [genera
- All public regions - All Azure Government regions
+- All Azure China regions
**Pricing**: Trusted launch does not increase existing VM pricing costs.
virtual-machines Cli Ps Findimage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/cli-ps-findimage.md
+
+ Title: Find and use marketplace purchase plan information using PowerShell
+description: Use Azure PowerShell to find image URNs and purchase plan parameters, like the publisher, offer, SKU, and version, for Marketplace VM images.
+++ Last updated : 03/17/2021++++
+# Find and use Azure Marketplace VM images with Azure PowerShell
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
++
+This article describes how to use Azure PowerShell to find VM images in the Azure Marketplace. You can then specify a Marketplace image and plan information when you create a VM.
+
+You can also browse available images and offers using the [Azure Marketplace](https://azuremarketplace.microsoft.com/) or the [Azure CLI](../linux/cli-ps-findimage.md).
+
+## Terminology
+
+A Marketplace image in Azure has the following attributes:
+
+* **Publisher**: The organization that created the image. Examples: Canonical, MicrosoftWindowsServer
+* **Offer**: The name of a group of related images created by a publisher. Examples: UbuntuServer, WindowsServer
+* **SKU**: An instance of an offer, such as a major release of a distribution. Examples: 18.04-LTS, 2019-Datacenter
+* **Version**: The version number of an image SKU.
+
+These values can be passed individually or as an image *URN*, combining the values separated by the colon (:). For example: *Publisher*:*Offer*:*Sku*:*Version*. You can replace the version number in the URN with `latest` to use the latest version of the image.
+
+If the image publisher provides other license and purchase terms, then you must accept those before you can use the image. For more information, see [Accept purchase plan terms](#accept-purchase-plan-terms).
+
+## Default Images
+
+PowerShell offers several predefined image aliases to make the resource creation process easier. There are different images for resources with either a Windows or Linux operating system. Several PowerShell cmdlets, such as `New-AzVM` and `New-AzVmss`, allow you to input the alias name as a parameter.
+For example:
+
+```powershell
+$rgname = <Resource Group Name>
+$location = <Azure Region>
+$vmName = "v" + $rgname
+$domainNameLabel = "d" + $rgname
+$securePassword = <Password> | ConvertTo-SecureString -AsPlainText -Force
+$username = <Username>
+$credential = New-Object System.Management.Automation.PSCredential ($username, $securePassword)
+New-AzVM -ResourceGroupName $rgname -Location $location -Name $vmName -image CentOS85Gen2 -Credential $credential -DomainNameLabel $domainNameLabel
+```
+
+The Linux image alias names and their details are:
+```output
+Alias Architecture Offer Publisher Sku Urn Version
+----- ------------ ----- --------- --- --- -------
+CentOS85Gen2 x64 CentOS OpenLogic 8_5-gen2 OpenLogic:CentOS:8_5-gen2:latest latest
+Debian11 x64 Debian-11 Debian 11-backports-gen2 Debian:debian-11:11-backports-gen2:latest latest
+FlatcarLinuxFreeGen2 x64 flatcar-container-linux-free kinvolk stable kinvolk:flatcar-container-linux-free:stable:latest latest
+OpenSuseLeap154Gen2 x64 opensuse-leap-15-4 SUSE gen2 SUSE:opensuse-leap-15-4:gen2:latest latest
+RHELRaw8LVMGen2 x64 RHEL RedHat 8-lvm-gen2 RedHat:RHEL:8-lvm-gen2:latest latest
+SLES x64 sles-15-sp3 SUSE gen2 SUSE:sles-15-sp3:gen2:latest latest
+Ubuntu2204 x64 0001-com-ubuntu-server-jammy Canonical 22_04-lts-gen2 Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest latest
+```
+
+The Windows image alias names and their details are:
+```output
+Alias Architecture Offer Publisher Sku Urn Version
+----- ------------ ----- --------- --- --- -------
+Win2022Datacenter x64 WindowsServer MicrosoftWindowsServer 2022-Datacenter MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest latest
+Win2022AzureEditionCore x64 WindowsServer MicrosoftWindowsServer 2022-datacenter-azure-edition-core MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition-core:latest latest
+Win10 x64 Windows MicrosoftVisualStudio Windows-10-N-x64 MicrosoftVisualStudio:Windows:Windows-10-N-x64:latest latest
+Win2019Datacenter x64 WindowsServer MicrosoftWindowsServer 2019-Datacenter MicrosoftWindowsServer:WindowsServer:2019-Datacenter:latest latest
+Win2016Datacenter x64 WindowsServer MicrosoftWindowsServer 2016-Datacenter MicrosoftWindowsServer:WindowsServer:2016-Datacenter:latest latest
+Win2012R2Datacenter x64 WindowsServer MicrosoftWindowsServer 2012-R2-Datacenter MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:latest latest
+Win2012Datacenter x64 WindowsServer MicrosoftWindowsServer 2012-Datacenter MicrosoftWindowsServer:WindowsServer:2012-Datacenter:latest latest
+```
+
+## List images
+
+You can use PowerShell to narrow down a list of images if you want to use a specific image that is not provided by default. Replace the values of the below variables to meet your needs.
+
+1. List the image publishers using [Get-AzVMImagePublisher](/powershell/module/az.compute/get-azvmimagepublisher).
+
+ ```powershell
+ $locName="<location>"
+ Get-AzVMImagePublisher -Location $locName | Select PublisherName
+ ```
+1. For a given publisher, list their offers using [Get-AzVMImageOffer](/powershell/module/az.compute/get-azvmimageoffer).
+
+ ```powershell
+ $pubName="<publisher>"
+ Get-AzVMImageOffer -Location $locName -PublisherName $pubName | Select Offer
+ ```
+1. For a given publisher and offer, list the SKUs available using [Get-AzVMImageSku](/powershell/module/az.compute/get-azvmimagesku).
+
+ ```powershell
+ $offerName="<offer>"
+ Get-AzVMImageSku -Location $locName -PublisherName $pubName -Offer $offerName | Select Skus
+ ```
+1. For a SKU, list the versions of the image using [Get-AzVMImage](/powershell/module/az.compute/get-azvmimage).
+
+ ```powershell
+ $skuName="<SKU>"
+ Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Sku $skuName | Select Version
+ ```
+ You can also use `latest` if you want to use the latest image and not a specific older version.
++
+Now you can combine the selected publisher, offer, SKU, and version into a URN (values separated by :). Pass this URN with the `-Image` parameter when you create a VM with the [New-AzVM](/powershell/module/az.compute/new-azvm) cmdlet. You can also replace the version number in the URN with `latest` to get the latest version of the image.
+
+If you deploy a VM with a Resource Manager template, then you must set the image parameters individually in the `imageReference` properties. See the [template reference](/azure/templates/microsoft.compute/virtualmachines).
++
+## View purchase plan properties
+
+Some VM images in the Azure Marketplace have additional license and purchase terms that you must accept before you can deploy them programmatically. You need to accept an image's terms only once per subscription.
+
+To view an image's purchase plan information, run the `Get-AzVMImage` cmdlet. If the `PurchasePlan` property in the output isn't `null`, the image has terms that you need to accept before programmatic deployment.
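+
+A minimal programmatic check might look like the following sketch (assuming the `$locName`, `$pubName`, `$offerName`, `$skuName`, and `$version` variables from the earlier steps):
+
+```powershell
+# Retrieve the image and inspect its PurchasePlan property (illustrative sketch)
+$image = Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version
+if ($null -ne $image.PurchasePlan) {
+    Write-Output "This image has purchase plan terms that you must accept before programmatic deployment."
+}
+```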
+
+For example, the *Windows Server 2019 Datacenter* image (publisher `MicrosoftWindowsServer`, offer `WindowsServer`, SKU `2019-Datacenter`) doesn't have additional terms, so the `PurchasePlan` information is `null`:
+
+```powershell
+$version = "2016.127.20170406"
+Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version
+```
+
+The output is similar to the following example:
+
+```output
+Id : /Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/MicrosoftWindowsServer/ArtifactTypes/VMImage/Offers/WindowsServer/Skus/2016-Datacenter/Versions/2019.0.20190115
+Location : westus
+PublisherName : MicrosoftWindowsServer
+Offer : WindowsServer
+Skus : 2019-Datacenter
+Version : 2019.0.20190115
+FilterExpression :
+Name : 2019.0.20190115
+OSDiskImage : {
+ "operatingSystem": "Windows"
+ }
+PurchasePlan : null
+DataDiskImages : []
+
+```
+
+The next example shows a similar command for the *Data Science Virtual Machine - Windows 2016* image, which has these `PurchasePlan` properties: `name`, `product`, and `publisher`. Some images also have a `promotion code` property. To deploy this image, see the following sections to accept the terms and enable programmatic deployment.
+
+```powershell
+Get-AzVMImage -Location "westus" -PublisherName "microsoft-ads" -Offer "windows-data-science-vm" -Skus "windows2016" -Version "0.2.02"
+```
+
+The output is similar to the following example:
+
+```output
+Id : /Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/microsoft-ads/ArtifactTypes/VMImage/Offers/windows-data-science-vm/Skus/windows2016/Versions/19.01.14
+Location : westus
+PublisherName : microsoft-ads
+Offer : windows-data-science-vm
+Skus : windows2016
+Version : 19.01.14
+FilterExpression :
+Name : 19.01.14
+OSDiskImage : {
+ "operatingSystem": "Windows"
+ }
+PurchasePlan : {
+ "publisher": "microsoft-ads",
+ "name": "windows2016",
+ "product": "windows-data-science-vm"
+ }
+DataDiskImages : []
+
+```
+
+To view the license terms, use the [Get-AzMarketplaceTerms](/powershell/module/az.marketplaceordering/get-azmarketplaceterms) cmdlet and pass in the purchase plan parameters. The output provides a link to the terms for the Marketplace image and shows whether you previously accepted the terms. Be sure to use all lowercase letters in the parameter values.
+
+```powershell
+Get-AzMarketplaceTerms -Publisher "microsoft-ads" -Product "windows-data-science-vm" -Name "windows2016"
+```
+
+The output will look similar to the following:
+
+```output
+Publisher : microsoft-ads
+Product : windows-data-science-vm
+Plan : windows2016
+LicenseTextLink : https://storelegalterms.blob.core.windows.net/legalterms/3E5ED_legalterms_MICROSOFT%253a2DADS%253a24WINDOWS%253a2DDATA%253a2DSCIENCE%253a2DVM%253a24WINDOWS2016%253a24OC5SKMQOXSED66BBSNTF4XRCS4XLOHP7QMPV54DQU7JCBZWYFP35IDPOWTUKXUC7ZAG7W6ZMDD6NHWNKUIVSYBZUTZ245F44SU5AD7Q.txt
+PrivacyPolicyLink : https://www.microsoft.com/EN-US/privacystatement/OnlineServices/Default.aspx
+Signature : 2UMWH6PHSAIM4U22HXPXW25AL2NHUJ7Y7GRV27EBL6SUIDURGMYG6IIDO3P47FFIBBDFHZHSQTR7PNK6VIIRYJRQ3WXSE6BTNUNENXA
+Accepted : False
+Signdate : 1/25/2019 7:43:00 PM
+```
+
+## Accept purchase plan terms
+
+Use the [Set-AzMarketplaceTerms](/powershell/module/az.marketplaceordering/set-azmarketplaceterms) cmdlet to accept or reject the terms. You need to accept the terms only once per subscription for the image. Be sure to use all lowercase letters in the parameter values.
+
+```powershell
+$agreementTerms = Get-AzMarketplaceTerms -Publisher "microsoft-ads" -Product "windows-data-science-vm" -Name "windows2016"
+
+Set-AzMarketplaceTerms -Publisher "microsoft-ads" -Product "windows-data-science-vm" -Name "windows2016" -Terms $agreementTerms -Accept
+```
+++
+```output
+Publisher : microsoft-ads
+Product : windows-data-science-vm
+Plan : windows2016
+LicenseTextLink : https://storelegalterms.blob.core.windows.net/legalterms/3E5ED_legalterms_MICROSOFT%253a2DADS%253a24WINDOWS%253a2DDATA%253a2DSCIENCE%253a2DV
+ M%253a24WINDOWS2016%253a24OC5SKMQOXSED66BBSNTF4XRCS4XLOHP7QMPV54DQU7JCBZWYFP35IDPOWTUKXUC7ZAG7W6ZMDD6NHWNKUIVSYBZUTZ245F44SU5AD7Q.txt
+PrivacyPolicyLink : https://www.microsoft.com/EN-US/privacystatement/OnlineServices/Default.aspx
+Signature : XXXXXXK3MNJ5SROEG2BYDA2YGECU33GXTD3UFPLPC4BAVKAUL3PDYL3KBKBLG4ZCDJZVNSA7KJWTGMDSYDD6KRLV3LV274DLBXXXXXX
+Accepted : True
+Signdate : 2/23/2018 7:49:31 PM
+```
++
+## Create a new VM from a marketplace image
+
+If you already know which image you want to use, you can pass that information to the [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage) cmdlet to add the image information to the VM configuration. See the other sections of this article for searching and listing the images available in the Marketplace.
+
+Some paid images also require that you provide purchase plan information by using the [Set-AzVMPlan](/powershell/module/az.compute/set-azvmplan) cmdlet.
+
+```powershell
+...
+
+$vmConfig = New-AzVMConfig -VMName "myVM" -VMSize Standard_D1
+
+# Set the Marketplace image information
+$publisherName = "microsoft-ads"
+$offerName = "windows-data-science-vm"
+$skuName = "windows2016"
+$version = "19.01.14"
+$vmConfig = Set-AzVMSourceImage -VM $vmConfig -PublisherName $publisherName -Offer $offerName -Skus $skuName -Version $version
+
+# Set the Marketplace purchase plan information, if the image requires it
+$productName = "windows-data-science-vm"
+$planName = "windows2016"
+$vmConfig = Set-AzVMPlan -VM $vmConfig -Publisher $publisherName -Product $productName -Name $planName
+
+...
+```
+
+You'll then pass the VM configuration along with the other configuration objects to the `New-AzVM` cmdlet. For a detailed example of using a VM configuration with PowerShell, see this [script](https://github.com/Azure/azure-docs-powershell-samples/blob/master/virtual-machine/create-vm-detailed/create-windows-vm-detailed.ps1).
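+
+For illustration only, a minimal sketch of that final step might look like this (it assumes you already built a network interface whose ID is in `$nic.Id`, and uses a placeholder resource group):
+
+```powershell
+# Add OS settings and networking to the configuration (assumed values), then create the VM
+$cred = Get-Credential
+$vmConfig = Set-AzVMOperatingSystem -VM $vmConfig -Windows -ComputerName "myVM" -Credential $cred
+$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id
+
+New-AzVM -ResourceGroupName "myResourceGroup" -Location "westus" -VM $vmConfig
+```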
+
+If you get a message about accepting the terms of the image, see the earlier section [Accept purchase plan terms](#accept-purchase-plan-terms).
+
+## Create a new VM from a VHD with purchase plan information
+
+If you have an existing VHD that was created using an Azure Marketplace image, you might need to supply the purchase plan information when you create a new VM from that VHD.
+
+If you still have the original VM, or another VM created from the same image, you can get the plan name, publisher, and product information from it using Get-AzVM. This example gets a VM named *myVM* in the *myResourceGroup* resource group and then displays the purchase plan information.
+
+```azurepowershell-interactive
+$vm = Get-AzVM `
+ -ResourceGroupName myResourceGroup `
+ -Name myVM
+$vm.Plan
+```
+
+If you didn't get the plan information before the original VM was deleted, you can file a [support request](https://portal.azure.com/#create/Microsoft.Support). The support request needs, at minimum, the VM name, the subscription ID, and the time stamp of the delete operation.
+
+To create a VM from a VHD, see [Create a VM from a specialized VHD](create-vm-specialized.md). Add a line to the VM configuration that sets the purchase plan information by using [Set-AzVMPlan](/powershell/module/az.compute/set-azvmplan), similar to the following example:
+
+```azurepowershell-interactive
+$vmConfig = Set-AzVMPlan `
+ -VM $vmConfig `
+ -Publisher "publisherName" `
+ -Product "productName" `
+ -Name "planName"
+```
++
+## Next steps
+
+To create a virtual machine quickly with the `New-AzVM` cmdlet by using basic image information, see [Create a Windows virtual machine with PowerShell](quick-create-powershell.md).
+
+For more information on using Azure Marketplace images to create custom images in an Azure Compute Gallery (formerly known as Shared Image Gallery), see [Supply Azure Marketplace purchase plan information when creating images](../marketplace-images.md).
virtual-machines Install Openframe Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md
# Install TmaxSoft OpenFrame on Azure
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly.
+ Learn how to set up an OpenFrame environment on Azure suitable for development, demos, testing, or production workloads. This tutorial walks you through each step. OpenFrame includes multiple components that create the mainframe emulation environment on Azure. For example, OpenFrame online services replace the mainframe middleware such as IBM Customer Information Control System (CICS), and OpenFrame Batch, with its TJES component, replaces the IBM mainframe's Job Entry Subsystem (JES).
virtual-machines Oracle Database Backup Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-strategies.md
# Backup strategies for Oracle Database on an Azure Linux VM
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and plan accordingly.
+ **Applies to:** :heavy_check_mark: Linux VMs Database backups help protect the database against data loss that's due to storage component failure and datacenter failure. They can also be a means of recovery from human error and a way to clone a database for development or testing purposes.
virtual-network Create Peering Different Deployment Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-deployment-models.md
description: Learn how to create a virtual network peering between virtual netwo
-tags: azure-resource-manager
virtual-network Diagnose Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-routing-problem.md
description: Learn how to diagnose a virtual machine routing problem by viewing
-tags: azure-resource-manager
Last updated 05/30/2018
virtual-network Diagnose Network Traffic Filter Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/diagnose-network-traffic-filter-problem.md
description: Learn how to diagnose a virtual machine network traffic filter prob
-tags: azure-resource-manager
ms.assetid: a54feccf-0123-4e49-a743-eb8d0bdd1ebc
virtual-network Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-portal.md
Last updated 06/06/2023 + #Customer intent: I want to use the Azure portal to create a virtual network so that virtual machines can communicate privately with each other and with the internet.
This quickstart shows you how to create a virtual network by using the Azure portal. You then create two virtual machines (VMs) in the network, deploy Azure Bastion to securely connect to the VMs from the internet, and communicate privately between the VMs. + A virtual network is the fundamental building block for private networks in Azure. Azure Virtual Network enables Azure resources like VMs to securely communicate with each other and the internet.
+>[!VIDEO https://learn-video.azurefd.net/vod/player?id=6b5b138e-8406-406e-8b34-40bdadf9fc6d]
++ ## Prerequisites
virtual-network Tutorial Connect Virtual Networks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-cli.md
Title: Connect virtual networks with VNet peering - Azure CLI
description: In this article, you learn how to connect virtual networks with virtual network peering, using the Azure CLI.
-tags: azure-resource-manager
ms.devlang: azurecli
virtual-network Tutorial Connect Virtual Networks Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-powershell.md
Title: Connect virtual networks with VNet peering - Azure PowerShell
description: In this article, you learn how to connect virtual networks with virtual network peering, using Azure PowerShell.
-tags: azure-resource-manager
virtual-network
virtual-network Tutorial Create Route Table Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-cli.md
description: In this article, learn how to route network traffic with a route ta
-tags: azure-resource-manager
ms.devlang: azurecli
virtual-network Tutorial Create Route Table Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-powershell.md
description: In this article, learn how to route network traffic with a route ta
-tags: azure-resource-manager
virtual-network
virtual-network Tutorial Filter Network Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-cli.md
description: In this article, you learn how to filter network traffic to a subne
-tags: azure-resource-manager
ms.devlang: azurecli
virtual-network Tutorial Filter Network Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-powershell.md
Title: Filter network traffic - Azure PowerShell
description: In this article, you learn how to filter network traffic to a subnet, with a network security group, using PowerShell.
-tags: azure-resource-manager
virtual-network
virtual-network Tutorial Restrict Network Access To Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-cli.md
description: In this article, you learn how to limit and restrict network access
-tags: azure-resource-manager
ms.devlang: azurecli
virtual-network Tutorial Restrict Network Access To Resources Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-powershell.md
description: In this article, you learn how to limit and restrict network access
-tags: azure-resource-manager
Last updated 03/14/2018
virtual-network Tutorial Tap Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-tap-virtual-network-cli.md
description: Learn how to create, change, or delete a virtual network TAP using
-tags: azure-resource-manager
Last updated 03/18/2018
virtual-network Virtual Network Manage Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md
Title: Create, change, or delete an Azure virtual network peering description: Learn how to create, change, or delete a virtual network peering. With virtual network peering, you connect virtual networks in the same region and across regions.
-tags: azure-resource-manager
Last updated 08/24/2023
virtual-network Virtual Network Network Interface Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface-vm.md
description: Learn how to add network interfaces to or remove network interfaces
-tags: azure-resource-manager
Last updated 11/16/2022
virtual-network Virtual Network Service Endpoint Policies Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-cli.md
Title: Restrict data exfiltration to Azure Storage - Azure CLI
description: In this article, you learn how to limit and restrict virtual network data exfiltration to Azure Storage resources with virtual network service endpoint policies using the Azure CLI.
-tags: azure-resource-manager
ms.devlang: azurecli
virtual-network Virtual Network Service Endpoint Policies Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-powershell.md
description: In this article, you learn how to limit and restrict virtual networ
-tags: azure-resource-manager
Last updated 02/03/2020
virtual-network Virtual Network Troubleshoot Cannot Delete Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-cannot-delete-vnet.md
description: Learn how to troubleshoot the issue in which you cannot delete a vi
-tags: azure-resource-manager
Last updated 10/31/2018
virtual-network Virtual Network Troubleshoot Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-nva.md
description: Troubleshoot Network Virtual Appliance (NVA) issues in Azure and va
-tags: azure-resource-manager
Last updated 10/26/2018
virtual-network Virtual Network Troubleshoot Peering Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-troubleshoot-peering-issues.md
description: Steps to help resolve most virtual network peering issues.
-tags: virtual-network
ms.assetid: 1a3d1e84-f793-41b4-aa04-774a7e8f7719
virtual-network Virtual Networks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-overview.md
Azure Virtual Network is a service that provides the fundamental building block
A virtual network is similar to a traditional network that you'd operate in your own datacenter. But it brings extra benefits of the Azure infrastructure, such as scale, availability, and isolation.
+> [!VIDEO https://learn-video.azurefd.net/vod/player?id=6b5b138e-8406-406e-8b34-40bdadf9fc6d]
+ ## Why use an Azure virtual network? Key scenarios that you can accomplish with a virtual network include:
virtual-wan Global Hub Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/global-hub-profile.md
By default, every hub that uses the same User VPN Configuration is included in t
### Global profile best practices
-#### Add multiple server validation certificates
+#### Add Multiple server validation certificates
This section pertains to connections that use the OpenVPN tunnel type and the Azure VPN Client version 2.1963.44.0 or higher. When you configure a hub P2S gateway, Azure assigns an internal certificate to the gateway. This is different from the root certificate information that you specify when you want to use Certificate Authentication as your authentication method. The internal certificate that is assigned to the hub is used for all authentication types. This value is represented in the profile configuration files that you generate as *servervalidation/cert/hash*. The VPN client uses this value as part of the connection process.
-If you have multiple hubs in different geographic regions, each hub can use a different Azure-level server validation certificate. However, the global profile only contains the server validation certificate hash value for 1 of the hubs. This means that if the certificate for that hub isn't working properly for any reason, the client doesn't have the necessary server validation certificate hash value for the other hubs.
+If you have multiple hubs in different geographic regions, each hub can use a different Azure-level server validation certificate. The global profile contains the server validation certificate hash value for all of the hubs. This means that if the certificate for that hub isn't working properly for any reason, the client will still have the necessary server validation certificate hash value for the other hubs.
-As a best practice, we recommend that you update your VPN client profile configuration file to include the certificate hash value of all the hubs that are attached to the global profile, and then configure the Azure VPN Client using the updated file.
+> [!IMPORTANT]
+> Configuring the Azure VPN Client with the certificate hash values of all the hubs is required only if the hubs have different server root issuers.
-1. To view the server validation certificate hash for each hub, generate and download the [hub profile](#hub) files for each of the hubs that are part of the global profile. Use a text editor to view profile information contained in the **azurevpnconfig.xml** file. This file is typically found in the **AzureVPN** folder. Note the server validation certificate hash for each hub.
+As a best practice, we recommend that you update your VPN client profile configuration file to include the certificate hash value of all the hubs that are attached to the global profile, and then configure the Azure VPN Client using the updated file.
1. Generate and download the [global profile](#global) files. Use a text editor to open the **azurevpnconfig.xml** file.
-1. Using the following xml example, configure the global profile file to include the server validation certificate hashes from the hubs that you want to include. Configure the Azure VPN Client using the edited profile configuration file.
+1. Using the following XML example as a guide, configure the Azure VPN Client with the global profile configuration file that contains the server validation certificate hash for each hub.
```xml </protocolconfig>
virtual-wan Monitor Virtual Wan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan-reference.md
description: Learn about Azure Virtual WAN logs and metrics using Azure Monitor.
Previously updated : 06/08/2022 Last updated : 02/15/2024
-# Monitoring Virtual WAN data reference
+# Monitoring Virtual WAN - Data reference
This article provides a reference of log and metric data collected to analyze the performance and availability of Virtual WAN. See [Monitoring Virtual WAN](monitor-virtual-wan.md) for instructions and additional context on monitoring data for Virtual WAN.
The following metric is available for virtual hub router within a virtual hub:
| Metric | Description| | | |
-| **Virtual Hub Data Processed** | Data on how much traffic traverses the virtual hub router in a given time period. Note that only the following flows use the virtual hub router: VNet to VNet (same hub and interhub) and VPN/ExpressRoute branch to VNet (interhub). If a virtual hub is secured with routing intent, then these flows will traverse the firewall instead of the hub router. |
+| **Virtual Hub Data Processed** | Data on how much traffic traverses the virtual hub router in a given time period. Only the following flows use the virtual hub router: VNet to VNet (same hub and interhub) and VPN/ExpressRoute branch to VNet (interhub). If a virtual hub is secured with routing intent, then these flows traverse the firewall instead of the hub router. |
#### PowerShell steps
$MetricInformation.Data
* **Time Grain** - Refers to the frequency at which you want to see the aggregation. In the current command, you'll see a selected aggregated unit per 5 minutes. You can select 5M/15M/30M/1H/6H/12H and 1D.
-* **Start Time and End Time** - This time is based on UTC, so please ensure that you're entering UTC values when inputting these parameters. If these parameters aren't used, the past one hour's worth of data is shown by default.
+* **Start Time and End Time** - This time is based on UTC. Ensure that you're entering UTC values when inputting these parameters. If these parameters aren't used, the past one hour's worth of data is shown by default.
-* **Sum Aggregation Type** - This aggregation type will show you the total number of bytes that traversed the virtual hub router during a selected time period. The **Max** and **Min** aggregation types are not meaningful.
+* **Sum Aggregation Type** - This aggregation type shows you the total number of bytes that traversed the virtual hub router during a selected time period. The **Max** and **Min** aggregation types aren't meaningful.
### <a name="s2s-metrics"></a>Site-to-site VPN gateway metrics
The following metrics are available for Virtual WAN point-to-site VPN gateways:
| Metric | Description| | | | | **Gateway P2S Bandwidth** | Average point-to-site aggregate bandwidth of a gateway in bytes per second. |
-| **P2S Connection Count** |Point-to-site connection count of a gateway. To ensure you're viewing accurate Metrics in Azure Monitor, select the **Aggregation Type** for **P2S Connection Count** as **Sum**. You may also select **Max** if you split By **Instance**. |
+| **P2S Connection Count** |Point-to-site connection count of a gateway. To ensure you're viewing accurate Metrics in Azure Monitor, select the **Aggregation Type** for **P2S Connection Count** as **Sum**. You can also select **Max** if you split By **Instance**. |
| **User VPN Routes Count** | Number of User VPN Routes configured on the VPN gateway. This metric can be broken down into **Static** and **Dynamic** Routes. ### <a name="er-metrics"></a>Azure ExpressRoute gateway metrics
The following diagnostics are available for Virtual WAN site-to-site VPN gateway
| Metric | Description| | | | | **Gateway Diagnostic Logs** | Gateway-specific diagnostics such as health, configuration, service updates, and additional diagnostics.|
-| **Tunnel Diagnostic Logs** | These are IPsec tunnel-related logs such as connect and disconnect events for a site-to-site IPsec tunnel, negotiated SAs, disconnect reasons, and additional diagnostics. For connect and disconnect events, these logs will also display the remote IP address of the corresponding on-premises VPN device.|
+| **Tunnel Diagnostic Logs** | These are IPsec tunnel-related logs such as connect and disconnect events for a site-to-site IPsec tunnel, negotiated SAs, disconnect reasons, and additional diagnostics. For connect and disconnect events, these logs also display the remote IP address of the corresponding on-premises VPN device.|
| **Route Diagnostic Logs** | These are logs related to events for static routes, BGP, route updates, and additional diagnostics. | | **IKE Diagnostic Logs** | IKE-specific diagnostics for IPsec connections. |
In Azure Virtual WAN, ExpressRoute gateway metrics can be exported as logs via a
### Log Analytics sample query
-If you selected to send diagnostic data to a Log Analytics Workspace, then you can use SQL-like queries such as the example below to examine the data. For more information, see [Log Analytics Query Language](/services-hub/health/log-analytics-query-language).
+If you selected to send diagnostic data to a Log Analytics Workspace, then you can use SQL-like queries, such as the following example, to examine the data. For more information, see [Log Analytics Query Language](/services-hub/health/log-analytics-query-language).
The following example contains a query to obtain site-to-site route diagnostics. `AzureDiagnostics | where Category == "RouteDiagnosticLog"`
-Replace the values below, after the **= =**, as needed based on the tables reported in the previous section of this article.
+Replace the following values, after the **= =**, as needed based on the tables reported in the previous section of this article.
* "GatewayDiagnosticLog" * "IKEDiagnosticLog"
In order to execute the query, you have to open the Log Analytics resource you c
:::image type="content" source="./media/monitor-virtual-wan-reference/log-analytics-query-samples.png" alt-text="Screenshot of Log Analytics Query samples." lightbox="./media/monitor-virtual-wan-reference/log-analytics-query-samples.png":::
-For Azure Firewall, a [workbook](../firewall/firewall-workbook.md) is provided to make log analysis easier. Using its graphical interface, it will be possible to investigate the diagnostic data without manually writing any Log Analytics query.
+For Azure Firewall, a [workbook](../firewall/firewall-workbook.md) is provided to make log analysis easier. Using its graphical interface, you can investigate the diagnostic data without manually writing any Log Analytics query.
## <a name="activity-logs"></a>Activity logs
For more information on the schema of Activity Log entries, see [Activity Log sc
For detailed description of the top-level diagnostic logs schema, see [Supported services, schemas, and categories for Azure Diagnostic Logs](../azure-monitor/essentials/resource-logs-schema.md).
-When reviewing any metrics through Log Analytics, the output will contain the following columns:
+When reviewing any metrics through Log Analytics, the output contains the following columns:
|**Column**|**Type**|**Description**| | | | |
When reviewing any metrics through Log Analytics, the output will contain the fo
## <a name="azure-firewall"></a>Monitoring secured hub (Azure Firewall)
-If you have chosen to secure your virtual hub using Azure Firewall, relevant logs and metrics are available here: [Azure Firewall logs and metrics](../firewall/logs-and-metrics.md).
+If you chose to secure your virtual hub using Azure Firewall, relevant logs and metrics are available here: [Azure Firewall logs and metrics](../firewall/logs-and-metrics.md).
You can monitor the Secured Hub using Azure Firewall logs and metrics. You can also use activity logs to audit operations on Azure Firewall resources. For every Azure Virtual WAN you secure and convert to a Secured Hub, an explicit firewall resource object is created in the resource group where the hub is located.
For every Azure Virtual WAN you secure and convert to a Secured Hub, an explicit
## Next steps * To learn how to monitor Azure Firewall logs and metrics, see [Tutorial: Monitor Azure Firewall logs](../firewall/firewall-diagnostics.md).
-* For additional information about Virtual WAN monitoring, see [Monitoring Azure Virtual WAN](monitor-virtual-wan.md).
+* For more information about Virtual WAN monitoring, see [Monitoring Azure Virtual WAN](monitor-virtual-wan.md).
* To learn more about metrics in Azure Monitor, see [Metrics in Azure Monitor](../azure-monitor/essentials/data-platform-metrics.md).
virtual-wan Monitor Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan.md
Previously updated : 07/28/2023 Last updated : 02/15/2024 # Monitoring Azure Virtual WAN
When you have critical applications and business processes relying on Azure reso
This article describes the monitoring data generated by Azure Virtual WAN. Virtual WAN uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+## Prerequisites
+
+You have a virtual WAN deployed and configured. For help with deploying a virtual WAN:
-## Prerequisites
-You have created a Virtual WAN setup. For help in deploying a Virtual WAN:
* [Creating a site-to-site connection](virtual-wan-site-to-site-portal.md) * [Creating a User VPN (point-to-site) connection](virtual-wan-point-to-site-portal.md) * [Creating an ExpressRoute connection](virtual-wan-expressroute-portal.md) * [Creating an NVA in a virtual hub](how-to-nva-hub.md) * [Installing Azure Firewall in a Virtual hub](howto-firewall.md)
-## Analyzing metrics
+## Analyzing metrics
Metrics in Azure Monitor are numerical values that describe some aspect of a system at a particular time. Metrics are collected every minute, and are useful for alerting because they can be sampled frequently. An alert can be fired quickly with relatively simple logic.
The following steps help you locate and view metrics:
:::image type="content" source="./media/monitor-virtual-wan-reference/metrics-page.png" alt-text="Screenshot that shows the 'Metrics' page with the categories highlighted." lightbox="./media/monitor-virtual-wan-reference/metrics-page.png":::
-1. To see metrics for the virtual hub router, you can select **Metrics** from the virtual hub **Overview** blade.
+1. To see metrics for the virtual hub router, you can select **Metrics** from the virtual hub **Overview** page.
## Analyzing logs
The following steps help you create, edit, and view diagnostic settings:
:::image type="content" source="./media/monitor-virtual-wan-reference/view-hub-gateway-logs.png" alt-text="Screenshot for Select View in Azure Monitor for Logs." lightbox="./media/monitor-virtual-wan-reference/view-hub-gateway-logs.png":::
-1. In this page, you can create a new diagnostic setting (**+Add diagnostic setting**) or edit an existing one (**Edit setting**). You can choose to send the diagnostic logs to Log Analytics (as shown in the example below), stream to an event hub, send to a 3rd-party solution, or archive to a storage account.
+1. On this page, you can create a new diagnostic setting (**+Add diagnostic setting**) or edit an existing one (**Edit setting**). You can choose to send the diagnostic logs to Log Analytics (as shown in the following example), stream to an event hub, send to a third-party solution, or archive to a storage account.
:::image type="content" source="./media/monitor-virtual-wan-reference/select-gateway-settings.png" alt-text="Screenshot for Select Diagnostic Log settings." lightbox="./media/monitor-virtual-wan-reference/select-gateway-settings.png"::: 1. After clicking **Save**, you should start seeing logs appear in this log analytics workspace within a few hours.
To see a list of monitoring best practices when configuring alerts, see [Monitor
## Virtual WAN Insights
-Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "Insights".
+Some services in Azure have a special focused prebuilt monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "Insights".
-Virtual WAN uses Network Insights to provide users and operators with the ability to view the state and status of a Virtual WAN, presented via an autodiscovered topological map. Resource state and status overlays on the map give you a snapshot view of the overall health of the Virtual WAN. You can navigate resources on the map via one-click access to the resource configuration pages of the Virtual WAN portal. For more information, see [Azure Monitor Network Insights for Virtual WAN](azure-monitor-insights.md).
+Virtual WAN uses Network Insights to provide users and operators with the ability to view the state and status of a virtual WAN, presented via an autodiscovered topological map. Resource state and status overlays on the map give you a snapshot view of the overall health of the virtual WAN. You can navigate resources on the map via one-click access to the resource configuration pages of the Virtual WAN portal. For more information, see [Azure Monitor Network Insights for Virtual WAN](azure-monitor-insights.md).
## Next steps
-* See [Monitoring Virtual WAN data reference](monitor-virtual-wan-reference.md) for a data reference of the metrics, logs, and other important values created by Virtual WAN.
+* See [Monitoring Virtual WAN - Data reference](monitor-virtual-wan-reference.md) for a data reference of the metrics, logs, and other important values created by Virtual WAN.
* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
-* See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for additional details on **Azure Monitor Metrics**.
+* See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for more details on **Azure Monitor Metrics**.
* See [All resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md) for a list of all supported metrics.
-* See [Create diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for more information and troubleshooting on creating diagnostic settings via Azure portal, CLI, PowerShell, etc., you can visit
+* See [Create diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for more information and troubleshooting for creating diagnostic settings via Azure portal, CLI, PowerShell, etc.
vpn-gateway Monitor Vpn Gateway Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/monitor-vpn-gateway-reference.md
-# Monitoring VPN Gateway data reference
+# Monitoring VPN Gateway - Data reference
This article provides a reference of log and metric data collected to analyze the performance and availability of VPN Gateway. See [Monitoring VPN Gateway](monitor-vpn-gateway.md) for details on collecting and analyzing monitoring data for VPN Gateway.
Metrics in Azure Monitor are numerical values that describe some aspect of a sys
| **Tunnel QMSA Count** | Count | 5 minutes | Number of quick mode security associations present. | | **Tunnel Total Flow Count** | Count | 5 minutes | Number of distinct flows created per tunnel. | | **User Vpn Route Count** | Count | 5 minutes | Number of user VPN routes configured on the VPN Gateway. |
-| **VNet Address Prefix Count** | Count | 5 minutes | Number of VNet address prefixes that are used/advertised by the gateway. |
+| **VNet Address Prefix Count** | Count | 5 minutes | Number of virtual network address prefixes that are used/advertised by the gateway. |
## Resource logs
The following resource logs are available in Azure:
## Next steps
-* For additional information about VPN Gateway monitoring, see [Monitoring Azure VPN Gateway](monitor-vpn-gateway.md).
+* For more information about VPN Gateway monitoring, see [Monitoring Azure VPN Gateway](monitor-vpn-gateway.md).
* To learn more about metrics in Azure Monitor, see [Metrics in Azure Monitor](../azure-monitor/essentials/data-platform-metrics.md).
vpn-gateway Vpn Gateway Peering Gateway Transit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-peering-gateway-transit.md
Previously updated : 08/18/2023 Last updated : 02/15/2024 # Configure VPN gateway transit for virtual network peering
-This article helps you configure gateway transit for virtual network peering. [Virtual network peering](../virtual-network/virtual-network-peering-overview.md) seamlessly connects two Azure virtual networks, merging the two virtual networks into one for connectivity purposes. [Gateway transit](../virtual-network/virtual-network-peering-overview.md#gateways-and-on-premises-connectivity) is a peering property that lets one virtual network use the VPN gateway in the peered virtual network for cross-premises or VNet-to-VNet connectivity. The following diagram shows how gateway transit works with virtual network peering.
+This article helps you configure gateway transit for virtual network peering. [Virtual network peering](../virtual-network/virtual-network-peering-overview.md) seamlessly connects two Azure virtual networks, merging the two virtual networks into one for connectivity purposes. [Gateway transit](../virtual-network/virtual-network-peering-overview.md#gateways-and-on-premises-connectivity) is a peering property that lets one virtual network use the VPN gateway in the peered virtual network for cross-premises or VNet-to-VNet connectivity.
+The following diagram shows how gateway transit works with virtual network peering. In the diagram, gateway transit allows the peered virtual networks to use the Azure VPN gateway in Hub-RM. Connectivity available on the VPN gateway, including S2S, P2S, and VNet-to-VNet connections, applies to all three virtual networks.
-In the diagram, gateway transit allows the peered virtual networks to use the Azure VPN gateway in Hub-RM. Connectivity available on the VPN gateway, including S2S, P2S, and VNet-to-VNet connections, applies to all three virtual networks.
-The transit option is available for peering between the same, or different deployment models. If you're configuring transit between different deployment models, the hub virtual network and virtual network gateway must be in the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), not the legacy classic deployment model.
->
+The transit option is available for peering between the same, or different deployment models and can be used with all VPN Gateway SKUs except the Basic SKU. If you're configuring transit between different deployment models, the hub virtual network and virtual network gateway must be in the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), not the legacy classic deployment model.
In hub-and-spoke network architecture, gateway transit allows spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network. Routes to the gateway-connected virtual networks or on-premises networks propagate to the routing tables for the peered virtual networks using gateway transit.
web-application-firewall Create Waf Policy Ag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/create-waf-policy-ag.md
First, create a basic WAF policy with a managed Default Rule Set (DRS) using the
## Configure WAF rules (optional)
-When you create a WAF policy, by default it is in *Detection* mode. In Detection mode, WAF doesn't block any requests. Instead, the matching WAF rules are logged in the WAF logs. To see WAF in action, you can change the mode settings to *Prevention*. In Prevention mode, matching rules defined in the CRS Ruleset you selected are blocked and/or logged in the WAF logs.
+When you create a WAF policy, by default it's in *Detection* mode. In Detection mode, WAF doesn't block any requests. Instead, the matching WAF rules are logged in the WAF logs. To see WAF in action, you can change the mode settings to *Prevention*. In Prevention mode, requests that match rules defined in the Microsoft Managed Rulesets you selected are blocked and/or logged in the WAF logs.
## Managed rules
web-application-firewall Geomatch Custom Rules Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/geomatch-custom-rules-examples.md
This article introduces Azure WAF geomatch custom rules and shows you how to cre
Geomatch custom rules enable you to meet diverse security goals, such as blocking requests from high-risk areas and permitting requests from trusted locations. They're particularly effective in mitigating distributed denial-of-service (DDoS) attacks, which seek to inundate your web application with a multitude of requests from various sources. With geomatch custom rules, you can promptly pinpoint and block regions generating the most DDoS traffic, while still granting access to legitimate users. In this article, you learn about various custom rule patterns that you can employ to optimize your Azure WAF using geomatch custom rules.
-## Scenario 1: Block traffic from all countries except "x"
+## Scenario 1 - Block traffic from all countries except "x"
Geomatch custom rules prove useful when you aim to block traffic from all countries, barring one. For instance, if your web application caters exclusively to users in the United States, you can formulate a geomatch custom rule that obstructs all requests not originating from the US. This strategy effectively minimizes your web application's attack surface and deters unauthorized access from other regions. This specific technique employs a negating condition to facilitate this traffic pattern. For creating a geomatch custom rule that obstructs traffic from all countries except the US, refer to the following portal, Bicep, and PowerShell examples: